Background

Established by the Communications Act of 1934, FCC regulates interstate and international communications by radio, television, wire, satellite, and cable in all 50 states, the District of Columbia, and U.S. territories. FCC is responsible for, among other things, making available a nationwide and worldwide wire and radio communication service. More recently, it has been responsible for promoting competition and reducing regulation of the telecommunications industry in order to secure lower prices and higher quality services for consumers. FCC’s functions include: issuing licenses for broadcast television and radio; overseeing licensing, enforcement, and regulatory functions of carriers of cellular phones and other personal communication services; regulating the use of radio spectrum and conducting auctions of licenses for spectrum; investigating complaints and taking enforcement actions if it finds that there have been violations of the various communications laws and commission rules that are designed to protect consumers; addressing issues related to public safety, homeland security, emergency management, and preparedness; educating and informing consumers about communications goods and services; and reviewing mergers of companies holding FCC-issued licenses.

FCC Relies on Information Technology to Support Its Operations

FCC relies extensively on computerized systems to support its mission-related operations, and on information security controls to protect the commission’s data. FCC’s Information Technology Center, within the Office of the Managing Director, uses IT to perform the commission’s business operations. Through its computer network and systems, the commission collects and maintains nonpublic information, including proprietary information of businesses regulated by the commission, as well as information available to the public through rulemaking proceedings.

FCC Has Defined Organizational Roles and Responsibilities for Information Security

FCC’s Chairman, chief information officer (CIO), and chief information security officer (CISO) each have specific responsibilities for information security. Specifically, the FCC Chairman has responsibility for, among other things: (1) providing information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the commission’s information systems and information; (2) ensuring that senior officials provide security for the information and systems that support the operations and assets under their control; and (3) delegating to the CIO the authority to ensure compliance with the information security requirements imposed on the commission. In addition, the CIO is responsible for establishing and enforcing policies and procedures for protecting information resources. Toward this end, the CIO has designated and assigned responsibilities to the CISO for managing the cybersecurity program. The CISO, among other things, is responsible for providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of the commission.
Federal Law and Guidance Establish Security Requirements to Protect Federal Information and Systems

The Federal Information Security Modernization Act of 2014 (FISMA) provides a comprehensive framework for information security controls over information resources that support federal operations and assets. The law also requires each agency to develop, document, and implement an agency-wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency. Such a program should include assessing risks; developing and implementing policies and procedures to cost-effectively reduce risks; developing and implementing plans for providing adequate information security for networks, facilities, and systems; and providing security awareness and specialized training. Further, the program should include testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; developing and implementing procedures for detecting, reporting, and responding to security incidents; and ensuring continuity of operations.

FISMA requires agencies to comply with the federal information processing standards (FIPS) publications issued by NIST, and Office of Management and Budget (OMB) Circular A-130 requires agencies to comply with the information security guidelines prescribed in NIST special publications. Consequently, NIST FIPS publications and special publications contain many of the cybersecurity-related requirements for federal agencies. For example, NIST FIPS Publication 199 requires agencies to categorize their information and information systems according to the potential harm and impact to agency assets, operations, or individuals should the confidentiality, integrity, or availability of its information and information systems be compromised through unauthorized access, use, disclosure, disruption, modification, or destruction. In addition, NIST FIPS Publication 200 requires agencies to meet minimum security requirements by selecting the appropriate security controls, as described in NIST Special Publication 800-53. This special publication provides a catalog of 18 security control areas for federal information systems and a process for selecting controls to protect organizational operations and assets. The publication provides baseline security controls for low-, moderate-, and high-impact systems, and agencies have the ability to tailor or supplement their security requirements and policies based on agency mission, business requirements, and operating environment.

Further, in May 2017, the President issued an executive order requiring agencies to immediately begin using NIST’s cybersecurity framework for managing their cybersecurity risks. The framework, which provides guidance for cybersecurity activities, is based on five core security functions:

Identify: Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities.

Protect: Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.

Detect: Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.

Respond: Develop and implement the appropriate activities to take action regarding a detected cybersecurity incident.
Recover: Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident.

According to NIST, these five functions occur concurrently and continuously, and provide a strategic view of the life cycle of an organization’s management of cybersecurity risk. Within the five functions are 23 categories and 108 subcategories that include information security program-related controls and technical controls for achieving the intent of each function. Appendix II provides a description of the framework categories and subcategories of controls.

FCC Experienced a Service Disruption in May 2017

On May 7 and 8, 2017, FCC experienced a dramatic surge in the number of comments sent to the commission through its ECFS during a public comment period. This surge led to a disruption of services, which prevented the system from being able to accept additional comments for a period of time. The FCC Office of Inspector General determined that the system service disruption was likely due to a combination of the sudden increase in traffic from commenters all trying to access the system’s website over a short period of time and system design deficiencies that negatively impacted the capacity and performance of the system to collect and process the increase in traffic. Figure 1 presents a timeline of the May 2017 ECFS service disruption and subsequent related events. Additional details on the timeline are provided in appendix III.

FCC Increased ECFS’s Capacity and Performance to Reduce Risk of Future Service Disruptions

In response to the ECFS service disruption that occurred on May 7 and 8, 2017, FCC Information Technology Center officials took four key actions to reduce the risk of future service disruptions to the system.

1. Conducted Internal Assessments

In response to the service disruption, in early May 2017, the FCC CIO initially stated that the cause was a cyberattack on the ECFS. However, upon further assessment, FCC Information Technology Center officials later determined that the disruption was caused by a surge in comment traffic to the system and existing system performance and capacity deficiencies. In response to multiple congressional inquiries, in late July 2017, FCC Information Technology Center officials assessed the extent to which malicious intent was involved in causing the disruption based on whether: (a) internet protocol (IP) addresses from foreign sources were present on the commission’s network at the time of the May 2017 event; (b) comment submissions were denied (i.e., dropped) from the commission’s network; (c) observable botnet traffic was present; and (d) duplicate comment submissions were accepted into ECFS. The assessment concluded that the commission did not have sufficient information and tools to determine whether there was any malicious intent.

2. Deployed Additional Virtual Hardware

Following the disruption, in early May 2017, FCC deployed additional virtual hardware to address system performance issues and support system stabilization efforts of ECFS during the period in which service was disrupted. In early July 2017, the commission installed security sensors and forwarding agents on the ECFS virtual servers. These devices are intended to provide additional layers of security capability for the system. In mid-July 2017, FCC automated the process for deploying virtual hardware resources to support system availability subsequent to the May 2017 service disruption.
3. Optimized and Acquired System Software

From late May 2017 to early June 2017, FCC acquired a diagnostic tool to measure system performance. According to the commission, this tool is used to determine the maximum amount of simultaneous user capacity within ECFS during periods of high web traffic. In early June 2017, the commission optimized the search functionality within the ECFS database to reduce the system response time. In mid-June 2017, FCC removed redundant internal processes for ECFS web requests to increase the responsiveness of the system. During late July 2017, the commission acquired a security information and event management tool to collect and analyze security-related events that may indicate a cybersecurity incident. In late August 2017, FCC established rate control limits within ECFS to safeguard against potential distributed denial-of-service attacks aiming to flood one target with network traffic (a sketch of this type of control follows this section).

4. Updated Incident Response Policy and Procedures

In January 2018 and March 2018, during its annual policy review, FCC Information Technology Center officials updated the commission’s incident response and reporting policy and procedures to incorporate lessons learned from the May 2017 ECFS service disruption and clarify their processes. For example, FCC Information Technology Center officials revised the commission’s incident response procedures to document internal escalation time frames for notifying management of potential security incidents and reporting the incidents to the United States Computer Emergency Readiness Team within 1 hour of identification of an incident.

Figure 2 shows a chronological sequence of the hardware and software improvements that FCC officials implemented after the May 2017 event. FCC provided evidence indicating that its actions to add hardware and software resources increased ECFS’s capacity and performance, and that the system was stable from June 2017 through December 2017. For example, FCC acquired a performance diagnostic tool in late May 2017, which was designed to determine the maximum number of potential simultaneous public users within ECFS during periods of high web traffic. Using the diagnostic tool, FCC Information Technology Center officials determined in June 2017 that the system became unstable when the number of simultaneous simulated public users reached 500. However, by December 2017, the system had demonstrated that it could handle over 3,000 simultaneous public users without a service disruption.

FCC data showed that the increased capacity and improved performance of the ECFS prevented further service disruptions during periods of sharp spikes in the volume of comments received. For example, on May 8, 2017, service was disrupted on the system when it received a peak of about 249,000 comments in 1 day, whereas on July 12, 2017, the system accepted and processed at least 1.4 million comments in 1 day without a reported service disruption. Similar spikes in traffic volumes that occurred through December 2017 also did not result in service disruptions. Figure 3 shows the daily comment submissions to ECFS from May 2017 through December 2017 and demonstrates FCC’s ability to accept a higher volume of comments without a service disruption.
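The report states that FCC established rate control limits in ECFS but does not describe how they were implemented. The sketch below is a minimal, hypothetical illustration of one common approach to such limits, a per-client token bucket; the rate and burst parameters and the client address are invented for the example, not FCC’s actual configuration.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal per-client token-bucket rate limiter (illustrative only).

    Each client may make up to `burst` requests at once; tokens refill at
    `rate` per second. The parameters are hypothetical, not ECFS's limits.
    """

    def __init__(self, rate: float = 5.0, burst: int = 10):
        self.rate = rate    # tokens added per second
        self.burst = burst  # maximum bucket size
        self._tokens = defaultdict(lambda: float(burst))
        self._last = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        """Return True if the client's request should be accepted."""
        now = time.monotonic()
        elapsed = now - self._last[client_id]
        self._last[client_id] = now
        # Refill tokens for the elapsed time, capped at the burst size.
        self._tokens[client_id] = min(
            self.burst, self._tokens[client_id] + elapsed * self.rate)
        if self._tokens[client_id] >= 1.0:
            self._tokens[client_id] -= 1.0
            return True
        return False  # over the limit; reject or queue the request

limiter = TokenBucket(rate=5.0, burst=10)
accepted = sum(limiter.allow("198.51.100.7") for _ in range(25))
print(f"accepted {accepted} of 25 back-to-back requests")
```

A limiter of this general shape lets legitimate spikes through up to the burst size while throttling a single source that floods the endpoint, which is the behavior the commission was guarding against.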
FCC Did Not Consistently Implement Security Controls, Which Placed Selected Systems at Risk

We reported in September 2019 that FCC had implemented numerous security controls for the three systems we reviewed, but it had not consistently implemented the NIST cybersecurity framework’s five core security functions to effectively protect the confidentiality, integrity, and availability of these systems and the information maintained on them. Deficiencies existed in the FCC information security program and technical controls for the five core functions that were intended to (1) identify risk, (2) protect systems from threats and vulnerabilities, (3) detect cybersecurity events, (4) respond to these events, and (5) recover system operations when disruptions occur. These deficiencies increased the risk that sensitive information could be disclosed or modified without authorization or be unavailable when needed. As shown in table 1, deficiencies existed in all five core security functions for the FCC systems we reviewed. Also shown are the numbers of recommendations we made to FCC to rectify the deficiencies.

FCC Generally Identified Risks and Developed Security Plans for Selected Systems, but Shortcomings Remained

Activities associated with the identify core security function are intended to help an agency develop an understanding of its resources and related cybersecurity risks to its organizational operations, systems, and data. Essential elements of a FISMA-mandated information security program include assessing risks, developing system security plans, and authorizing information systems to operate. NIST guidance states that agencies should assess risks and authorize systems on an ongoing basis. Additionally, FCC requires that security plans, risk assessments, and system authorizations be reviewed annually or whenever significant changes occur to the information system, computing environment, or business operations.

Consistent with its guidance, FCC had developed system security plans for each of the three systems we reviewed and had updated the risk assessments for two of the systems in 2017 and 2018, respectively. However, as of March 2019, the commission had not reviewed or updated the risk assessment for the third system reviewed since May 2017—a lag of about 22 months. Commission officials stated that they had not reviewed or updated the system’s risk assessment because the commission had implemented a new risk assessment process and officials had not yet had time to review and update documentation for this system.

In addition, FCC continued to operate two of the three selected systems on expired authorizations to operate. Although FCC granted a full authorization to operate to one system in May 2018, the commission allowed the authorizations for the other two systems we reviewed to expire. Both of these systems had received a conditional authorization to operate so that the systems could continue to operate while the commission mitigated known system vulnerabilities. However, in December 2018, the conditional authorizations for both systems expired because, according to FCC officials, the commission had not mitigated the vulnerabilities. Nevertheless, FCC continued to operate the systems. By not regularly updating the risk assessment of one system and continuing to operate two systems without current authorizations to operate, FCC unnecessarily exposed the information on these systems to increased risks of unauthorized changes and access to information.
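FCC policy, as described above, requires risk assessments to be reviewed at least annually. As a rough illustration of the staleness arithmetic behind the roughly 22-month lag GAO cites, the sketch below flags reviews that have run past an annual threshold; the system names and dates are invented, with "system C" mirroring a May 2017 assessment still in use as of March 2019.

```python
from datetime import date

REVIEW_INTERVAL_DAYS = 365  # policy: review at least annually

def months_overdue(last_review: date, as_of: date) -> int:
    """Whole months elapsed beyond the annual review deadline (0 if current)."""
    overdue_days = (as_of - last_review).days - REVIEW_INTERVAL_DAYS
    return max(0, overdue_days // 30)

# Hypothetical review dates, for illustration only.
last_reviews = {"system A": date(2018, 6, 1),
                "system B": date(2017, 11, 15),
                "system C": date(2017, 5, 1)}
as_of = date(2019, 3, 1)

for name, reviewed in sorted(last_reviews.items()):
    lag = months_overdue(reviewed, as_of)
    status = f"overdue by ~{lag} months" if lag else "current"
    print(f"{name}: last reviewed {reviewed}, {status}")
```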
Subsequent to our September 2019 report, FCC reviewed and updated the system’s risk assessment in accordance with its new risk assessment process. In addition, FCC granted a full authorization to operate to one of the systems in October 2019, but does not expect to grant a full authorization to operate for the other system until later in 2020.

FCC’s Contract Provisions with Its Cloud Service Provider Did Not Reflect All Applicable Security Requirements

NIST SP 800-144, Guidelines on Security and Privacy in Public Cloud Computing, states that a service-level agreement should define the terms and conditions for access and use of the services offered by the cloud service provider. In addition, FedRAMP Control Specific Contract Clauses provides security control specifications that may need to be included in the task order for the service and specified in the service-level agreement. These contract clauses include specifications related to data jurisdiction, audit records storage, time frames for reporting security incidents, and system boundary protection.

FCC’s task order and service-level agreement with its cloud service provider specified activities the provider was to perform, such as providing access and support for products and services, and completing performance deliverables to ensure service availability. However, FCC had not documented specific contract clauses associated with implementing security control requirements related to retaining audit records, meeting incident reporting time frames, and protecting system boundaries in accordance with FedRAMP. According to FCC’s associate chief information officer, the commission relied on FedRAMP’s oversight to ensure that its cloud provider implemented security controls that comply with federal data requirements. However, FedRAMP assesses and monitors only the security controls that the program and cloud service provider agree that the provider will implement. These agreed-upon controls may not include an agency’s specific security requirements. Thus, responsibility falls on FCC to ensure that its information security requirements are being implemented in cloud computing environments. By not specifying its control requirements when procuring services from its cloud provider, FCC increased the risk that its data and sensitive regulatory information will not be adequately protected in the event that its cloud service provider experiences a security breach. Subsequent to our September 2019 report, FCC developed a plan of action and milestones (POA&M) for this deficiency and stated that it plans to rectify the deficiency by May 2020.

FCC Did Not Consistently Implement Appropriate Safeguards to Protect Information on Selected Systems

Activities associated with the protect core security function are intended to help agencies develop and implement appropriate system safeguards. These activities include limiting access to computing resources to authorized users, processes, and devices; encrypting data to protect its confidentiality and integrity; configuring devices securely; and updating software to protect systems from known vulnerabilities. FCC implemented activities that established multiple layers of technical controls, including access controls and firewalls, encryption of sensitive data, and system configuration management. However, we reported in September 2019 that implementation of these technical controls was not consistent.
For example, 37 technical control deficiencies and an information security program-related deficiency diminished the effectiveness of the controls protecting the systems we reviewed. A brief summary of the results of our tests of FCC’s controls for protecting the three systems we reviewed follows.

FCC Did Not Consistently Implement Effective Access Controls

FCC policy states that, in accordance with NIST SP 800-53 guidelines, users should not share the same identifier and the commission should configure its information systems to require users to create complex passwords. FCC’s policy also stipulates that the commission employ the principle of “least privilege” and enforce approved authorizations for controlling the flow of information within the system and between interconnected systems. However, FCC did not consistently implement technical controls to effectively limit access to the systems we reviewed, as the following examples illustrate.

- Although FCC policy states that individual user accounts are not to be shared, the commission allowed multiple users to share the credentials of several privileged accounts.
- While FCC policy established minimum requirements for password complexity and account lock-out provisions, the commission did not routinely enforce these requirements.
- While FCC policy requires limiting access rights for users to only those they need to perform their work, the commission inappropriately granted excessive permissions to users to access server configuration files.
- Although FCC established a policy for monitoring and controlling access between systems, it did not securely configure network devices to effectively control access and communications between systems.

Access control deficiencies existed primarily because FCC network administrators did not adequately monitor configuration settings and did not implement sufficient controls to enforce consistent authentication and authorization across all of the commission’s systems that we reviewed. Until FCC remediates the related technical deficiencies, the commission remains at increased risk that unauthorized individuals or attackers could obtain inappropriate access to its network devices, firewalls, and servers, and compromise its network. As of November 2019, FCC had acted to address several technical control deficiencies related to access control.

FCC Did Not Consistently Encrypt Sensitive Data

NIST SP 800-53 recommends that organizations employ cryptographic mechanisms to prevent the unauthorized disclosure of information during transmission and establish a trusted communications path between users and security functions of information systems. NIST also requires that, when agencies use encryption, they use an encryption algorithm that complies with FIPS Publication 140-2. In addition, FCC’s System and Communication Protection Policy states that confidentiality-sensitive data must be encrypted before being transmitted using any nonprotected communication method and that all passwords must be encrypted. However, in seven instances, the commission did not consistently deploy strong encryption capabilities to protect sensitive data or establish a secure communications path between users and information systems. For example, FCC sometimes sent data in clear text over the network and did not enable FIPS 140-2 compliant encryption algorithms on certain devices. These deficiencies existed primarily because commission personnel did not adequately monitor configuration settings.
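The deficiencies above concern data sent in clear text and non-FIPS-approved algorithms. As a hedged illustration of the control itself, not of FCC’s configuration, the sketch below shows how a Python client can refuse plaintext and legacy protocols by requiring TLS 1.2 or later with certificate verification; the host name is a placeholder.

```python
import socket
import ssl

def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection that refuses plaintext and legacy protocols.

    Illustrative only: this enforces TLS 1.2+ and certificate validation,
    the general posture behind "encrypt data in transit" requirements.
    FIPS 140-2 compliance additionally depends on linking against a
    validated cryptographic module, a platform property not shown here.
    """
    context = ssl.create_default_context()            # verifies certificates
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3/TLS 1.0/1.1
    raw = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(raw, server_hostname=host)

# Placeholder host for illustration; any TLS-enabled server would do.
with open_secure_channel("www.example.com") as tls:
    print("negotiated:", tls.version(), tls.cipher()[0])
```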
By not consistently deploying strong encryption capabilities, FCC limits its ability to protect the confidentiality and integrity of its sensitive information. According to Information Technology Center officials, as of November 2019, the commission was still working toward full compliance with federal encryption standards.

FCC Did Not Consistently Configure Servers Securely or Update Software in a Timely Manner

NIST SP 800-53 states that agencies should configure security settings to the most restrictive mode consistent with operational requirements and disable services within the information system deemed to be unnecessary or non-secure. FCC policy on risk assessment states that systems and devices should be scanned periodically and software patches should be applied for all known critical security vulnerabilities. In addition, OMB Circular A-130 states that agencies are to implement current updates and patches for all software components of information systems, and prohibit the use of unsupported systems and system components.

Although FCC established policies for applying software patches on a prescribed basis, it did not update software in a consistent or timely manner to effectively protect the three systems we reviewed. For example, FCC did not apply software patches in a timely manner to resolve known security vulnerabilities, and used unsupported or out-of-date system software on multiple network devices, firewalls, and servers. Patching control deficiencies existed because FCC did not adequately monitor configuration settings of devices on its network. According to Information Technology Center officials, as of February 2019, the commission was in the process of (1) migrating and modernizing its systems’ portfolio and (2) implementing an application monitoring and testing tool to reduce patching times. However, until FCC applies software patches in a timely manner and replaces unsupported software and devices, it will remain at increased risk that individuals could exploit known vulnerabilities to gain unauthorized access to its computing resources. As of November 2019, FCC had taken corrective actions to address certain technical control deficiencies related to configuring servers securely and updating software in a timely manner.

Although FCC Had Documented Security Policies, It Had Not Documented Operational Procedures

Developing, documenting, and implementing information security policies and procedures are essential elements of an agency’s FISMA-mandated information security program. FCC’s Policy for Information Security and Privacy states that FCC shall implement procedures and controls at all levels to protect the confidentiality and integrity of information stored and processed on the commission’s systems, and to ensure that the systems and information are available to authorized persons when required. Although FCC developed and documented commission-wide policies addressing the 18 control areas identified in NIST SP 800-53—such as access control, configuration management, security awareness training, and contingency planning—the commission had not fully developed or documented the detailed operating procedures that are needed to effectively implement its security policies.
For example, FCC had not documented detailed procedures for implementing the following NIST-specified control areas: (1) access control, (2) configuration management, (3) identification and authentication, (4) system maintenance, (5) media protection, (6) physical and environmental protection, (7) information security program management, (8) risk assessment, (9) system and services acquisition, (10) system and communication protection, and (11) system and information integrity. The lack of detailed operating procedures likely was an underlying cause for many of the technical control deficiencies we identified. According to the FCC CISO, as of February 2019, the commission was in the process of reviewing and revising its information security policies and had issued POA&Ms to develop and document the missing procedures. Nevertheless, until FCC fully develops and documents detailed operating procedures for implementing its security policies, the commission faces increased risks that it will not effectively protect its information systems and information from cyber threats.

FCC Had Not Effectively Implemented Controls Intended to Detect Cybersecurity Events or Deficiencies

The detect core security function is intended to allow for the timely discovery of cybersecurity events and deficiencies. Controls associated with this function include logging and monitoring system activities, and assessing the security controls in place. NIST SP 800-53 states that agencies should enable system logging features and retain sufficient audit logs to support the investigation of security incidents and the monitoring of select activities for significant security-related events. Additionally, NIST SP 800-53 and industry leading practices state that organizations should increase their situational awareness through enhanced monitoring capabilities to analyze network traffic data over an extended period of time, at external boundaries and inside their internal networks, to identify anomalous, inappropriate, or malicious activities. Lastly, FISMA requires each agency to periodically test and evaluate the effectiveness of its information security policies, procedures, and practices.

In September 2019, we reported that FCC had implemented security monitoring controls, such as performing regular vulnerability scanning and deploying a security information and event management tool, to detect the presence of potential malicious threats. However, six technical control deficiencies in these capabilities diminished the effectiveness of the controls to detect cybersecurity events in the systems we reviewed. For example, FCC did not fully capture system log data on certain devices and had limited network monitoring visibility into portions of its data center environment. According to Information Technology Center officials, FCC had deficiencies in logging, retention, and monitoring because the commission had not fully configured its security information and event management tool to capture and monitor sufficient system log and network traffic data to adequately detect cybersecurity events. As a result, FCC may not be able to detect or investigate anomalous activities inside its network. In addition, although the commission established a process for assessing the effectiveness of the security controls for its systems, its control tests and evaluations were not sufficiently robust. For example, the commission’s evaluations did not identify many of the security control deficiencies we identified.
Consequently, FCC had limited assurance that the security controls were in place and operating as intended. As of November 2019, FCC had acted to address several technical control deficiencies, and associated recommendations, such as capturing network traffic data and providing for real-time network monitoring; however, other technical control deficiencies remain.

FCC Did Not Fully Implement Its Incident Response Controls and Remediate Deficiencies in a Timely Manner

The respond core security function is intended to support the ability to contain the impact of a potential cybersecurity event. Controls associated with this function include implementing an incident response capability and remediating newly identified deficiencies. We reported in September 2019 that, as part of its information security program, FCC had implemented controls for incident response by developing, documenting, and annually updating its incident handling policy and procedures, along with its guidelines for remediating deficiencies. However, two information security program-related deficiencies and a technical control deficiency diminished the effectiveness of the controls to respond to cybersecurity events for the systems we reviewed. For example, the commission did not adequately address security incidents and mitigate known deficiencies in a timely manner.

FCC Had Developed and Documented an Incident Response Capability, but Did Not Report Several Incidents in a Timely Manner

NIST SP 800-53 and SP 800-61 state that agencies should develop, document, and implement incident response policy and procedures, and keep them updated according to agency requirements. FCC incident response policy also states that all employees are required to report suspected security incidents to the FCC Network Security Operations Center (NSOC) group within 1 hour of discovery or detection, and all other incidents within 24 hours of discovery. Further, FCC’s incident response procedures require internal escalation and external notification to the United States Computer Emergency Readiness Team (US-CERT) within 1 hour.

FCC had developed, documented, and updated its incident response policy and procedures on an annual basis to address security incidents. The commission also established the NSOC group as the single point of contact for potential security incidents. However, FCC did not report internally to the NSOC group or externally to US-CERT in a timely manner for three of the 10 security incidents we reviewed. Specifically:

- An FCC employee took 2 days to report the existence of an information spillage incident to the NSOC, instead of the required 1-hour reporting time frame.
- The NSOC group took approximately 4 hours to report a December 2017 distributed denial-of-service attack incident and a February 2018 malicious attack incident to US-CERT, instead of the 1 hour required for each.

According to the FCC CISO, the commission plans to review its incident response policy and procedures, as well as re-train its staff, to ensure that staff consistently follow the commission’s policy and US-CERT incident notification guidelines. Subsequent to the issuance of our September 2019 report, FCC indicated that it plans to address these matters by October 2020. Until it does so, the commission may impede its ability to receive timely assistance from appropriate federal agencies and mitigate any harm.
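The reporting windows above (1 hour for suspected security incidents and NSOC escalation to US-CERT, 24 hours for other incidents) reduce to simple timestamp arithmetic. The sketch below is a hypothetical illustration of checking report times against those windows; the incident records are invented, loosely echoing the 2-day and roughly 4-hour delays GAO found.

```python
from datetime import datetime, timedelta

# Reporting windows as described in FCC policy per the report.
WINDOWS = {"security": timedelta(hours=1),   # suspected security incidents
           "other": timedelta(hours=24)}     # all other incidents

def check_report(kind: str, detected: datetime, reported: datetime) -> str:
    """Return a compliance verdict for one incident report (illustrative)."""
    delay = reported - detected
    limit = WINDOWS[kind]
    if delay <= limit:
        return f"on time (took {delay}, limit {limit})"
    return f"LATE by {delay - limit} (took {delay}, limit {limit})"

# Invented example records, not FCC's actual incident data.
incidents = [
    ("security", datetime(2018, 2, 1, 9, 0), datetime(2018, 2, 3, 9, 0)),
    ("security", datetime(2017, 12, 5, 14, 0), datetime(2017, 12, 5, 18, 5)),
    ("other", datetime(2018, 3, 10, 8, 0), datetime(2018, 3, 10, 20, 0)),
]
for kind, detected, reported in incidents:
    print(kind, "->", check_report(kind, detected, reported))
```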
FCC Had Action Plans to Remedy Identified Deficiencies for Selected Systems, but Did Not Implement Them in a Timely Manner

NIST SP 800-53 states that agencies are to develop a POA&M for an information system to document the agencies’ planned remedial actions to correct identified deficiencies. FCC’s Plan of Action and Milestone Guide also states that the maximum completion time frames for implementing POA&M items related to critical and high severity level deficiencies are 30 and 60 days, respectively.

Although FCC developed a remedial action process and maintained a management system to document and track the status of POA&M items, it did not complete remedial actions in a timely manner for the three systems we reviewed. Specifically, FCC did not remedy critical and high severity level deficiencies within the required time frames as stated in its policy. For example:

- FCC took an average of approximately 3 months to implement four critical severity level POA&M items for one system.
- FCC took an average of more than 1 year to remediate three critical and nine high severity level POA&M items for another system. Additionally, as of October 2018, this system had seven open critical and four open high severity level POA&M items that exceeded the remediation thresholds on average by 1 year and 4 months, and by 5 months, respectively.
- FCC took an average of more than 3 years to implement two critical and seven high severity level POA&M items for the third system.

FCC officials attributed these delays to operational priorities and resource constraints, such as financial, personnel, and technological factors. However, such longstanding delays in remediating weaknesses pose a significant threat to the overall security posture of the commission, since the delays could allow intruders to exploit critical and high severity level deficiencies to gain access to FCC’s information resources. As of November 2019, FCC stated that it planned to address security program deficiencies related to remediating weaknesses in a timely manner by October 2020.

FCC Developed Contingency Plans, but Had Not Developed Restoration Procedures or Conducted Annual Disaster Recovery Testing

The recover core security function is intended to support timely recovery of system operations to reduce the impact from a cybersecurity event. Controls associated with this function include developing and testing contingency plans to ensure that, when unexpected events occur, critical operations can continue without interruption or can be promptly resumed, and that information resources are protected. In September 2019, we reported that, as part of its information security program, FCC had developed contingency plans for selected systems and established priorities for application disaster recovery. However, two information security program-related deficiencies diminished the effectiveness of the controls to recover the systems we reviewed. Specifically, the commission did not (1) document detailed procedures for restoring two of the three systems or (2) conduct an annual test of its disaster recovery plan for the three selected systems in fiscal year 2018.

FCC Established Contingency Plan Restoration Procedures for One System, but Had Not Fully Documented Restoration Procedures for Two Other Systems Reviewed

NIST SP 800-34, Contingency Planning Guide for Federal Information Systems, states that an information system contingency plan should provide detailed procedures to restore the information system or components to a known state.
In addition, FCC’s Policy for Contingency Planning states that system contingency plans should reflect the restoration activities required for information systems to recover after an incident. FCC developed and documented a contingency plan for one system that specified detailed procedures for restoring system operations, data, and supporting applications. However, FCC did not include detailed procedures for restoring the other two systems we reviewed in their respective contingency plans—both of which are major application systems. For example, the contingency plans for these two systems did not specify procedures for restoration activities such as restoring critical operating system, application software, and system data to a known state. According to Information Technology Center officials, they did not consider the two systems to support mission essential functions, which would have necessitated including the applications in detailed restoration procedures. However, both of the systems are major application systems and support mission essential functions at FCC. Subsequent to our September 2019 report, FCC documented detailed restoration procedures in the two other systems’ contingency plans that included activities associated with restoring critical operating system, application software, and system data to a known state. By doing so, FCC increased the likelihood that it will be able to restore operations to its mission essential functions in the event of a disaster.

FCC Had Not Tested Disaster Recovery Capabilities on an Annual Basis

NIST SP 800-84 states that a disaster recovery test should assess the ability of an agency to restore IT processing capabilities in the event of a disruption. Moreover, FCC’s policy for contingency planning states that all information system and facility disaster recovery plans should be tested annually to determine the effectiveness of the plan and the organizational readiness to execute the plan. In September 2019, we reported that FCC did not conduct test exercises of the disaster recovery plans for the three systems we reviewed during fiscal year 2018, nor did it test system backup, recovery, restoration, and reconstitution procedures for these systems. According to FCC officials, the test exercise did not take place in fiscal year 2018 because other business operation activities took precedence over the exercise, which requires taking all mission essential function applications offline. As a result, FCC had limited assurance that it would be able to recover from unexpected disruptions in a timely and efficient manner. While it did not complete the exercise in fiscal year 2018, FCC did subsequently conduct a disaster recovery exercise at the beginning of fiscal year 2019. By doing so, FCC increased its assurance that it would be able to recover use of its systems from unexpected disruptions in a timely and efficient manner.

FCC Has Implemented Most Recommendations in Our September 2019 Report and Plans to Implement the Remainder

In our September 2019 report, we made 136 recommendations to FCC to bolster its agency-wide information security program and strengthen its technical security controls. Specifically, we recommended that FCC take nine actions to improve its information security program by, among other things, authorizing systems to operate, documenting operating procedures, resolving known vulnerabilities and reporting security incidents in a timely manner, and testing disaster recovery plans.
We also recommended that FCC take 127 actions to address technical control deficiencies by, among other actions, implementing stronger access controls, encrypting sensitive data, configuring network devices securely, strengthening firewall rules, and implementing audit and monitoring controls more effectively.

Since the issuance of our September 2019 report, FCC has made significant progress in implementing the recommendations we made to improve its information security program and resolve the technical control deficiencies in the information systems we reviewed. Specifically, as of November 2019, FCC had implemented 85 (63 percent) of the 136 recommendations we made in the September 2019 report and had effectively resolved the underlying deficiencies associated with the recommendations. The commission also had partially, but not fully, implemented 10 recommendations. In these instances, FCC provided evidence that it had resolved a portion of the underlying control deficiency, but had not completed all of the actions necessary to resolve it fully. FCC did not provide any evidence that it had begun implementing the remaining 41 (30 percent) recommendations. The status of our recommendations to FCC is illustrated in figure 4. Table 2 provides additional details on the status of FCC’s actions to implement our recommendations to improve its information security program and the technical controls for the systems we reviewed.

By implementing 85 recommendations, FCC (as of November 2019) had reduced risks associated with certain key activities. Specifically, FCC’s actions to implement four information security program-related recommendations included conducting a disaster recovery test exercise, documenting detailed system restoration procedures, and updating risk assessments to reflect the commission’s current computing environment. Regarding the technical controls, the commission had implemented 81 of our recommendations to rectify technical control-related deficiencies. For example, FCC strengthened firewall rules and access controls on its information system servers and internal networks—areas that we highlighted in our September 2019 report as being particularly vulnerable and requiring the commission to take immediate corrective actions.

FCC also had developed a POA&M for each of the identified information security program-related and technical control deficiencies that remained open as of November 2019. The POA&M items contained required elements, such as severity levels (i.e., high, medium, and low) for identified weaknesses; identified estimated costs; designated points of contact; and established time frames for resolving those weaknesses and fully implementing the related recommendations. The commission’s plans called for it to implement the majority of the remaining information security program and technical control-related recommendations by May 1, 2020, and all recommendations by April 30, 2021, as shown in figure 5. Fully implementing the remaining recommendations is essential to ensuring that the commission’s systems and sensitive information are adequately protected from cyber threats. Key actions that remain include documenting operational procedures, applying security patches and software updates, and enhancing network monitoring capabilities.
Until FCC fully implements all of our recommendations and resolves the associated deficiencies, its information systems and information will remain at increased risk of misuse, improper disclosure or modification, and loss.

Agency Comments

We received written comments on a draft of this report from FCC. In its comments, which are reprinted in appendix IV, the commission expressed its commitment to protecting the confidentiality, integrity, and availability of its information systems. FCC noted our evaluation of its efforts to implement 85 of the 136 recommendations made in our September 2019 report and stated that it had also addressed nine additional recommendations. The commission further stated that it plans to address the remaining recommendations over the next 14 months, with full mitigation anticipated by April 2021.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Federal Communications Commission, the commission’s Office of the Inspector General, and other interested congressional parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, our primary point of contact is Vijay A. D’Souza at (202) 512-6240 or dsouzav@gao.gov. You may also contact Seto J. Bagdoyan at (202) 512-4749 or bagdoyans@gao.gov. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to determine (1) the actions FCC took to respond to the May 2017 event that affected the Electronic Comment Filing System (ECFS), and (2) the extent to which FCC implemented security controls to effectively protect the confidentiality, integrity, and availability of selected systems. In September 2019, we issued a report that detailed the findings from our work in response to these two objectives. In the report, we made 127 recommendations to FCC to resolve the technical security control deficiencies in the information systems we reviewed and nine additional recommendations to improve its information security program. We designated that report as “limited official use only” (LOUO) and did not release it to the general public because of the sensitive information it contained.

This report publishes the findings discussed in our September 2019 report, but we have removed all references to the sensitive information. Specifically, we deleted the names of the information system software, network devices, and resource tools that we examined; disassociated identified control deficiencies from named systems; deleted certain details about information security controls and control deficiencies; and omitted an appendix that was contained in the LOUO report. The appendix contained sensitive details about the technical security control deficiencies in FCC’s information systems and computer networks that we reviewed, and the 127 recommendations we made to mitigate those deficiencies. We also provided a draft of this report to FCC officials to review and comment on the sensitivity of the information contained herein and to affirm that the report can be made available to the public without jeopardizing the security of the commission’s information systems and networks.
In addition, this report addresses a third objective that was not included in the September 2019 report. Specifically, this objective was to determine the extent to which FCC had taken corrective actions to address the previously identified security program and technical control deficiencies and related recommendations for improvement that we identified in the earlier report.

To address the first objective, we reviewed FCC’s security and incident response policies and procedures; examined related reports prepared by the commission and its Office of Inspector General; reviewed an internal assessment of the May 2017 event that was performed by the FCC Information Technology Center; and reviewed artifacts associated with system enhancement and performance, such as change requests and email. We also extracted comment submission data from the data.gov application programming interface for May 1, 2017, through December 31, 2017, to identify the peak periods of increased comment submissions during and after the May 2017 event. In addition, we examined the aforementioned documents to assess whether the updated incident response policy and procedures, along with system enhancement and performance artifacts, were directly related to changes made subsequent to the May 2017 event. Lastly, we interviewed FCC Information Technology Center officials, including system and security staff, and Office of Inspector General officials to identify FCC’s actions to respond to the May 2017 event.

To address the second objective, we reviewed FCC’s overall network environment, identified interconnectivity and control points, and examined controls for the commission’s networks and facilities. We performed this work at FCC facilities located in West Virginia, Pennsylvania, and Washington, D.C. As noted in our September 2019 report, we determined the extent to which FCC had implemented security controls to effectively protect the confidentiality, integrity, and availability of selected systems. To do so, we selected three of the commission’s information systems for review. We selected these systems because they (1) are essential to FCC’s mission and (2) were assigned a Federal Information Processing Standards Publication 199 rating of moderate or high impact. The results of our review of these systems are not generalizable to the commission’s other systems.

To evaluate FCC’s controls for its information systems, we used GAO’s Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. We based our assessment of controls on requirements of the Federal Information Security Modernization Act of 2014 (FISMA), which establishes key elements for an effective agency-wide information security program; National Institute of Standards and Technology (NIST) guidelines and standards; FCC policies and procedures; and standards and guidelines from relevant security organizations, such as the National Security Agency and the Center for Internet Security. For reporting purposes, we categorized the security controls that we assessed into the five core security functions described in the NIST cybersecurity framework. The five core security functions are:

Identify: Develop the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.

Protect: Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.
Detect: Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.

Respond: Develop and implement the appropriate activities to take action regarding a detected cybersecurity event.

Recover: Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.

These core security functions are described in more detail in appendix II. For each of the five core security functions, we examined selected FCC security controls and related documentation.

For the identify core security function, we examined FCC’s reporting for its hardware and software assets; analyzed risk assessments for the three selected systems to determine whether threats and vulnerabilities were being identified; analyzed FCC policies and procedures to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; and analyzed security plans for the three selected systems to determine if those plans had been documented and updated according to federal guidance.

For the protect core security function, we examined access controls for the three systems. These controls included password complexity settings, to determine if password management was being enforced; administrative users’ system access permissions, to determine whether their authorizations exceeded the access necessary to perform their assigned duties; and firewall configurations, among other things, to determine whether system boundaries had been adequately protected. We also examined configurations for providing secure data transmissions across the network to determine whether sensitive data were being encrypted. In addition, we examined configuration settings for routers, network management servers, switches, and firewalls to determine if settings adhered to configuration standards, and we inspected key servers and network devices to determine if critical patches had been installed and/or were up to date.

For the detect core security function, we analyzed security control assessments, and centralized logging and network traffic monitoring capabilities for key assets connected to the network.

For the respond core security function, we reviewed FCC’s implementation of incident response practices, including an examination of incident tickets for the 10 incidents the commission considered most significant from January 1, 2017, to May 29, 2018; and examined the commission’s process for correcting identified deficiencies for the three selected systems.

For the recover core security function, we examined contingency and disaster recovery plans for the three selected systems to determine whether those plans had been developed and tested.

For the core security functions, as appropriate, we evaluated elements of FCC’s information security program. For example, we analyzed risk assessments, security plans, remedial action plans, and contingency plans for each of the three selected systems. We also evaluated FCC’s security policies and procedures. In assessing FCC’s controls associated with these core functions, we interviewed FCC’s Information Technology Center personnel, chief information officer, chief information security officer, general counsel, inspector general, and Public Safety and Homeland Security Bureau officials, as needed.
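Among the protect-function tests above, GAO examined whether password complexity requirements were enforced. The public report does not state FCC’s exact thresholds, so the sketch below checks a generic complexity baseline (minimum length plus mixed character classes) as a stand-in; the specific rules are assumptions, not FCC policy.

```python
import re

# Generic complexity baseline used as a stand-in; the report does not
# state FCC's actual thresholds.
MIN_LENGTH = 12
REQUIRED_CLASSES = {
    "lowercase": re.compile(r"[a-z]"),
    "uppercase": re.compile(r"[A-Z]"),
    "digit": re.compile(r"\d"),
    "symbol": re.compile(r"[^A-Za-z0-9]"),
}

def complexity_failures(password: str) -> list[str]:
    """Return the list of baseline rules a candidate password violates."""
    failures = []
    if len(password) < MIN_LENGTH:
        failures.append(f"shorter than {MIN_LENGTH} characters")
    failures.extend(name for name, pattern in REQUIRED_CLASSES.items()
                    if not pattern.search(password))
    return failures

for candidate in ["changeme", "Tr0ub4dor&3-horse-staple"]:
    problems = complexity_failures(candidate)
    verdict = "OK" if not problems else "; ".join(problems)
    print(f"{candidate!r}: {verdict}")
```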
To determine the reliability of FCC’s computer-processed data for incident response records, we evaluated the materiality of the data to our audit objective and assessed the data by various means, including reviewing related documents, interviewing knowledgeable FCC officials, and reviewing internal controls. Through a combination of these methods, we concluded that the data were sufficiently reliable for the purposes of our work.

To accomplish our third objective—on FCC’s actions to address the previously identified security program and technical control deficiencies and related recommendations—we requested that the commission provide a status report of its actions to implement each of the recommendations. For each recommendation that FCC indicated it had implemented as of November 2019, we examined supporting documents, observed or tested the associated security control or procedure, and/or interviewed the responsible agency officials to assess the effectiveness of the actions taken to implement the recommendation or otherwise resolve the underlying control deficiency. Based on this assessment and FCC status reports, we defined the status of each recommendation according to three categories:

- fully implemented: FCC had implemented the recommendation (i.e., the commission provided evidence showing that it had effectively resolved the underlying control deficiency);
- partially implemented: FCC had made progress toward, but had not completed, implementing the recommendation (i.e., the commission provided evidence showing that it had effectively resolved a portion of the underlying control deficiency); and
- not started: FCC did not provide evidence that it had acted to implement the recommendation (i.e., the commission provided no evidence showing that it had effectively resolved the underlying control deficiency).

We conducted the performance audit for the first two objectives from February 2018 through September 2019 in accordance with generally accepted government auditing standards. We conducted work supporting the third objective and, where applicable, included updates to our work in the second objective, from October 2019 through March 2020, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings.

Appendix II: National Institute of Standards and Technology’s Cybersecurity Framework

The National Institute of Standards and Technology’s cybersecurity framework consists of five core functions: identify, protect, detect, respond, and recover. Within the five functions are 23 categories and 108 subcategories of security-related controls (see table 3).

Appendix III: Timeline of May 2017 Event Involving the FCC Electronic Comment Filing System

Below is a timeline of the Federal Communications Commission’s (FCC) May 2017 Electronic Comment Filing System (ECFS) event and subsequent related events:

On April 27, 2017, FCC issued the Restoring Internet Freedom Notice of Proposed Rulemaking in the Federal Register. The notice directed interested parties to submit comments via FCC’s ECFS.

On the evening of May 7, 2017, a late-night talk show aired a segment on the Restoring Internet Freedom notice and encouraged viewers to submit comments via ECFS.
On the evening of May 7, 2017, according to a report by the FCC Office of Inspector General (OIG), ECFS experienced a significant increase in the level of comment traffic attempting to access the system, resulting in the disruption of system availability. A contractor providing web performance and cloud security solutions to FCC identified a 3,116 percent increase in traffic to ECFS between May 7 and May 8, 2017.

In the early morning of May 8, 2017, ECFS became unavailable to commenters. FCC's vendor sent automated alerts indicating a spike in network traffic, in addition to preliminary network statistical data, to FCC.

During the mid-morning of May 8, 2017, FCC's Information Technology Center responded to the alerts from the vendor and initiated efforts to stabilize ECFS.

During the afternoon of May 8, 2017, FCC issued a press release in which FCC's chief information officer (CIO) at that time provided a statement about the cause of delays experienced by commenters trying to file comments on ECFS. The CIO's statement said that FCC had been subjected to multiple distributed denial-of-service attacks. He further stated that "these were deliberate attempts by external actors to bombard the FCC's comment system with a high amount of traffic."

During May 9-10, 2017, FCC restored ECFS but still experienced response-time problems relating to system performance.

On May 10, 2017, FCC's Information Technology Center responded to inquiries from the Federal Bureau of Investigation and the FCC OIG via email and phone.

On June 21, 2017, the FCC OIG opened a full investigation into the event because, according to the OIG, of the importance of FCC's cybersecurity posture and the possibility that cybercrimes had been committed that could pose ongoing threats to the integrity of FCC's computer systems.

On January 4, 2018, the FCC OIG referred the investigation to the Justice Department.

On August 7, 2018, the FCC OIG published an investigative report on the ECFS event. According to the OIG report, the multiple distributed denial-of-service attacks alleged by the FCC CIO at that time were not substantiated. The FCC OIG concluded that the spikes in web traffic to ECFS had coincided exactly with the timing of the late-night television show whose host discussed the FCC's Restoring Internet Freedom proceeding and encouraged viewers to visit the commission's website and file comments. The FCC OIG's report also indicated that the commission did not define the event (i.e., any observable occurrence in a network or system) as a cybersecurity incident (i.e., an actual or imminent violation of computer security policies or security practices). Therefore, according to the OIG report, FCC did not take actions to: refer the matter to the United States Computer Emergency Readiness Team (US-CERT) in accordance with federal policy, implement internal incident handling procedures in accordance with its incident handling policy, or conduct a thorough analysis before or after the event to determine whether it was an incident.

On August 16, 2018, the FCC Chairman testified at a Senate Committee on Commerce, Science, and Transportation oversight hearing on the conclusions of the FCC OIG investigative report on the ECFS event.
Appendix IV: Comments from the Federal Communications Commission

Appendix V: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts named above, Gary Austin, David Bruno, Tammi Kalugdan, Duc Ngo, and Christopher Warweg (assistant directors); David Hong (analyst-in-charge); Breanne Cave; Chris Businsky, Jr.; Saar Dagani; Marshall Williams, Jr.; Corey Evans; Andrew Howard; Elizabeth Kowalewski; Priscilla Smith; Henry Sutanto; and April Yeaney made significant contributions to this report.
Why GAO Did This Study

FCC relies extensively on information systems to accomplish its mission of regulating interstate and international communications in the United States. FCC uses one such system, ECFS, to receive public comments about proposed changes in FCC regulations. In May 2017, a surge in comments caused a service disruption of ECFS during a public comment period. GAO was requested to review ECFS and the reported disruption. In September 2019, GAO issued a limited official use only report on the actions FCC took to respond to the May 2017 event, and the extent to which FCC had effectively implemented security controls to protect the confidentiality, integrity, and availability of selected systems. This current report is a public version of the September 2019 report with sensitive information removed. In addition, for this public report, GAO determined the extent to which FCC has taken corrective actions to address the previously identified security program and technical control deficiencies and related recommendations for improvement. In the prior report, GAO compared FCC's policies, procedures, and reports to federal cybersecurity laws and policies. GAO examined logical access controls and security management controls for three systems selected based on their significance to FCC. For this report, GAO examined supporting documents regarding FCC's actions on previously identified recommendations, observed controls in operation, and interviewed personnel at FCC.

What GAO Found

As GAO reported in September 2019, the Federal Communications Commission (FCC) bolstered the capacity and performance of the Electronic Comment Filing System (ECFS) to reduce the risk of future service disruptions. FCC also implemented numerous information security program and technical controls for three systems that were intended to safeguard the confidentiality, integrity, and availability of its information systems and information. However, GAO identified deficiencies in areas such as protecting systems from threats and vulnerabilities, detecting and responding to cybersecurity events, and recovering system operations. GAO made 136 recommendations to address these deficiencies (see table). As of November 2019, FCC had made significant progress in resolving many security deficiencies by fully implementing 85 (about 63 percent) of the 136 recommendations GAO made in September 2019. FCC had also partially implemented 10, but had not started to implement the remaining 41 recommendations (see figure). Additionally, FCC has created remedial action plans to implement the remaining recommendations by April 2021. Until FCC fully implements these recommendations and resolves the associated deficiencies, its information systems and information will remain at increased risk of misuse, improper disclosure or modification, and loss.
Background

DOD acquires new weapon systems for its warfighters through a management process known as the Defense Acquisition System. This system is implemented by two key acquisition policies: DOD Directive 5000.01, which establishes the overarching framework for the Defense Acquisition System, and DOD Instruction 5000.02, which provides detailed procedures for the operation of the Defense Acquisition System and the management of acquisition programs. These policy documents establish the guiding principles for all aspects of the DOD acquisition process. Additionally, each of the military services has its own acquisition policies, which incorporate and enhance the DOD acquisition guidance. Figure 2 depicts DOD's acquisition process beginning with Milestone A in general terms.

Several entities in the Office of the Secretary of Defense (OSD) and the military departments play a role in the oversight of DOD weapon system acquisition programs, including the following:

The Under Secretary of Defense for Research and Engineering is responsible for establishing policies on and supervising all aspects of defense research and engineering, technology development, technology transition, prototyping, experimentation, and developmental testing activities and programs, including the allocation of resources for defense research and engineering. DOD's Reliability and Maintainability Engineering lead reports to this Under Secretary.

The Under Secretary of Defense for Acquisition and Sustainment (USD(A&S)) is responsible for establishing policies on and supervising all matters relating to acquisition (including (1) system design, development, and production; and (2) procurement of goods and services) and sustainment (including logistics, maintenance, and materiel readiness). This organization has certain oversight responsibilities for major defense acquisition programs throughout the acquisition process, such as collecting and distributing performance data. The Under Secretary is the Defense Acquisition Executive and serves as the milestone decision authority for certain major defense acquisition programs, meaning the Under Secretary authorizes these programs to proceed through the DOD acquisition process's major milestones.

At the military department level, the service acquisition executive, also known as the component acquisition executive, is a civilian official within a military department who is responsible for all acquisition functions within the department and can serve as the milestone decision authority. Congress has recently devolved much of the decision-making authority for major defense acquisition programs from OSD to these service acquisition executives. According to a DOD official, the service acquisition executive will normally assign a relevant program manager, who will then assign a chief engineer or lead systems engineer and team members with responsibility for the engineering effort of a program, including the reliability engineering effort. The following officials serve as the service acquisition executive for the military departments: the Assistant Secretary of the Air Force (Acquisition, Technology, and Logistics); the Assistant Secretary of the Army (Acquisition, Logistics and Technology); and the Assistant Secretary of the Navy (Research, Development and Acquisition) for both the Navy and the Marine Corps.
Major defense acquisition program managers, who can be either civilian or military, are tasked with developing and delivering new weapon systems while balancing factors that influence cost, schedule, and performance and ensuring that systems are high quality, supportable, reliable, and effective.

DOD's Approach to Reliability

According to DOD guidance, reliability is the probability that an item will perform a required function under stated conditions for a specified period of time. DOD's acquisition environment has changed over time, and this has affected the way the department addresses reliability. Until the late 1990s, DOD's goal was to achieve good reliability by focusing on specific reliability engineering tasks during design and manufacturing, and on early testing to prevent, detect, and correct design deficiencies. In the late 1990s, in response to various National Defense Authorization Acts (NDAA), DOD implemented certain acquisition reforms, eliminating and consolidating acquisition functions and reducing the number of personnel assigned to the remaining functions. According to the Defense Science Board Task Force on Developmental Test & Evaluation, these reforms altered several aspects of the military acquisition process and DOD's acquisition workforce. As a result, DOD lost experienced acquisition management and technical personnel. DOD officials stated this loss included reliability personnel who contributed to developmental testing and evaluation. DOD also canceled the Military Standard pertaining to reliability at this time. DOD officials explained that, after acquisition reform in the late 1990s, the department shifted much of the responsibility for reliability issues to contractors, and government personnel primarily focused on how systems performed during operational tests at the end of their development programs. In the mid to late 2000s, Congress and DOD took actions to increase the focus on reliability engineering during weapon system design and development. Both Congress and DOD took steps to elevate the importance of reliability, which has continued through 2019. Figure 3 depicts selected laws related to reliability and DOD reliability efforts over time.

Impacts of Poor Reliability on Warfighters

Poor reliability can negatively affect warfighters through low operational availability; that is, the amount of time a system is available to execute its mission. For example, the MV-22 aircraft was less reliable than intended and required many more spare parts than expected. When the Marine Corps deployed to Iraq, MV-22 maintainers had to cannibalize parts from some MV-22s to keep other MV-22s flying, and as a result, the Marine Corps had fewer aircraft available to fly missions.

Impacts of Poor Reliability on Operating and Support Costs

Reliability can significantly influence a weapon system's operating and support costs, which we have previously reported account for approximately 70 percent of a weapon system's total life-cycle cost. Operating and support costs are a reflection of how programs achieve operational availability for weapon systems. Programs can achieve operational availability by building highly reliable weapon systems or, if the systems are not highly reliable, by supporting them with an extensive logistics system that can ensure spare parts and other support items are available when needed.
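The tradeoff between built-in reliability and logistics support can be made concrete with the standard operational availability relation, shown in the sketch below. This is a textbook formulation rather than one drawn from DOD guidance, and the hour values are hypothetical.

    def operational_availability(mean_time_between_failures, mean_downtime_per_failure):
        # Fraction of time the system is up: uptime / (uptime + downtime).
        return mean_time_between_failures / (
            mean_time_between_failures + mean_downtime_per_failure
        )

    # A highly reliable design: failures are rare, so availability stays high
    # even without a fast repair pipeline (hypothetical hours).
    print(operational_availability(500, 25))  # ~0.95

    # A less reliable design reaches the same availability only if an
    # extensive logistics system drives downtime per failure way down.
    print(operational_availability(100, 5))   # ~0.95

In other words, the second design delivers the same availability only by paying for spares, maintainers, and repair infrastructure, which is why poor reliability shows up in operating and support costs.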
DOD has previously reported that deficiencies in DOD weapon systems—such as high failure rates and an inability to make significant improvements in reliability—have historically limited program performance and increased operating and support costs.

Impacts of Poor Reliability on Commercial Companies

In the commercial world, the manufacturer carries most of the risks that would result from developing a product with poor reliability. Such risks include increased warranty expenses that decrease profits. For example, reliability personnel from Ford, Cummins, and Thermo Fisher Scientific explained that more reliable products cost their companies less because they do not have to dedicate as many resources to fixing systems that fail and generate warranty claims. In addition to increased costs, poor reliability can also negatively influence a company's reputation. Ford representatives said that failures and product recalls are not just financial costs; recalls are highly publicized. A Thermo Fisher Scientific product manager explained that a customer's bad experience can be shared in the media and negatively influence a company's reputation. This may alter future buying behavior, especially in industries with relatively small customer bases in closely linked professional communities. This person shared a prior experience at a different company, where a design risk was identified during development. Instead of addressing the risk effectively, the company ran a standard cycle test to prove or disprove the risk. However, the test did not apply the stress necessary to cause the failure. The product was released to the market based on this successful but inadequate test. In the field, the components failed, and the company had to remove the product from the market. This damaged the company's reputation and sales. We have previously reported that poor reliability is a concern for commercial companies because their customers demand products that work, meaning products that are reliable and do not experience failure, and the companies must develop and produce high-quality products to sustain their competitive position in the marketplace.

Commercial Companies Proactively Address Reliability

In the commercial sector, reliability engineers told us their companies proactively address reliability from the beginning of the development process. We reviewed documentation from these companies and the 2019 Reliability and Maintainability Symposium and found engineers strive to identify reliability issues at the component and subsystem level early in the development process to avoid expensive rework after producing an entire system. We identified the following key practices in the commercial sector:

leveraging reliability engineers early and often;

establishing realistic reliability requirements (for example, not expecting a product to operate twice as long as its predecessor before failing);

emphasizing reliability with their suppliers; and

employing reliability engineering activities to improve a system's design throughout development.

Figure 4 shows some of the activities involved with these key practices.

Leverage Reliability Engineers Early and Often

We found commercial companies in our review include reliability engineers as part of their development teams. In this role, reliability engineers implement reliability tools and methods that integrate statistics, physics, and engineering principles to help develop a reliable product.
For example, HBM Prenscia identified that reliability engineers from several commercial companies said it was important to initiate their assessments early in the development life cycle, when there is the greatest opportunity to influence product design. According to leading reliability engineers, engineering activities can add value to decision-making by providing direction and feedback that help development teams refine designs, leading to more reliable and cost-effective systems. Researchers have reported that reliability engineers should be empowered to influence decisions, such as delaying the overall project schedule or negotiating for more resources when necessary. In addition, our analysis of reliability engineers' documentation from the Reliability and Maintainability Symposium and commercial companies found it important that management provide sufficient resources and time dedicated specifically to improving reliability by discovering failures, implementing corrective actions, and verifying their effectiveness. Our analysis found that cost and schedule constraints can negatively influence reliability testing, which can limit development teams' ability to discover failures and improve designs through corrective actions. Our analysis of documentation from the symposium also highlighted the importance of having experienced reliability engineers. For example, Ford representatives told us they have a dedicated reliability engineering community that coaches the members of the company's different product development teams. Ford's reliability engineers said they focus on teaching development team members to ask the right questions at the right point in time with the right people in the room.

Establish Realistic Reliability Requirements Based on Proven Technologies

We found companies in our review emphasize that reliability requirements should be realistic, be based on proven technologies, and reflect customer usage and the operating environment. To determine the feasibility of meeting a requirement, reliability engineers we spoke with at Cummins and Thermo Fisher Scientific recommend conducting comparative analysis with historical data and assessing risk due to new, unique, or difficult technology. In addition, an independent reliability engineer with over 40 years of experience told us programs should provide justifications for how reliability requirements were established to demonstrate they are within the realm of technological possibility. If a reliability requirement turns out not to be technically feasible, it could have broad implications for the intended mission, life-cycle costs, and other aspects of the system. We have previously reported on the importance of making informed trade-offs when considering requirements to reduce program risk or total ownership costs. HBM Prenscia representatives told us the commercial companies they work with regularly make trade-offs involving capability, reliability, and cost requirements. Reliability representatives at Ford told us it is important to have the right people involved in these trade-off decisions, and that they work with user representatives and reliability engineers to define their systems' reliability requirements.

Emphasize Reliability with Suppliers

Systems produced by commercial companies in our review include parts or components produced by suppliers, and reliability engineers repeatedly told us the reliability of those parts or components directly impacts the reliability of the overall system.
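The sketch below illustrates why this is so, using the textbook series-reliability model rather than data from this review: when a system needs all of its components to work, the component reliabilities multiply, so even small supplier shortfalls compound across a complex system. The part counts and reliability values are hypothetical.

    def series_system_reliability(component_reliabilities):
        # If every component must work for the system to work, the
        # component reliabilities multiply.
        product = 1.0
        for r in component_reliabilities:
            product *= r
        return product

    # 100 supplier-provided parts, each 99.9 percent reliable (hypothetical):
    print(series_system_reliability([0.999] * 100))  # ~0.905

    # The same 100 parts at 99 percent each drop the system below 37 percent:
    print(series_system_reliability([0.99] * 100))   # ~0.366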
According to a leading reliability engineer, vendor quality can affect a part's reliability, so it is critical that the reliability of vendors' parts be evaluated before the parts are approved for use. To emphasize reliability with suppliers, commercial companies in our review engage with suppliers early, clearly specify requirements with the supplier, and evaluate and monitor the supplier. Cummins representatives stated that engaging the supplier early is critical. They explained that they engage the supplier early, during concept development, and ask the supplier to demonstrate it can meet requirements. According to Cummins representatives, this is to ensure the supplier is able to meet quality standards and to ensure there is enough lead time and testing of components. Reliability engineers at the Reliability and Maintainability Symposium also emphasized that reliability requirements must be clearly specified with suppliers, and that product teams must actively monitor suppliers and assess their deliverables. Cummins representatives explained their engineers work directly with the supplier and hold it responsible for meeting reliability requirements. Ford representatives told us they evaluate and monitor the supplier to ensure the components it is providing are reliable. For example, they visit their suppliers' testing facilities and evaluate their testing programs, focusing specifically on their failure analysis and reliability activities. We have previously reported that leading commercial companies use disciplined quality management practices to hold suppliers accountable for high-quality parts through such activities as regular supplier audits and performance evaluations. A Thermo Fisher Scientific product manager provided a scenario in which relying on an external supplier's quality assurances would be insufficient. For example, a compressor is a critical, and commonly outsourced, component in complex industrial equipment. The product manager recommended in-house testing for critical components like a compressor, rather than relying on a supplier's testing that may not factor in real-world operating conditions, to avoid finding a failure after the product is brought to market. Post-sale failures result in dissatisfied customers, reputation damage, warranty claims, and similar issues. The Thermo Fisher Scientific product manager said that, in some cases, a company should establish a dedicated test facility for vital outsourced components provided by suppliers.

Employ Reliability Engineering Activities to Improve a System's Design throughout Development

Based on our review of commercial sector practices, we found companies use reliability engineering activities to identify potential product failures and their causes. They also use these activities to improve a system's design early and often throughout development to avoid surprises that lead to expensive rework or excessive repairs after integrating components and subsystems. For example, HBM Prenscia representatives told us that failures should be identified early, and that identification should be viewed as an opportunity to improve the design and make the product better. According to leading reliability engineers, the earlier changes are made to designs, the less costly they are to the program. It is expensive, time consuming, and risky to make changes late in development, as late changes jeopardize product reliability.
The commercial company representatives we spoke with also emphasized the need to conduct reliability engineering activities iteratively until the design is optimized. For example, HBM Prenscia has identified that a common mistake is establishing a reliability plan but not actively using it throughout development. Reliability engineers use various reliability engineering activities to increase system reliability, and generally refer to these activities as design for reliability tools. These tools can be tailored to meet the specific needs of a particular development project, and they can complement one another and increase reliability prior to any testing. These tools can help identify how long a part or component will work properly, how a part or component's failure will affect a system, and what actions are needed to correct failures. See table 2 for some examples of design for reliability tools that can be used to help meet reliability goals. We have previously reported that leading commercial companies use a knowledge-based development process that enables decision makers to be reasonably certain that product quality, reliability, and timeliness are assured. Our analysis of documentation from reliability engineers found that reliability engineering activities should be integrated into the product development process, and that their outputs should be reviewed at development milestones. These reviews can help ensure that reliability is a robust process rather than a paper exercise by providing an opportunity to assess data from reliability analysis or testing. For example, Cummins incorporates reliability reviews into its product development processes to ensure products meet reliability goals prior to moving to the next phase of development. This helps ensure the company is on track to fulfill its reliability commitments and will be able to deliver the promised product reliability to customers. The leading commercial practices we reviewed highlight the importance of consistently collecting, sharing, and analyzing data from reliability engineering activities to inform development efforts. Commercial companies we spoke with recognized the value of reliability data. For example, Cummins representatives stated they capture reliability data and share it across different product development teams to help inform estimates of reliability for new product development efforts. In addition, Cummins representatives noted that they are moving to an interactive database that personnel throughout the entire company can access. Similarly, HBM Prenscia representatives told us that failures and lessons learned from previous projects should be captured and shared within a company, and that doing so could help inform future product development efforts.

Selected Major Defense Acquisition Programs Did Not Consistently Reflect Key Reliability Practices

We reviewed seven major defense acquisition programs and found that they often addressed reliability reactively, after identifying issues later in development. As shown below, these programs did not consistently reflect key practices we identified in the commercial sector, and instead prioritized other activities intended to have positive acquisition cost and schedule impacts. However, DOD officials noted that there has recently been a greater emphasis on reliability, and the three programs that started development in 2012 and 2014 reflected more of the key practices than the older programs.
See figure 5, which notes a distinction between commercial companies' suppliers and DOD contractors. For more detailed information on each program, see appendix I.

Two of the Seven Selected Programs Did Not Leverage Government Reliability Engineers in Decision Making Early

The Expeditionary Fighting Vehicle (EFV) and F-22 programs did not involve reliability engineers early during system development. Instead, these programs leveraged engineers after reliability problems arose, including after they integrated components and subsystems and during system-level testing. At the end of system development, the programs brought in additional engineers and established more concerted reliability growth efforts. In one example, the EFV program did not have an overall systems engineer. Marine Corps acquisition officials stated that reliability was not a priority during the original system development process, and we have previously reported the program was instead focused on achieving other performance parameters, including water speed, survivability, and lethality. Prime contractor representatives identified, as a root cause of problems discovered late in the development process, some of their design engineers who lacked experience and did not comply with engineering standards. We also reported that the lack of early systems engineering discipline and knowledge undermined the EFV program's ability to develop informed and reasonable reliability requirements, delayed the identification of potential failures until integration, and contributed to poor vehicle reliability. In addition to frequent hydraulic system failures, leaks, and pressure problems, the EFV also suffered main computer failures that froze steering while operating in water. As we have previously reported, the EFV program was subsequently restructured. The program office hired additional engineers and consulted with Army reliability engineers to institute a reliability growth program. This program was intended to mitigate previously identified vehicle design issues related to reliability and other risks before proceeding into a second development and demonstration phase. However, the EFV program never fully realized the benefits of its new reliability approach; less than 3 years after restarting development, it was canceled due to continuing technology problems, development delays, and affordability concerns. For the F-22 program, officials stated that at points during development the program did not have a leadership position focused on reliability, and the official who oversaw reliability was also responsible for supply chain management. The officials noted that at the time these were not focus areas because the Air Force expected the contractor to conduct the needed reliability engineering. However, as we reported in 2004, as early as low-rate initial production the Air Force identified 68 parts that had a high rate of failure and needed to be removed or replaced, requiring additional contractor work. We also reported that the F-22 canopy experienced failures during testing, achieving only about 15 percent of its expected lifetime. In 2014, we reported that later reliability maturation projects intended to address reliability deficiencies had a positive effect on availability over time, but as of 2018 the F-22 still had not met its availability target.
Four of the Seven Selected Programs Initially Pursued Unrealistic Operational Requirements for Reliability

As we have found in our prior reports as well as in this review, the EFV, F-22, F-35, and V-22 programs set unrealistic operational requirements for reliability. These requirements were, therefore, unachievable during development and before fielding the systems to warfighters. As we have previously reported, when programs overpromise a weapon's prospective performance and deliver systems that cannot achieve their requirements, such as reliability goals, the warfighter receives less capability than originally promised. In one example, as we reported in 2019, more than 11 years after the start of F-35 production, none of the three aircraft variants (Air Force, Marine Corps, and Navy) had met the minimum targets for two of the program's five reliability metrics. These include mean flight hours between part removals for replacement and mean flight hours between critical failures. We found that only the Navy variant had achieved the minimum target for a third goal, mean flight hours between maintenance events. As we reported, while the program has instituted an effort intended to improve reliability, the effort does not align improvement projects with the F-35's reliability requirements. That is, the reliability improvement projects being funded may not improve the F-35's performance against its reliability metrics. Ultimately, the program does not expect to achieve the unmet reliability metrics by full aircraft maturity, and program officials have acknowledged that the requirements should be reevaluated. As a result, the warfighter may not receive an aircraft that is as reliable as was expected. In a review of the V-22 program, DOD found that the program integrated complex technologies and unprecedented capabilities into its weapon system without accounting for unknown reliability risks. Specifically, these capabilities included a conceptually new design and multiple service and mission needs. However, officials stated that the program derived its reliability requirements from antecedent helicopters, systems that were not representative of the V-22 given its increased complexity. With a limited understanding of the V-22's mission profile, program officials stated that they also underestimated the amount of time the system would be used in helicopter mode and its operating time on the ground. Subsequently, when the Marine Corps variant of the V-22—the MV-22—was deployed in Iraq from 2007 to 2009, a number of components experienced high rates of failure, affecting systems such as the engines and engine housing. This situation, combined with an immature parts supply chain, reduced the system's availability significantly below minimum levels. At the time, as we reported in May 2009, the MV-22 had a stated minimum mission capability rate of 82 percent, but the three MV-22 squadrons in Iraq demonstrated an average of 62 percent. The development and integration of new technologies on the F-22—stealth, supersonics, and integrated avionics—were critical to achieving operational success, but also presented significant reliability risks. Officials told us that the F-22 was initially expected to cost less to acquire and operate than one of its predecessors, the F-15, and be more reliable as well. However, they also stated this was an unrealistic expectation.
We have previously reported that the immaturity of technologies at the start of and throughout development weakens a system's ability to achieve reliability requirements. Since 2005, when full rate production of the F-22 began, the program has made substantial additional investments in increasing the system's reliability through various improvement programs. But the program also changed its mean time between maintenance reliability requirement to an operational availability metric, a target that as of 2018 it had yet to meet and may need to reevaluate, according to program officials. If the F-22 cannot achieve its current reliability requirement, warfighters will have to execute their missions with a less capable aircraft than expected.

Four of the Seven Selected Programs Did Not Effectively Emphasize Reliability with Contractors

The AMPV, EFV, F-35, and V-22 programs did not effectively emphasize reliability with DOD contractors. Specifically, according to DOD, the AMPV, EFV, and V-22 did not effectively incentivize reliability with the contractor, and one program, the F-35, did not include all of the program's reliability metrics in the contract. Each F-35 aircraft variant is measured against five reliability metrics, only two of which are included in the contract. Contractors are not responsible for achieving reliability requirements if programs do not include them in contracts. As of August 2018, two of the F-35's three variants had not met minimum targets for any of the three metrics that are not in the contract. The last variant (Navy) had met the minimum target for only one of the three metrics. As we have previously reported, the warfighter may have to accept F-35 aircraft that are less reliable and more costly than originally expected. As we have reported, the F-35 program tried to encourage the aircraft's manufacturer to improve reliability through an incentive fee in sustainment contracts. These contracts for sustainment services included incentives for meeting aircraft availability. Reliability of parts is one of the factors that influences aircraft availability, because broken parts prevent aircraft from flying. Program officials told us they hoped the incentive fee in the sustainment contract would incentivize the contractor to invest in and implement additional reliability activities, which would help improve aircraft availability, but according to the program office, the incentive has not been effective. Program officials told us the contractor has not pursued the incentive fee in the sustainment contract through efforts to improve aircraft reliability because it would have to invest significant resources to design and incorporate changes into production aircraft in order to do so. F-35 aircraft, especially early production aircraft, continue to face challenges related to parts that are failing more often than planned and are in short supply. For example, we have previously reported that DOD found the special coating on the F-35 canopy that helps maintain the aircraft's stealth failed more frequently than expected and that the manufacturer could not produce enough canopies to meet demand, ultimately degrading system capability. According to program officials, to ensure that reliability growth was on track, the AMPV program offered an incentive fee of up to $16 million if the contractor could demonstrate at least 80 percent of the system's reliability before low-rate production. But officials stated that the AMPV contractor did not achieve the goal.
The AMPV was a derivative system of the Army's Bradley Fighting Vehicle with an accelerated development schedule, and officials stated that for this reason the contractor assumed the government would accept much of the Bradley's initial design and changes to the AMPV's performance resulting from legacy reliability issues. As a result of these expectations, officials stated that the contractor did not put enough resources toward the work that was eventually needed to improve reliability and understaffed its reliability team in this area.

Five of the Seven Selected Programs Deferred Key Reliability Engineering Activities until Later in Development

The AMPV, EFV, F-22, F-35, and V-22 programs deferred key reliability engineering activities, intended to improve system designs, until later in development. As a result, they missed opportunities to identify, understand, and mitigate reliability issues early in the development process. After realizing reliability shortfalls late in development, some programs initiated expensive redesign efforts that continued well into production and deployment, while others accepted degraded performance. Based on our prior reporting, we found the EFV program did not implement a proactive reliability approach, which would have included identifying challenges early and designing reliability into the system in a cost-effective manner. Instead, the program used a test-fix-test approach that relied on identifying failure modes after the system-integration phase. Early in the acquisition process, officials noted in program documentation that the program had conducted little reliability growth planning before starting development, and officials stated that the EFV program did not plan for or conduct dedicated reliability testing. Then, the program prematurely conducted its critical design review, a key review during the development phase that confirms the system's design is stable and is expected to meet system performance requirements, before the EFV prototype's system-integration work was complete. The program did not have the time necessary to demonstrate design maturity as scheduled, and officials stated that they did not schedule corrective action periods long enough to allow for proper failure mitigation. As a result, during a 2006 operational assessment, the EFV demonstrated very low reliability and failed to complete amphibious, gunnery, and land mobility tests. F-22 program officials stated that many of the aircraft's components and subsystems had to be tested as part of an integrated system. This limited the discovery of reliability issues early in the development phase. DOD reliability experts told us programs should not use integrated system testing to demonstrate individual component reliability, and should instead use it to focus on how components work together and to identify more complex system failure modes. F-22 officials also stated the program office frequently continued with development and other testing before implementing corrective actions for critical reliability issues. As we have previously reported, the F-22 program started a program to improve its reliability in 2005, near the start of full rate production, to mitigate hundreds of known reliability issues deferred from earlier in development.
Nonetheless, we reported in 2012—nearly 3 years after DOD announced the end of F-22 production—that reliability deficiencies had increased support costs and continued to prevent the aircraft from meeting its reliability requirement. According to program officials, the Army selected a derivative of the Bradley Fighting Vehicle to meet the AMPV requirements, even though that vehicle's transmission had known reliability problems. According to AMPV program officials, the Army selected this vehicle because it had prioritized controlling costs and accelerating schedule. Program officials stated that the focus on cost and schedule caused the contractor to underestimate the necessary reliability work at the start of development and led to a backlog of test incident reports and deferred corrective actions. According to 2018 program documentation we reviewed, the AMPV's reliability growth did not track to targets during development, and the vehicle did not achieve its pre-production reliability goal. Moreover, some of the AMPV's deferred work may need to be addressed during a future corrective action period that could continue through fiscal year 2021.

DOD Acquisition Policy and Guidance Documents Identify, but Do Not Emphasize, Key Reliability Practices

Although there are differences between DOD and the commercial sector stemming from the statutory and regulatory structures that govern DOD's acquisition processes, DOD has had long-established policy and guidance at both the department and service level that recognize the four key reliability practices we found in the commercial sector. For example, the Defense Acquisition Guidebook encourages acquisition programs to involve reliability engineers early and often, and DOD Instruction 5000.02 identifies the need for establishing realistic reliability requirements. Additionally, the 2005 DOD Guide for Achieving Reliability, Availability, and Maintainability addresses the importance of emphasizing reliability with contractors, and the service-level policies at all three military departments establish the importance of reliability engineering activities. However, most of these documents cover a wide range of acquisition issues or many aspects of reliability engineering, and they do not specifically emphasize the four key practices we identified in our review of the commercial sector. For example, DOD Instruction 5000.02 is an overarching policy document covering the entire acquisition life cycle at a high level, from concept development to live fire test and evaluation, and only one section provides significant detail and direction on reliability. The service-level instructions and Defense Acquisition Guidebook similarly cover the entire acquisition life cycle, and reliability is one of dozens of characteristics addressed in each document. The DOD Guide for Achieving Reliability, Availability, and Maintainability is largely focused on achieving reliability, but the reliability proponents at OSD, the Army, and the Navy said the guide is not consistently used throughout DOD, noting that it was issued in 2005 and has not been updated since. DOD policy provides decision makers flexibility to tailor regulatory activities that acquisition programs perform when developing weapon systems. The process is inherently complex, and these decision makers must balance many factors when overseeing and executing the programs.
In the absence of an emphasis on the key reliability practices we identified, we found decision makers for the programs we reviewed prioritized other activities intended to have positive acquisition schedule and cost impacts. For example, AMPV program officials told us the program eliminated 7,500 miles of contractor reliability testing in order to proceed to the next development phase more quickly, believing that there would be sufficient time later to complete corrective actions. Recently, DOD has begun employing the Middle Tier Acquisition pathway—an alternative acquisition pathway with an objective of beginning production within 6 months and completing fielding within 5 years. This emphasis may encourage decision makers to prioritize activities that promise to reduce schedule. We found that for the programs we reviewed, however, such an approach can come at the expense of other activities, such as implementing effective reliability practices. DOD has recently taken steps that could introduce more balance when decision makers consider trade-offs between schedule and reliability. Specifically, Congress has passed legislation related to reliability, and DOD has highlighted the importance of one of the four key reliability practices we identified: emphasizing reliability with contractors. The NDAA for fiscal year 2018 included a provision requiring DOD program managers to include certain reliability requirements in weapon system engineering and manufacturing development and production contracts. In January 2019, the USD(A&S) implemented this provision by issuing a policy memorandum to Service Acquisition Executives and other DOD Directors echoing this key practice. However, USD(A&S) has not similarly emphasized the three other key reliability practices we identified in the commercial sector, nor have the Secretaries of the Air Force, Army, and Navy, who now have ultimate responsibility for most of DOD's major acquisition programs. Specifically, these senior leaders have not emphasized the value of leveraging reliability engineers early and often, establishing realistic reliability requirements, and employing reliability engineering activities to improve a system's design throughout development. As a result, it is less likely that acquisition programs will take the actions necessary to recognize and address potential reliability problems early in the development process. Without senior leadership emphasis on a broader range of key reliability practices, DOD runs the risk of delivering less reliable systems than promised to the warfighter and spending more than anticipated on rework and maintenance of major weapon systems. This risk is exacerbated in an environment where decision makers are striving to deliver systems in an accelerated manner.

Conclusions

The best opportunity to influence the reliability of a weapon system is early on, during the design of the system. Decisions and trade-offs made at that time can increase the weapon system's reliability, help warfighters execute their missions, and decrease operating costs for years to come. However, these decisions and trade-offs are not easy, as acquisition decision makers are tasked with managing competing priorities such as cost, schedule, and performance. Many of the DOD acquisition program examples in this report illustrate what can happen when reliability is not prioritized.
The programs often approached reliability in a reactive manner, discovered problems late in the development process, and then tried to fix them through costly and time-consuming rework. The programs did not consistently adhere to key practices we identified in the commercial sector: reliability engineers were not leveraged early in the development process, reliability requirements were not realistic, reliability was not emphasized with contractors, and reliability engineering activities were not utilized throughout design and development. Recent DOD actions have highlighted the importance of emphasizing reliability with contractors. DOD senior leaders can help improve reliability by highlighting the importance of the three other key reliability practices we identified in the commercial sector. In light of the current focus on accelerating the acquisition process, balancing the desire for speed with reliability considerations is critical. Given the delegation of acquisition decision authority to the military services, the Secretaries of the Air Force, Army, and Navy are in the best position to do so.

Recommendations for Executive Action

We are making a total of three recommendations: one each to the Air Force, the Army, and the Navy.

We recommend the Secretary of the Air Force issue policy emphasizing the following three key reliability practices when planning and executing acquisition programs: leveraging reliability engineers early and often, establishing realistic reliability requirements, and employing reliability engineering activities to improve a system's design throughout development. (Recommendation 1)

We recommend the Secretary of the Army issue policy emphasizing the following three key reliability practices when planning and executing acquisition programs: leveraging reliability engineers early and often, establishing realistic reliability requirements, and employing reliability engineering activities to improve a system's design throughout development. (Recommendation 2)

We recommend the Secretary of the Navy issue policy emphasizing the following three key reliability practices when planning and executing acquisition programs: leveraging reliability engineers early and often, establishing realistic reliability requirements, and employing reliability engineering activities to improve a system's design throughout development. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. DOD's written comments are reprinted in appendix II. DOD stated that the Air Force, Army, and Navy concur with our recommendations to their respective Departments. The comments also state that the Air Force and Navy plan to update their policies in response to our recommendations. As for the Army, the comments state that the Army Acquisition Executive will issue direction emphasizing the three key reliability practices and highlight an existing Army regulation focused on reliability engineering. In addition to the responses to our recommendations, DOD's written comments included technical comments that we addressed as appropriate. For example, we provided additional detail on an existing DOD policy, and clarified how a program engaged with a contractor. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or mackinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Key Characteristics of Selected Major Defense Acquisition Programs' Approach to Reliability

This appendix summarizes key characteristics of seven selected major defense acquisition programs' approach to reliability. The four key characteristics are categorized as:

did not leverage government reliability engineers in decision making early;

initially pursued unrealistic operational requirements for reliability;

did not effectively emphasize reliability with contractors; and

deferred key reliability engineering activities until later in development.

These summaries do not address all the reliability actions taken by each program; rather, they focus on key characteristics we identified in our review of commercial companies and associated deficiencies. See figure 6, which notes a distinction between commercial companies' suppliers and DOD contractors.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Nathan Tranquilli (Assistant Director), Julie A. Clark (Analyst-in-Charge), Lori Fields, Laura Greifner, Brendan K. Orino, LeAnna Parkey, Christine Pecora, Andrew N. Powell, Timothy Saunders, and Michael J. Sullivan made key contributions to this report.
Why GAO Did This Study

DOD invests tens of billions of dollars each year in major defense acquisition programs, designing and developing technologically advanced weapon systems that warfighters expect will meet specific performance requirements, including reliability requirements. Systems that are not reliable make it more difficult for warfighters to perform their missions. GAO was asked to examine DOD weapon system reliability. This report addresses (1) how selected companies in the commercial sector address reliability, (2) how selected DOD acquisition programs addressed reliability, and (3) the extent to which DOD leadership has highlighted key reliability practices. GAO collected information on leading commercial practices at the 2019 Reliability and Maintainability Symposium and from four commercial companies known for delivering reliable products. GAO also assessed how seven DOD acquisition programs—both older and newer, and representing all the military services—addressed reliability; reviewed key documents and interviewed knowledgeable officials; and reviewed reliability-related guidance and policy from senior DOD leaders.

What GAO Found

The commercial companies GAO reviewed proactively address reliability. They strive to identify reliability issues at the component level early in the development process to avoid expensive rework after producing an entire system. GAO found these companies focus on the following key practices:

1. Leveraging reliability engineers early and often

2. Establishing realistic reliability requirements

3. Emphasizing reliability with their suppliers

4. Employing reliability engineering activities to improve a system's design throughout development

GAO found that the seven Department of Defense (DOD) acquisition programs it reviewed did not consistently adhere to these key practices (see figure). These programs often prioritized schedule and cost over incorporating the key reliability practices, and these systems generally were not as reliable as promised. In 2019, DOD highlighted in a policy memorandum the importance of emphasizing reliability with contractors. However, the other three key practices have not been similarly highlighted. DOD has taken steps to accelerate weapon system development, and decision-making authority has been delegated to the military services. In an environment emphasizing speed, without senior leadership focus on a broader range of key reliability practices, DOD runs the risk of delivering less reliable systems than promised to the warfighter and spending more than anticipated on rework and maintenance of major weapon systems.

What GAO Recommends

GAO recommends the Secretaries of the Air Force, Army, and Navy highlight the importance of three key reliability practices: leveraging reliability engineers, establishing realistic reliability requirements, and employing reliability engineering activities to improve a system's design throughout development. DOD agreed with GAO's recommendations.
Background

Medical Center Performance: SAIL

The Veterans Health Administration (VHA) began using the Strategic Analytics for Improvement and Learning (SAIL) system in 2012 to measure, evaluate, and benchmark the quality, efficiency, and productivity of medical centers, and to highlight successful strategies of high-performing medical centers. SAIL includes 29 performance measures (27 quality measures and two measures of overall efficiency and capacity) in areas such as acute-care mortality, access to care, and employee satisfaction. (See appendix I for the full list of SAIL measures.) SAIL is a diagnostic tool that allows VHA to assess medical centers' performance relative to their peers and determine how much absolute improvement they have made in the past year based on relevant clinical data. VHA publishes SAIL results quarterly to provide information to network and medical center officials regarding improvement opportunities at each medical center. SAIL data are also available on VHA's intranet site. VHA staff can view a wide range of detailed information about their medical center, compare performance to other medical centers, and (for those staff with medical-record-level access) view information on patients with a particular medical condition.

Network and Medical Center Director Performance Appraisal Process

VHA conducts annual performance appraisals for all network and medical center officials. The appraisal process begins when officials from VHA's office of Workforce Management and Consulting transmit a performance plan template to the network directors. The template identifies performance priorities and expectations for the upcoming appraisal period and criteria to be used to measure performance outcomes and ratings for each performance element. Network directors use the template to develop performance plans that include targets and time frames—the schedule of when performance targets are to be achieved during the year—with each of the medical center directors in their network. According to VA policy, performance plans resulting from the template should be finalized within 30 days of the start of the appraisal period. After expectations have been set for a medical center director, the director, in turn, sets performance expectations for the department heads within the medical center.

VHA Primarily Uses SAIL and Its Associated Star Ratings to Assess and Manage Medical Center Performance

VHA Primarily Uses SAIL and Its Star Ratings to Assess Medical Center Performance

VHA officials told us they primarily use the SAIL system to assess the performance of medical centers. Specifically, VHA uses SAIL data to calculate and assign each medical center an annual star rating of 1 (lowest) to 5 (highest) stars as an assessment of overall quality. SAIL documentation states that the goal of the star ratings is for low-performing medical centers to learn from the best practices of high-performing ones, although all medical centers have the opportunity to improve. VHA applies a weighting and calculation methodology to each of SAIL's 27 quality measures to determine a single composite score for each medical center. The scores are then ranked and grouped by percentile, and the associated medical centers are assigned initial star ratings based on their relative ranking. For example, the lowest performing 10 percent of medical centers as determined by SAIL's 27 quality measures are assigned a 1-star rating, and the next lowest performing 20 percent of medical centers are assigned a 2-star rating. (See fig. 1.)
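The sketch below illustrates the percentile-based assignment just described. Only the bottom two bands (10 percent for 1 star and the next 20 percent for 2 stars) are stated here; the remaining band boundaries, as well as the center names and composite scores, are assumptions added for illustration.

    def assign_star_ratings(composite_scores):
        # Rank medical centers from lowest to highest composite score, then
        # assign stars by percentile band. The 0.10 and 0.30 cutoffs follow
        # the description above; the upper bands are illustrative assumptions.
        ranked = sorted(composite_scores, key=composite_scores.get)
        n = len(ranked)
        bands = [(0.10, 1), (0.30, 2), (0.70, 3), (0.90, 4), (1.00, 5)]
        ratings = {}
        for i, center in enumerate(ranked):
            percentile = (i + 1) / n
            ratings[center] = next(stars for cutoff, stars in bands
                                   if percentile <= cutoff)
        return ratings

    # Hypothetical composite scores for three centers:
    scores = {"center_a": 38.2, "center_b": 61.5, "center_c": 47.0}
    print(assign_star_ratings(scores))  # {'center_a': 3, 'center_c': 3, 'center_b': 5}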
After the initial star rating is determined by SAIL measures each year, VHA officials can make changes to the rating if a medical center meets certain conditions. For example, SAIL documentation states that a medical center that initially received a 5-star rating will be reduced to a 4-star rating if it has a high mortality rate. In addition, VHA officials told us they can decide to increase a 1-star medical center’s rating to a 2-star rating if the medical center outperforms the bottom 10 percent of U.S. hospitals in certain criteria as measured by external systems such as the Centers for Medicare & Medicaid Services’ Hospital Compare website. We found that the percentage of medical centers that received a final 1-star rating ranged from 4 percent to 10 percent from fiscal years 2013 through 2018. VHA officials publish the final annual star ratings for each medical center both internally and externally. See figure 2 for the number of medical centers that received each final star rating for fiscal years 2013 through 2018.

Although the specific medical centers within each star-rating category could change from year to year, we found that the fiscal year 2018 star ratings for 110 of the 127 medical centers (87 percent) that received star ratings in fiscal year 2013 did not differ by more than 1 star from their fiscal year 2013 star rating. For example, eight of the 10 1-star medical centers in fiscal year 2013 received either a 1- or 2-star rating in fiscal year 2018. (See fig. 3.) In addition, 44 of the 127 medical centers had the same rating in fiscal year 2018 as they did in fiscal year 2013. At the end of the 6-year period of our review, only one medical center differed by more than 2 stars from its fiscal year 2013 star rating, decreasing from 5 stars to 2.

VHA Uses Tools from the SAIL System to Manage Medical Center Performance

VHA officials told us they use SAIL tools on VHA’s intranet when conducting site visits to medical centers and for other performance management efforts. The SAIL system includes several performance management tools that present data in greater detail than SAIL’s quarterly data release and enable officials to identify areas for improvement. VHA, network, and medical center officials we interviewed mentioned three in particular:

Opportunity matrix – This matrix shows how a medical center ranks compared to others on all SAIL performance measures based on quarterly data. Each performance measure is labeled by quintile, with the first quintile comprising the top 20 percent of medical centers and the fifth quintile comprising the bottom 20 percent. Officials told us they use this tool to focus improvement efforts by examining specific measures for which a medical center needs improvement.

Geometric control charts – These charts, referred to as G-Charts, allow officials to monitor on a daily basis what VHA considers to be rare occurrences. For example, one G-Chart allows VHA to monitor patient safety indicators that contain information on occurrences of specific medical conditions, such as cardiac arrest, pneumonia, and sepsis. Medical center officials can use these charts to examine the occurrence of events over time, analyze patient-level data, and quickly detect changes in the frequency of these events. Other events that the charts allow VHA to monitor include inpatient complications and deaths.
Symphony action triggers – Symphony is an online tool that tracks over 100 performance measures daily, related to medical center access, outcomes, and productivity, and includes an early warning system to notify network and medical center officials of results that require attention. Officials can use Symphony to view patient-level information to understand the details of particular events and determine solutions.

VHA officials also told us that they use these tools to manage medical center performance as part of their ongoing support of lower performing medical centers. Specifically, officials who oversee SAIL identify lower performing medical centers using SAIL and conduct site visits as part of VHA’s Strategic Action for Transformation initiative. This initiative utilizes a four-tiered, escalating approach based on the severity of concern at a medical center. In order of increasing severity, the four levels are watch, high-risk, critical, and VA receivership. One-star medical centers are automatically placed on the high-risk list, along with some 2-star medical centers with decreasing performance. If performance continues to decrease, medical centers are considered critical, and can be escalated to VA “receivership,” at which point VHA officials may step in to correct ongoing problems and replace network or medical center leadership officials. As of January 2019, VHA officials told us no medical center had entered VA receivership since the initiative began. VHA officials told us that they may also conduct site visits or hold calls with medical center leadership by request, although their focus is on lower performing medical centers.

In addition to the SAIL tools, which report data on performance measures across the entire medical center, VHA officials told us that they may also use other data sources as part of medical center performance management. For example, several program offices—such as primary care, mental health, and surgery—have dashboards that track performance and quality of care specific to those offices. In addition, VA’s Inpatient Evaluation Center focuses on mortality data, including estimates of expected patient mortality.

VHA’s Appraisal Process for Assessing Network and Medical Center Directors’ Performance Relies Heavily on Medical Center Performance Information

We found that VHA relies heavily on medical center performance information to assess the performance of its network and medical center directors. VA’s Senior Executive Service Part V. Performance Appraisal System handbook states that directors are assessed using five appraisal elements established by the Office of Personnel Management: (1) Results Driven, (2) Leading People, (3) Leading Change, (4) Business Acumen, and (5) Building Coalitions. The five elements are included in VHA’s performance plan template, which forms the basis for network and medical center directors’ performance plans. The handbook designates a relative weight for each element used to calculate a director’s rating. (See fig. 4.) The handbook states that a director is rated in each element on a scale of level 1 to level 5, with 5 being the highest level. Each rating is then multiplied by the weight for its corresponding element, and the results are added to generate a summary score. According to the handbook, the summary score is used to identify potential recipients of pay increases and monetary awards.
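The weighted summary-score calculation the handbook describes can be sketched in a few lines of Python. The 40 percent weight for Results Driven is stated in this report; the other four weights and the sample level ratings are hypothetical placeholders, not the handbook's actual values.

# Weights: Results Driven's 40 percent is from this report; the other four
# weights are hypothetical placeholders, not the handbook's values.
ELEMENT_WEIGHTS = {
    "Results Driven": 0.40,
    "Leading People": 0.15,
    "Leading Change": 0.15,
    "Business Acumen": 0.15,
    "Building Coalitions": 0.15,
}

def summary_score(element_ratings):
    """Compute a summary score from per-element ratings (levels 1 through 5)."""
    return sum(ELEMENT_WEIGHTS[element] * level
               for element, level in element_ratings.items())

ratings = {"Results Driven": 3, "Leading People": 4, "Leading Change": 4,
           "Business Acumen": 5, "Building Coalitions": 4}
print(summary_score(ratings))  # 0.40*3 + 0.15*(4 + 4 + 5 + 4) = 3.75

Under these assumed weights, a director rated at level 3 on Results Driven and levels 4 to 5 elsewhere would receive a summary score of 3.75, showing how heavily the Results Driven element dominates the result.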
The most heavily weighted appraisal element in the handbook, Results Driven, represents 40 percent of a director’s overall performance and is based entirely on medical center performance information. Specifically, for fiscal year 2018, SAIL results comprised 25 percent of the overall rating and included measures such as patient mortality, length of stay, and readmissions. Other medical center performance information comprised the remaining 15 percent of the overall rating. (See fig. 5.) Medical center performance information is also used when assessing directors’ performance across other appraisal elements. For example, in VHA’s fiscal year 2018 performance plan template, the Leading Change appraisal element included the implementation of suicide prevention initiatives, using medical center performance in the SAIL mental health domain as criteria. In addition, the Leading People element included performance information from VA’s All Employees’ Survey, which included medical center staff.

Although medical center performance information plays a prominent role in the performance assessment process, VHA officials told us that there are other considerations that may result in medical center directors receiving a rating that is higher than that indicated by the star rating of the medical center. For example, VHA officials told us that when calculating a medical center director’s rating for the Results Driven element, they consider whether the medical center’s overall performance improved or deteriorated compared to the previous year’s performance. These officials also stated that they take into consideration the length of a director’s tenure, such as cases where a director started at a low-performing medical center partway through the rating year and would not have had sufficient time to improve the medical center’s performance from the previous year.

In our review, we also found that the release of VHA’s performance plan template is often delayed, which can limit its effectiveness as a means of assessing directors’ performance. Specifically, in fiscal years 2016, 2017, and 2018, VHA released the performance plan template to network directors in November or December, close to the end of the first quarter of the performance appraisal period. Directors at two of the medical centers in our review expressed frustration with the delay and not having a full year to meet performance expectations, but directors at the two other medical centers stated that they find the process clear and are able to anticipate performance expectations. Officials from VHA’s office of Workforce Management and Consulting, which sends out the template, told us that they have been working in recent years to shorten the template’s development and review process within VHA; however, the delays may continue because of late changes from VA or the Office of Personnel Management.

In our December 2016 review of human resource management practices at VHA, we also reported on delays in the release of VHA’s performance plan template. We reported that the delay limited medical center officials’ ability to use the template as a tool to align expectations and performance, which is inconsistent with leading practices on employee performance management. We recommended that VHA accelerate its efforts to develop a modern, credible, and effective performance management system, including the timely release of the performance plan template. VA partially concurred with our recommendation and has made limited progress in implementing it.
As of December 2018, this recommendation remains open and we reiterate the need for VHA to implement it.

VHA Has Not Assessed for Implementation Previous Recommendations Made to Ensure SAIL’s Effectiveness in Assessing Medical Center Performance

Although SAIL is used in the assessment of both medical centers’ and directors’ performance, VHA officials have not assessed and implemented as appropriate the recommendations from previous evaluations of the SAIL system to ensure its effectiveness. This is inconsistent with federal standards for internal control, which state that management should remediate identified internal control deficiencies on a timely basis. This remediation may include assessing the results of reviews to determine appropriate actions, and, once decisions are made, completing and documenting corrective actions on a timely basis. VHA officials told us that since it was established in 2012, there have been two evaluations of SAIL:

The first evaluation was an internal review, which VHA officials told us was completed in February 2014 and submitted to the director of VHA’s Office of Analytics and Business Intelligence and reviewed by the then Under Secretary for Health and Principal Deputy Under Secretary for Health. The internal review, which had 22 recommendations, found issues related to the validity and reliability of SAIL as a tool for measuring performance and fostering accountability. For example, it included a recommendation that VHA no longer use aggregate star ratings for accountability, or for presenting medical center quality and efficiency information to stakeholders. Rather, the recommendation called for VHA to work to identify valid and reliable approaches for presenting this information.

The second evaluation was an external review, which VHA officials told us was submitted to the Office of the Principal Deputy Under Secretary for Health in April 2015. The external review included 19 recommendations for short- and long-term improvements to SAIL, such as a recommendation to examine the potential for misclassifying medical centers—i.e., assigning star ratings that do not reflect medical centers’ pattern of performance on the underlying measures. The review noted two ways such misclassification could occur: (1) two medical centers with summary scores that are close together could receive different star ratings, or (2) two medical centers with widely different summary scores could receive the same star rating.

The findings of the previous SAIL evaluations are similar to concerns that officials from the four networks and four medical centers in our review expressed about SAIL’s effectiveness, including whether the star ratings were an accurate reflection of medical center performance. For example:

Officials from one medical center told us that, because the mortality measure has a higher weight relative to other SAIL measures, it can amplify the importance of a small difference between medical centers. As a result, a 1-star medical center may appear to be performing much more poorly on this measure than it is in practice.

Officials from two medical centers told us that the length-of-stay measure may not be an accurate reflection of quality of care, as there are valid clinical reasons why some veterans need a longer length of stay that may not be reflected in the diagnostic and procedure codes for that stay.
Therefore, the difference in performance on the length of stay measure between two medical centers may be the result of how data were entered into the medical record and coded, rather than actual differences in quality of care.

In addition, VHA officials also expressed concerns about SAIL and how it is currently being used to assess medical center performance. For example, VHA officials who oversee SAIL told us it was designed to be an internal performance improvement tool, but is now also being used as a performance accountability tool. The external review included a recommendation that VHA consider whether the primary purpose of SAIL is improvement or accountability, as SAIL would need to be redesigned to do both. One VHA official told us that SAIL is being used in punitive ways through the Strategic Action for Transformation initiative. For example, at one medical center, officials told us that they received a letter from VHA’s Executive in Charge about the medical center’s low performance only a few months after its star level increased from 1 to 2 stars. Officials said the letter warned them that medical center leadership could be removed if performance does not improve. Medical center officials described this as counterproductive for their improvement efforts, as it was demoralizing while not identifying any specific areas for improvement.

VHA officials confirmed that, other than their routine reviews to determine the need for annual adjustments to SAIL measures and other minor adjustments to the system, they have not assessed or implemented as appropriate the recommendations from the internal and external SAIL evaluations. In addition, although the Under Secretary for Health received a response to the internal review’s recommendations from an individual program office, VHA officials told us no action was taken on the response or to formally assess the recommendations from the internal review. Officials noted that two reasons for the lack of action taken to assess recommendations for implementation were leadership turnover and attention diverted to other issues, such as concerns about extended wait times for medical appointments at VHA medical facilities. In addition, officials stated that the evaluations were not widely distributed within VHA. As a result, officials we spoke with from several VHA offices were unaware that SAIL had ever been evaluated.

To address the federal internal control standard for timely remediation of identified deficiencies, federal agencies assign responsibility and authority for carrying out and documenting corrective actions. VHA officials told us they did not formally assign responsibility to an office to assess recommendations from previous evaluations of SAIL. As a result, when the officials who received both evaluations left VHA, there were no other individuals or offices responsible for ensuring that recommendations were acted on. VHA officials who oversee SAIL told us that they are planning to use the 2015 external review as part of their plans to make changes to SAIL and its measures. However, there is no documentation available describing the planned changes to SAIL or how those planned changes will incorporate the results of the external review. If changes made to SAIL run counter to the evidence, it could potentially diminish the integrity of the system to effectively evaluate performance.

Conclusions

VHA primarily uses the SAIL system to assess and compare the performance of medical centers.
Veterans can also view SAIL data to compare medical center performance when making health care decisions. Officials from the networks and medical centers in our review expressed concerns about how SAIL is being used and whether star ratings are an accurate reflection of medical center quality. SAIL has been evaluated twice, and both evaluations have found similar concerns with SAIL. However, VHA has yet to use the results of those evaluations to address identified concerns and make evidence-based improvements to the SAIL system. Specifically, VHA has not taken action to ensure that officials assess the recommendations from SAIL evaluations, document their decisions, and implement recommendations as appropriate. If changes to SAIL are implemented without this assessment of existing evaluations, VHA may make changes that run counter to the evidence, potentially diminishing the integrity of the system to effectively evaluate performance.

Recommendations

We are making the following two recommendations to VA:

The Under Secretary for Health should assess recommendations from two previous evaluations of SAIL. This assessment should include the documentation of decisions about which recommendations to implement and assignment of officials or offices as responsible for implementing them. (Recommendation 1)

The Under Secretary for Health should implement, as appropriate, recommendations resulting from the assessment of the two previous SAIL evaluations. (Recommendation 2)

Agency Comments

We provided VA with a draft of this report for review and comment. VA provided written comments, which are reprinted in appendix II. In its written comments, VA concurred with both of the report’s recommendations, and identified actions it is taking to implement them.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Under Secretary for Health, and other interested parties. In addition, the report is available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: VHA Strategic Analytics for Improvement and Learning (SAIL) Performance Measures

The SAIL measures include the following (the desired direction for the mortality and complication measures is lower):

Risk adjusted complication index
In-hospital risk adjusted mortality (SMR)
30-day risk adjusted mortality (SMR30)
Severity adjusted average length of stay (ALOS)
% Acute admission reviews met InterQual criteria
% Acute continued stay reviews met InterQual criteria
Inpatient core measures mean percentage (ORYX)
HEDIS outpatient core measure mean percentage (chart abstract)
HEDIS outpatient core measure mean percentage (population based)
HCAHPS score (patient rating of hospital)
Rating of primary care provider
Rating of specialty care provider
Care transition (inpatient)
Stress discussed (PCMH)
Best Places to Work score
Hospital-wide all conditions 30-day readmission rate
Timely appointment, care and information (PCMH)
Timely appointment, care and information (SC)
Same day appointment when needed (PCMH)
Call center speed in picking up calls
Mental health population coverage
Mental health continuity of care
Mental health experience of care
Stochastic frontier analysis (= 1/SFA)

The acronyms VHA used in the table are as follows: SMR=standard mortality ratio; HEDIS=Healthcare Effectiveness Data and Information Set; HCAHPS=Hospital Consumer Assessment of Healthcare Providers and Systems; PCMH=patient-centered medical home; ACSC=ambulatory care sensitive conditions; SC=specialty care.

Appendix II: Comments from the Department of Veterans Affairs

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Debra A. Draper, (202) 512-7114 or draperd@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Janina Austin, Assistant Director; Sarah Harvey and Malissa G. Winograd, Analysts-in-Charge; Jennie F. Apter; Frederick Caison; and Alexander Cattran made key contributions to this report. Also contributing were Vikki Porter and Jennifer Whitworth.

Related GAO Products

VA Health Care: Actions Needed to Improve Oversight of Community-Based Outpatient Clinics. GAO-18-375. Washington, D.C.: Apr. 12, 2018.

Veterans Health Care: Additional Actions Could Further Improve Policy Management. GAO-17-748. Washington, D.C.: Sept. 22, 2017.

Veterans Affairs: Improved Management Processes Are Necessary for IT Systems That Better Support Health Care. GAO-17-384. Washington, D.C.: June 21, 2017.

VA Health Care: Improvements Needed in Data and Monitoring of Clinical Productivity and Efficiency. GAO-17-480. Washington, D.C.: May 24, 2017.

Veterans Health Administration: Management Attention Is Needed to Address Systemic, Long-standing Human Capital Challenges. GAO-17-30. Washington, D.C.: Dec. 23, 2016.

Veterans Health Care: Improvements Needed in Operationalizing Strategic Goals and Objectives. GAO-17-50. Washington, D.C.: Oct. 21, 2016.

VA Health Care: Processes to Evaluate, Implement, and Monitor Organizational Structure Changes Needed. GAO-16-803. Washington, D.C.: Sept. 27, 2016.
Why GAO Did This Study

VHA anticipates that it will provide care to more than 7 million veterans in fiscal year 2019. The majority of veterans using VHA health care services receive care in one or more of the 172 medical centers or at associated outpatient facilities. VHA collects an extensive amount of data that can be used to assess and manage the performance of medical centers. Many measures are publicly reported on VA web pages, allowing veterans the ability to compare medical centers' quality of care.

GAO was asked to assess VHA's management of medical center performance. This report examines (1) the tools VHA uses to assess medical center performance; (2) VHA's use of medical center performance information to assess medical center directors; and (3) the extent to which VHA has evaluated the effectiveness of the SAIL system. GAO reviewed VHA policies, guidance, and performance information for medical centers and their associated directors. GAO also interviewed officials from VHA as well as from four VA medical centers, selected for variation in performance and geographic location.

What GAO Found

Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) officials told GAO they primarily use the Strategic Analytics for Improvement and Learning (SAIL) system to assess VA medical center performance. SAIL includes 27 quality measures in areas such as acute care mortality and access to care. VHA officials use SAIL to calculate and assign each medical center an annual star rating of 1 (lowest) to 5 (highest) stars as an assessment of overall quality. For the 146 medical centers that received star ratings in fiscal year 2018, the distribution of star ratings was as follows: 6 percent, 1 star; 24 percent, 2 stars; 38 percent, 3 stars; 19 percent, 4 stars; and 12 percent, 5 stars. Although the specific medical centers within each star-rating category could change from year to year, GAO found that the fiscal year 2018 star ratings for 110 of the 127 medical centers (87 percent) that received star ratings in fiscal year 2013 did not differ by more than 1 star from their fiscal year 2013 rating.

Changes in VHA Strategic Analytics for Improvement and Learning Star Ratings, Fiscal Year 2013 Compared to Fiscal Year 2018

GAO found that VHA's appraisal process for assessing medical center director performance relies heavily on medical center performance information, including SAIL. For example, the most heavily weighted appraisal element (40 percent of the overall rating) is made up entirely of medical center performance information. SAIL was evaluated in 2014 and 2015, but VHA has not assessed the recommendations from those evaluations, or taken action on them. The evaluations, which found issues related to the validity and reliability of SAIL and its star ratings for measuring performance and fostering accountability, together included more than 40 recommendations for improving SAIL. The findings are similar to concerns expressed by officials GAO interviewed from VHA, networks, and medical centers about SAIL's effectiveness and how it is currently being used to assess medical center performance. VHA officials told GAO the findings and recommendations of the previous SAIL evaluations were not assessed because the evaluation reports were not widely distributed within VHA due to leadership turnover, as well as attention that was diverted to other concerns such as extensive wait times for medical appointments.
Without ensuring that the recommendations resulting from these previous evaluations are assessed and implemented as appropriate, the identified deficiencies may not be adequately resolved, and VHA's ability to hold officials accountable for taking the necessary actions may be diminished.

What GAO Recommends

GAO recommends that the Under Secretary for Health: (1) assess recommendations from previous evaluations of SAIL for implementation; and (2) implement, as appropriate, recommendations resulting from the assessment. VA concurred with GAO's recommendations and identified actions it is taking to implement them.
Background

Responsibilities of the Coast Guard’s Deployable Specialized Forces

The Coast Guard has 11 statutory missions, which are divided into homeland security and non-homeland security missions (see appendix II). The Coast Guard’s units that conduct operations to achieve its statutory missions are organized into shore-based forces such as boat stations, maritime patrol forces such as cutters and icebreakers, and Specialized Forces—the latter of which can serve as a force multiplier for the other units, such as by deploying for added capacity during homeland security missions, including port security, drug interdiction, and defense readiness. Table 1 details Specialized Forces teams, types of operations they conduct, and an example of an operation.

Specialized Forces units deploy from their home locations, such as major U.S. port areas, to conduct operations in U.S. coastal waters and internationally. For example, some units such as MSRTs, MSSTs and PSUs deploy with specialized boats on trailers that can be towed or air lifted to the site of an antiterrorism patrol or defense readiness operation. Other Specialized Forces units do not maintain the vessels, such as cutters, or air assets, such as helicopters, from which they carry out operations. TACLETs, for example, do not maintain any boats and rely on and deploy via U.S. Navy or Allied vessels, as well as Coast Guard cutters, to conduct drug interdiction operations. Figure 1 shows Coast Guard personnel conducting a drug interdiction operation that included a TACLET member boarding a foreign, semi-submersible vessel, which resulted in seizing 17,000 pounds of cocaine.

The Coast Guard is the lead federal maritime law enforcement agency on waters beyond 12 nautical miles offshore of the U.S. coast. The Coast Guard shares responsibility for patrolling the U.S. maritime borders and territorial sea (i.e., maritime approaches 12 nautical miles seaward of the U.S. coast) to interdict drugs and foreign nationals illegally entering the United States with U.S. Customs and Border Protection’s Air and Marine Operations and Border Patrol. Outside of DHS, the Department of Defense (DOD) is the lead federal agency for the detection and monitoring of the aerial and maritime transit of illegal drugs into the United States, and it operates systems, such as radar, that can be used in support of DHS and other federal, state, and local law enforcement activities. Figure 2 depicts the geographic areas in which Specialized Forces operate and resources, such as vessels or aircraft, used to support their operations.

Coast Guard Reorganized its Deployable Specialized Forces in 2007 and 2013

In July 2007, the Coast Guard reorganized the command structure of its Specialized Forces and aligned them as an independent Coast Guard command—the Deployable Operations Group. The Deployable Operations Group was intended to enhance operational effectiveness and interagency coordination in responding to a range of national emergencies and events, such as terrorist threats or natural disasters. Prior to the Deployable Operations Group, Specialized Forces aligned geographically under Atlantic Area and Pacific Area commands. In 2010, we found that the unified command structure achieved its intended benefits of standardized training and centrally managing assets.
We also reported in 2010 that the Deployable Operations Group faced human resource challenges such as selecting qualified candidates and achieving and maintaining qualifications to perform certain high-skill techniques, such as vertical insertion from a helicopter onto the deck of a ship during maritime interdiction missions. Because of the ongoing program changes at that time, we did not make recommendations.

In 2010, a DHS Inspector General report recommended a systematic review and analysis of the MSST program to determine, in part, the optimal staffing levels, training, and competency mix needed. The Coast Guard agreed and cited planned MSST program changes in its response to the Inspector General report. The Coast Guard’s Fiscal Year 2011 budget included a proposal to close five MSSTs and consolidate those forces to achieve cost savings, among other things. A 2011 Coast Guard report recommended that the Coast Guard integrate Specialized Forces units across the Coast Guard, balance the capacity of the Specialized Forces with proficiency and safety levels, and manage risk. In April 2013, the Coast Guard disbanded the Deployable Operations Group, and Specialized Forces units returned to regional commands. Figure 3 shows the evolution of Coast Guard Specialized Forces units since 1970. The April 2013 reorganization of the Coast Guard’s Specialized Forces units under regional commands more closely aligns with its original command structure that existed prior to the 2007 creation of the Deployable Operations Group. Figure 4 details the three command structures—pre-Deployable Operations Group, Deployable Operations Group, and Post-Deployable Operations Group.

Coast Guard Applied Some Key Practices when Reorganizing its Deployable Specialized Forces, but Has Not Fully Analyzed Workforce Needs or Operational Capabilities

The Coast Guard generally applied three of five key practices for agency reform and partially applied two of five when developing its report that recommended the 2013 reorganization of its Specialized Forces units. Table 2 identifies the extent to which the Coast Guard’s reorganization applied key practices and considerations.

Coast Guard Generally Applied Three of Five Key Practices for Reorganizing its Deployable Specialized Forces

The Coast Guard generally applied three of five key practices for Specialized Forces reorganization, including establishing goals and outcomes, involving employees and key stakeholders, and addressing high risk areas and longstanding management challenges.

Establish Goals and Outcomes of Reforms

Establishing goals and outcomes of reforms can help decision makers determine what problems genuinely need to be fixed, how to balance differing objectives, and what steps need to be taken to create long-term gains. The Coast Guard generally applied this key practice in its analysis of Specialized Forces units. For example, the Coast Guard’s 2011 report cites personnel safety as a main reason to reform Specialized Forces operations. According to Coast Guard officials, part of the rationale for this focus was a training mishap and a problem with equipment requirements. Specifically, officials stated that in 2010, a Coast Guard member drowned while training when he entered the water without self-inflating flotation equipment. Coast Guard officials told us that at the time of the incident, members of Specialized Forces units would carry in excess of 100 pounds of specialized gear and equipment.
Officials also noted that at that time there were concerns that members’ self-inflating flotation devices could inflate onboard aircraft, which in the event of a crash in the water could result in personnel being unable to exit the aircraft. The Coast Guard subsequently established a goal for its reorganization to mitigate this safety risk by decreasing gear weight and personal flotation devices. Further, the report recommended reducing or eliminating inconsistencies between the Specialized Forces units and the rest of the Coast Guard. For example, a Coast Guard official told us that integration between the Deployable Operations Group and the rest of the Coast Guard was inconsistent, training programs were not standardized, and training took place at 15 different locations. This resulted in difficulties sharing assets, such as aircraft and boats, for use during training sessions. As a result of the report findings and its recommendation, the 2013 reorganization realigned Specialized Forces units under regional operational commands to integrate their logistics with the rest of the Coast Guard (fig. 2).

Involve Employees and Key Stakeholders in Developing Reforms

Involving employees and key stakeholders in the process of developing reforms is part of an integrated approach that helps facilitate the development of reform goals and objectives, as well as incorporate insights from a frontline perspective and increase customer acceptance of any changes. The Coast Guard generally applied this key practice because it involved senior officials representing the agency to develop the goals of the reorganization, how to address them, and to make reform recommendations to improve the efficiency and effectiveness of Specialized Forces operations. The Coast Guard’s 2011 report included and incorporated input from a broad range of subject matter experts including high level officers representing a comprehensive mix of Coast Guard units, with a diverse mix of experience, and it reflected different programs throughout the Coast Guard to ensure a comprehensive review. During site visits, Coast Guard officials told us the reorganization from the Deployable Operations Group to Specialized Forces had a positive effect by helping to ensure tactics, training, and techniques became standardized and ensuring better cooperation within the Coast Guard as well as with other agencies. For example, Coast Guard officials told us that because their area of responsibility is large and busy, they use MSSTs to augment their local capabilities and to apply the MSST’s specialized capabilities that the local unit does not have. Coast Guard officials also emphasized an increase in safety, particularly with a decrease in the risk of drowning while in tactical gear.

Address High Risk Areas and Longstanding Management Challenges

Addressing long standing weaknesses in how some federal programs and agencies operate is a key practice, which can improve the effectiveness and responsiveness of the federal government. The Coast Guard generally applied this key practice because it considered high risk areas when planning the Specialized Forces reorganization. Specifically, the Coast Guard addressed retention and training, which it identified in its 2011 report to be high risk areas and longstanding management challenges. For example, the Coast Guard’s 2011 report identified the need for additional subject matter expertise and made recommendations to implement training standardization across the Specialized Forces.
Our work has also identified retention and training as challenges. We found in 2010 that the Coast Guard was unable to retain qualified Specialized Forces personnel, in part because of additional training requirements. For example, while personnel working on a cutter may need a boat driver certification, an MSST or MSRT member would need an additional tactical boat driver course. The Coast Guard subsequently developed detailed guidance for Specialized Forces units that includes standardized training, requirements, and qualifications to be followed regardless of the unit location and to be applied consistently across organizational commands. During site visits to units in the Pacific and Atlantic Areas, we observed that equipment was standardized across Specialized Forces, and officials we spoke with described the benefits of the standardized training and equipment. Figure 5 shows Coast Guard MSST personnel conducting standardized training, which officials said has the added benefit of providing potential deterrence of illegal activities, such as drug smuggling, in the geographic area of the training.

Coast Guard officials also told us that, prior to the 2007 reorganization to the Deployable Operations Group, Coast Guard personnel working in Specialized Forces units could not remain in those units and be competitive for promotions. Coast Guard officials told us that this was because the Coast Guard has certain requirements for career progression, including personnel working in various assignments within a given career path. In 2010, we reported that the Coast Guard had developed a career path for maritime law enforcement personnel—who are part of operations that generally address the Coast Guard’s homeland security missions. Coast Guard officials told us that this change was a response to challenges the agency faced retaining law enforcement personnel. Officials said this change created a maritime law enforcement career path within the Specialized Forces community. Coast Guard officials we spoke with also told us that the career path has helped them retain qualified Specialized Forces personnel.

Coast Guard Has Not Fully Used Data to Assess Workforce Needs or Evaluated Potential Overlap or Gaps in the Capabilities of its Deployable Specialized Forces

Use Data and Evidence to Assess Workforce Needs

The Coast Guard partially applied two of five key practices for agency reorganization, including using data and evidence, and considering to some extent the possibility of fragmentation, duplication, and overlap. However, it has not used data and evidence to fully assess Specialized Forces workforce needs and has not comprehensively evaluated the potential for overlaps or gaps in the capabilities among them. We have reported that agencies are better equipped to address management and performance challenges when managers use reliable data and evidence, such as evidence from program evaluations and performance data that provide information on how well a program or agency is achieving its goals. We have previously reported that when reforming a given program, the use of data and evidence is critical for setting program priorities and allocating resources. The Coast Guard used some data and evidence related to a specific management challenge—training mishaps—but did not use data and evidence to fully assess Specialized Forces workforce needs.
As previously mentioned, the Coast Guard analyzed equipment weight data and scenarios and made recommendations based on the results of these analyses to reduce the risk of drowning. The 2011 report affirmed the locations of the Specialized Forces units to ensure that unit capabilities were geographically distributed, but it recommended additional analyses of some unit locations, such as TACLETs. The Coast Guard found that the geographic distribution of the Specialized Forces, at the time of the analysis, provided coverage for their tactical law enforcement and waterside operations and did not recommend changes to the geographic locations of these units.

The Coast Guard partially applied this key practice because, when it reorganized its Specialized Forces command structure in 2013, it did not assess Specialized Forces workforce needs with regard to the number of personnel required to conduct its operations. The Coast Guard’s 2011 report identified some capability and capacity shortfalls, including inadequate capacity to conduct certain security operations, and recommended an analysis of staffing levels for all Specialized Forces units. Similarly, a 2012 Homeland Security Studies and Analysis Institute peer review of the Coast Guard’s 2011 report on its Specialized Forces noted the need for a more comprehensive analysis of all of the units to ensure the effective use of their specialized capabilities. In the eight years since the Coast Guard study recommended workforce needs analyses, the Coast Guard has not assessed the overall Specialized Forces workforce needs or established such an analysis as a priority. The Coast Guard conducted a unit-level analysis of its PSUs in January 2014, but it did not use the results because the analysis focused on non-deployed personnel. Officials stated the analysis identified gaps in personnel and recommended that the Coast Guard expand the size of the units to be able to fulfill mission requirements. However, Coast Guard officials said they did not act on the recommendations of the study to request different resource levels. Officials told us that leadership changes among Specialized Forces can result in units, such as PSUs, getting study results based on scope decisions with which the new leader disagrees.

We found that the Coast Guard might not have the right mix and number of personnel relative to the mix and number of operations Specialized Forces conduct to meet mission demands. Our analysis of Specialized Forces data for fiscal years 2016 through 2018 and planned for 2019 found variation in the number of operations requested of some units during this period, even though the number of personnel remained relatively constant. For example, our analysis of Coast Guard data found that PSU requests—and the number of operations carried out—changed from three operations in 2016 to six in 2018, with two operations planned in 2019, while the number of assigned personnel remained constant at approximately 1,000. In another example, our analysis of Coast Guard data found that the number of operations requested for MSSTs varied from 85 in 2016 to 67 in 2018, and 39 planned for 2019. Our analysis of Coast Guard data found that the number of MSST operations carried out was 152 in 2016, 141 in 2018, and 379 planned operations in 2019, while the number of personnel assigned to MSSTs decreased from 562 in 2016 to 547 planned for 2019. Such variations may affect the extent to which Specialized Forces units are used efficiently.
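One simple way to see the imbalance these figures suggest is to normalize operations by assigned personnel. The short Python sketch below uses the PSU and MSST figures cited above; the operations-per-100-personnel metric is an illustrative construct for comparison, not a Coast Guard measure, and the fiscal year 2019 values are planned rather than actual.

# Operations and personnel figures cited in this report; fiscal year 2019
# values are planned rather than actual, and PSU personnel are approximate.
data = {
    "PSU":  {2016: (3, 1000), 2018: (6, 1000), 2019: (2, 1000)},
    "MSST": {2016: (152, 562), 2019: (379, 547)},
}

for unit, years in data.items():
    for year, (operations, personnel) in sorted(years.items()):
        tempo = 100 * operations / personnel  # ops per 100 assigned personnel
        print(f"{unit} FY{year}: {operations} operations, {personnel} personnel "
              f"-> {tempo:.1f} operations per 100 personnel")

Even with these rough figures, the operational tempo differs by two orders of magnitude between units while staffing stays nearly flat, which is the pattern a comprehensive workforce analysis could examine.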
Officials from some units we interviewed indicated that they experienced periods of underutilization, while other similar units turned down operations for lack of available personnel. For example, an official at one unit described efforts to increase the number of operations carried out by the unit, with officials describing outreach efforts to other Coast Guard units to encourage those units to call on them for specialized assistance. Officials at another unit conducted similar outreach, including passing out flyers describing Specialized Forces capabilities and contact information should the other Coast Guard units need assistance. In contrast, officials from a different Specialized Forces unit described instances where they had to decline operations because they did not have enough personnel to meet the demand. Further, an official from one Area Command responsible for assigning some of the Specialized Forces operations stated approximately 5 percent of requests for Specialized Forces assistance went unfulfilled.

Without an analysis of the Specialized Forces units as a whole, the Coast Guard does not have the assurance that it has the requisite number of personnel in the right units to conduct the required missions. Such an analysis would better position the Coast Guard to identify capability gaps between mission requirements and mission performance caused by deficiencies in the numbers of personnel available, as required by the Coast Guard Authorization Act of 2015. Coast Guard officials from Specialized Forces units we interviewed in 2019 acknowledged that an analysis of each unit would be useful, and in August 2019, officials from headquarters affirmed this and stated the Coast Guard aims to conduct analyses of the individual units. We found that these analyses consider each unit individually and do not comprehensively consider similar units, such as Specialized Forces. Therefore, without analyzing the Specialized Forces program as a whole, the Coast Guard may miss opportunities to optimize the allocation of personnel among Specialized Forces units, as well as the number of units. Using data and evidence to comprehensively assess workforce needs across Specialized Forces units would better position the Coast Guard to prioritize its Specialized Forces efforts to more effectively achieve desired outcomes.

Address Fragmentation, Overlap, and Duplication, If Any Exists and is Unnecessary

As we have reported since 2011, agencies may be able to achieve greater efficiency or effectiveness by reducing or better managing programmatic fragmentation, overlap, and duplication. We have also reported that these issues should be considered during agency reform efforts. We found that the Coast Guard partially considered how to reduce potential duplication and overlap when reorganizing the Specialized Forces units. The 2011 Coast Guard report identified some duplication of one specialized unit and challenges associated with uncoordinated training and fragmented guidance. The Coast Guard recommended eliminating one Specialized Forces unit with that specialized capability and changing training requirements to reduce the duplication of roles within one specific Specialized Forces unit. Further, the report recommended training standardization and associated guidance, which the Coast Guard subsequently addressed by updating its guidance and standardizing training requirements. In addition, the Coast Guard report recommended changes to the capabilities maintained by some units, such as MSSTs.
Specifically, the report recommended that the Coast Guard focus MSSTs on waterside security capabilities and eliminate law enforcement roles, among others, to reduce duplicative training costs. Further, according to officials, in response to the Coast Guard Authorization Act of 2010, the Coast Guard eliminated the MSST in San Diego, California, and replaced it with MSRT West, a second MSRT. The Coast Guard also placed all regional dive lockers under MSRT West. According to Coast Guard officials, structuring regional dive lockers under a single command in a single geographic location is safer and more efficient, because dive operations require a high level of subject matter expertise in the command as well as personnel actually participating in the dives.

However, the Coast Guard partially applied this practice because it has not conducted the analyses necessary to fully identify potential overlap and the extent to which it could be unnecessarily duplicative. The Coast Guard categorizes Specialized Forces missions, such as drug interdiction or defense readiness, as primary, secondary, or collateral, and assigns different levels of capabilities according to these categories. Specifically, multiple Specialized Forces are used to support the same Coast Guard missions, which often require similar capabilities from the units, such as the ability to perform enhanced law enforcement boardings. Figure 6 provides a visual representation of the Specialized Forces missions, the capabilities to carry out operations in support of those missions, and the units that address the mission areas.

As shown in figure 6, MSSTs’ and PSUs’ primary and secondary missions overlap, as do the capabilities necessary to conduct three of the same missions—Ports, Waterways, and Coastal Security; Defense Readiness; and Search and Rescue. MSSTs and PSUs have operational differences, but there may be benefits to assessing when to use PSUs in place of MSSTs or vice versa, such as when one Specialized Force can be deployed more rapidly, or because Specialized Forces are located in close proximity. For example, MSSTs maintain the ability to deploy almost immediately to carry out an operation, while PSUs generally require a deployment preparation cycle of at least 24 months and up to 48 months before they can deploy. Moreover, the variance in Specialized Forces utilization and the overlapping capabilities units maintain underscores a challenge and an opportunity, particularly given the close proximity of Specialized Forces units. For example, given that there are certain instances where Specialized Forces units appear to be substitutable, assessing the extent to which co-located units could be better leveraged could help the Coast Guard more efficiently manage its resources. Figure 7 shows the locations of Coast Guard Specialized Forces units and the close proximity of some units, such as co-located MSSTs and PSUs, which have overlapping primary and secondary defense readiness and ports, waterways, and coastal security missions (fig. 6) and related capabilities.

In March 2019, as previously noted, Coast Guard leadership again called for a review of PSUs, citing overlap, personnel shortages, and excessive distance to training areas (such as waterways and weapon ranges). The challenge this new PSU study seeks to address underscores the importance of a contemporary and comprehensive assessment of these units’ workforce needs.
It also presents the Coast Guard with an opportunity to consider whether it could more effectively use its co-located Specialized Forces. For example, instead of deploying a PSU within commuting distance of an operation occurring in San Francisco, CA, that required surge capacity, the Coast Guard deployed an MSST from Seattle, WA, for 7 days, even though both Specialized Forces are to maintain the same capabilities needed for the operation. Coast Guard officials stated that they decided to send the MSST to meet a surge capacity instead of the local PSU because Ports, Waterways, and Coastal Security is a secondary mission for PSUs and PSUs do not bring the law enforcement capability of boarding officers, among other things.

According to Coast Guard officials, each PSU costs around $1 million a year to operate when not deployed to Guantanamo Bay, and two of eight PSUs are deployed annually. Assessing Specialized Forces workforce needs to determine the optimal mix of units and analyzing trade-offs, such as eliminating underutilized units, could identify opportunities for the Coast Guard to save millions of dollars over time. Elimination of even one PSU could save around a million dollars annually. Because the exact amount of savings would depend on the outcomes of those analyses, and the cost data needed for such estimates is not available, we cannot precisely estimate the value of potential savings. However, given that the Coast Guard has begun an assessment of PSUs, it is reasonable to expect that a comprehensive analysis of Specialized Forces could find unnecessary duplication and could recommend PSU closures. Coast Guard officials did not state that they are considering this review as part of a comprehensive review of Specialized Forces that would include assessing the overlapping capabilities of other Specialized Forces units.

In August 2019, Coast Guard officials told us that overlap or gaps in Specialized Forces’ capabilities could exist. Coast Guard officials also stated that some overlapping capability could be beneficial. While overlap may be beneficial, overlapping capabilities, if unnecessary, could indicate inefficiencies, such as excess capacity in some areas, including geographic areas, to the detriment of others where there may be capability gaps. The Coast Guard is not currently positioned to take action to reduce the risk of some potentially unnecessary overlap or duplication among the Specialized Forces units because it has yet to comprehensively assess the Specialized Forces program. Specifically, as reported above, Coast Guard officials stated that the Coast Guard has conducted some staffing analyses of standalone Specialized Forces units, but has not evaluated the Specialized Forces’ workforce or operations as a whole. Until the Coast Guard comprehensively assesses Specialized Forces’ needs, the Coast Guard will lack a complete picture of the extent to which overlapping capabilities are necessary or appropriate, or where there are capability gaps or areas where certain Specialized Forces units could be better leveraged to meet mission requirements. Assessing the extent to which unnecessary overlap or duplication exists among Specialized Forces’ capabilities would better position the Coast Guard to identify capability gaps and reallocate resources, as needed, to use them more efficiently.
Conclusions

The Coast Guard’s Specialized Forces units include a range of specialized capabilities that are vital to the agency’s ability to fulfill its mission, and they constitute a significant force multiplier to maintain readiness throughout major U.S. ports and cities. The Coast Guard faces the difficult decision of determining how best to invest its limited resources. Without having assessed its operational needs and mix of personnel for Specialized Forces units, the Coast Guard does not have the information it needs to ensure that it is investing its resources efficiently. GAO’s key practices and considerations provide a framework for agency reorganization and a decision-making approach that can help ensure that resources are allocated efficiently and do not result in unnecessary overlap or duplication. The Coast Guard did not fully apply these practices when reorganizing the Specialized Forces. By comprehensively assessing Specialized Forces’ workforce needs and determining the extent to which overlapping capabilities are necessary, or whether capability gaps may exist, the Coast Guard may be able to more efficiently allocate resources for its Specialized Forces.

Recommendations

The Coast Guard should conduct a comprehensive analysis of its Deployable Specialized Forces’ workforce needs. (Recommendation 1)

The Coast Guard should assess the extent to which unnecessary overlap or duplication exists among Deployable Specialized Forces’ capabilities. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this report to DHS for comment. DHS provided technical comments, which we incorporated as appropriate. On November 5, 2019, DHS also provided comments, reproduced in full in appendix III. DHS concurred with one of our two recommendations, and described actions planned to address it, but did not concur with the other.

DHS concurred with our first recommendation that the Coast Guard should conduct a comprehensive analysis of its Specialized Forces’ workforce needs. DHS stated in its comments that the Coast Guard will conduct individual unit analyses, prioritizing for units that were not previously examined. Initial requests, according to the comments, will be submitted to staff responsible for the analyses by January 31, 2020, and estimated completion dates for the analyses are expected to be determined after assessing the availability of funding to support the analyses. These actions, if fully implemented, should address the intent of the recommendation.

DHS did not concur with our second recommendation that the Coast Guard assess the extent to which unnecessary overlap or duplication exists among Specialized Forces’ capabilities. In its comments, DHS stated that when the priority of the missions, capabilities, and subsequent geographic operating areas are appropriately considered for each DSF (Deployable Specialized Forces) unit type, unnecessary overlap or duplication does not exist among DSF capabilities. DHS further stated that our conclusions illustrate a fundamental misunderstanding of the corresponding missions of DSF units. We note in our report that the way in which the Coast Guard deploys certain Specialized Forces units may not result in overlap, but overlapping capabilities amongst units could indicate inefficiencies in how they are used, such as excess capacity in some areas, including geographic areas, and missed opportunities for use in others.
As noted in our report, the Coast Guard has not conducted the analyses necessary to fully identify potential overlap among units' capabilities and the extent to which opportunities may exist to use the units more efficiently. The Coast Guard categorizes Specialized Forces missions, such as drug interdiction or defense readiness, as primary, secondary, or collateral, and assigns different levels of capabilities according to these categories. We found that multiple Specialized Forces are used to support the same Coast Guard missions, which often require similar capabilities from the units, such as the ability to perform enhanced law enforcement boardings. Further, as stated in our report, in August 2019, Coast Guard officials told us that overlap or gaps in Specialized Forces' capabilities could exist and that some overlapping capability could be beneficial. While overlap may be beneficial, overlapping capabilities, if unnecessary, could indicate inefficiencies, such as excess capacity in some areas, including geographic areas.

Also in its comments, DHS stated that we have not identified any substantive examples of unnecessary overlap or duplication nor provided any other compelling reasons for how implementing this recommendation could enhance Coast Guard mission effectiveness. DHS cited our use of MSST and PSU potential overlap as an example of misunderstanding DSF unit missions and active versus reserve personnel. However, MSST and PSU potential overlap is a prime example of why potential unnecessary overlap should be examined by the Coast Guard. Specifically, as noted in our report, MSST and PSU primary and secondary missions overlap, as do the capabilities necessary to conduct three of the same missions—Ports, Waterways, and Coastal Security; Defense Readiness; and Search and Rescue. MSSTs and PSUs have operational differences due to active versus reserve personnel status, but there may be benefits to assessing when to use PSUs in place of MSSTs or vice versa, such as when one Specialized Force can be deployed more rapidly, or when Specialized Forces are located in close proximity.

Beyond the MSST and PSU potential overlap, active duty units such as MSSTs and MSRTs provide an additional example. As shown in Figure 6 of our report, MSRTs and MSSTs share the primary mission of Ports, Waterways, and Coastal Security, as well as common secondary missions, including Drug and Migrant Interdiction. Additionally, MSRT-San Diego and MSST-Long Beach are within close proximity to one another, offering an opportunity to examine potential overlap.

Also, as noted in our report, officials from some units we interviewed indicated that they experienced periods of underutilization, while other similar units turned down operations for lack of available personnel. For example, an official at one unit described efforts to increase the number of operations carried out by the unit, with officials describing outreach efforts to other Coast Guard units to encourage those units to call on them for specialized assistance. Officials at another unit conducted similar outreach, including passing out flyers describing Specialized Forces capabilities and contact information should the other Coast Guard units need assistance. In contrast, officials from a different Specialized Forces unit described instances where they had to decline operations because they did not have enough personnel to meet the demand.
Given that there are certain instances where Specialized Forces units appear to be substitutable, assessing the extent to which units could be better leveraged could help the Coast Guard more efficiently manage its resources. In addition, in March 2019, Coast Guard leadership called for a review of PSUs, citing overlap, personnel shortages, and excessive distance to training areas (such as waterways and weapon ranges). As noted in our report, the challenge this new PSU study seeks to address underscores the importance of a contemporary and comprehensive assessment of these units' workforce needs and presents the Coast Guard with an opportunity to consider whether it could more effectively use its co-located Specialized Forces. According to Coast Guard officials, each PSU costs around $1 million a year to operate when not deployed to Guantanamo Bay, and two of eight PSUs deploy annually. Assessing Specialized Forces workforce needs to determine the optimal mix of units and analyzing trade-offs, such as eliminating or moving underutilized units, could identify opportunities for the Coast Guard to save millions of dollars over time. As noted in our report, because the exact amount of savings would depend on the outcomes of those analyses and on cost data that is not currently available for making estimates, we cannot precisely estimate the value of potential savings. However, given that the Coast Guard has begun an assessment of PSUs, it is reasonable to expect that a comprehensive analysis of Specialized Forces could provide either a defensible basis for the existing structure or find unnecessary duplication and could recommend changes to the number and location of Specialized Forces.

In summary, as we state in our report, a comprehensive assessment of Specialized Forces' needs would, among other things, help the Coast Guard have a more complete picture of the extent to which certain Specialized Forces units could be better leveraged to meet mission requirements. Assessing the extent to which unnecessary overlap or duplication exists among Specialized Forces' capabilities would better position the Coast Guard to identify capability gaps and reallocate resources, as needed, to use them more efficiently.

We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or AndersonN@gao.gov. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Coast Guard Deployable Specialized Force Cost and Operations Data from 2016 to 2019 and Unit Locations

This appendix provides Coast Guard data on Deployable Specialized Force (Specialized Forces) personnel, operations, costs, and resource hours, showing a mix of operational tempos, including variation in the number of operations requested of some units, such as Tactical Law Enforcement Teams (TACLETs) and Port Security Units (PSUs), but relatively constant numbers of personnel assigned to them. Table 3 provides operational and cost details for Coast Guard Specialized Forces units for fiscal years 2016 through 2018 and planned for 2019.

Appendix II: The Coast Guard Missions

This appendix details the Coast Guard's 11 missions, which are characterized as non-homeland security and homeland security missions (see Table 4).
Appendix III: Comments from the Department of Homeland Security

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Nathan J. Anderson, (202) 512-3841 or AndersonN@gao.gov.

Staff Acknowledgments

In addition to the contact above, Ben Atwater (Assistant Director), Andrew Curry (Analyst-in-Charge), Chuck Bausell, David Dornisch, Michele Fejfar, Tracey King, and Calaera Powroznik made key contributions to this report. Also contributing were: Jason Berman, Dominick Dale, Eric Hauswirth, and Jan Montgomery.
Why GAO Did This Study

The U.S. Coast Guard, within the Department of Homeland Security (DHS), is the principal federal agency charged with ensuring the security and safety of the waters under U.S. jurisdiction. To help carry out its missions, the Coast Guard maintains Specialized Forces units with the capabilities needed to handle drug interdiction, terrorism, and other threats to the U.S. maritime environment. The Coast Guard reorganized the command structure of these units in 2007 and again in 2013. The Maritime Security Improvement Act of 2018 included a provision for GAO to evaluate Specialized Forces units and provide a report to Congress.

This report examines the extent to which the Coast Guard addressed key practices and considerations for assessing reorganization of its Specialized Forces units. GAO assessed the Coast Guard report and associated workforce planning documentation and data used for its 2013 reorganization and analyzed the extent to which the agency applied key practices. GAO also analyzed guidance and data on Specialized Forces capabilities and operations to identify potential overlap or gaps and interviewed agency officials.

What GAO Found

In reorganizing its Deployable Specialized Forces (Specialized Forces) in 2013, the Coast Guard generally applied three of five key practices for agency reorganization, including establishing goals and outcomes, engaging stakeholders, and addressing longstanding management challenges, such as training shortfalls. However, the Coast Guard did not fully apply the other two key practices—using data and evidence and addressing potential overlap and duplication within the Specialized Forces workforce. For example:

The Coast Guard has not assessed the overall Specialized Forces workforce needs, as this practice recommends. Officials from some units stated that they experienced periods of underutilization, while other units with the same or similar capabilities turned down operations for lack of available personnel.

GAO identified some overlap among the capabilities of the different Specialized Forces units and the Coast Guard missions they support—in some cases Specialized Forces units were co-located with other Specialized Forces units with many of the same capabilities and similar missions.

In August 2019, Coast Guard officials acknowledged that the 2013 reorganization did not include an analysis of potential overlap or duplication of capabilities and agreed that overlap or gaps in Specialized Forces capabilities could exist. Assessing workforce needs and the extent to which unnecessary overlap or duplication may exist among Specialized Forces would help ensure that the agency effectively allocates resources and uses them efficiently.

What GAO Recommends

GAO makes two recommendations to DHS. First, GAO recommends that the Coast Guard conduct an analysis of its Specialized Forces' workforce needs, with which DHS concurred. Second, GAO recommends that the Coast Guard assess the extent to which unnecessary overlap or duplication exists. Although DHS did not concur, GAO continues to believe the findings documented in the report support the recommendation.
Background

Social Security has been the foundation of retirement security in the United States. Enacted in 1935, Social Security provides for the general welfare of older Americans by, among other things, establishing a system of federal old-age benefits, including a retirement program. Officially titled Old-Age and Survivors Insurance (OASI), the Social Security retirement program provides benefits to retired workers, their families, and survivors of deceased workers. About 51 million retirees and their families received $798.7 billion in Social Security retirement benefits in 2017, according to the Social Security Administration (SSA), which is responsible for administering the program.

About 40 years after the creation of Social Security, landmark legislation was enacted in 1974 that has played a major role in establishing the structure for private sector employers' involvement in sponsoring retirement plans for their workers: the Employee Retirement Income Security Act of 1974 (ERISA). ERISA is a complex law administered by multiple federal agencies, including the Department of Labor (DOL) and the Internal Revenue Service (IRS) within the Department of the Treasury (Treasury), along with the Pension Benefit Guaranty Corporation (PBGC), and it has evolved through many significant amendments over the years (see app. II). ERISA was enacted, in part, to address public concerns about the security of pension benefits, including the prominent failure of a few large private sector pension plans. The act, as amended, does not require any employer to establish a retirement plan, but those who do must meet certain requirements and minimum standards. For example, ERISA establishes certain requirements for all employer-sponsored plans, including responsibilities for plan fiduciaries (those who manage and control plan assets, among others), as well as minimum funding standards for defined benefit (DB) plans, which traditionally promise to provide a monthly payment to retirees for life. ERISA also established the PBGC, the government corporation responsible for insuring the pension benefits of nearly 37 million American workers and retirees who participate in nearly 24,800 private sector defined benefit plans. Under ERISA, tax-qualified DB plans (or the employers who sponsor them) may have to pay insurance premiums to the PBGC, based on the funding level of their plans. The IRS also administers the Internal Revenue Code (IRC), which has provisions that affect pensions and retirement savings.

While SSA administers the Social Security program, and the DOL, PBGC, and IRS each are generally responsible for administering aspects of ERISA, several other agencies also have important roles in various parts of the retirement system. For example, the Department of Health and Human Services oversees the Centers for Medicare and Medicaid Services (CMS), which administers the major health care programs that provide coverage for retirees, as well as the Administration on Aging, which encourages and assists state grantees that provide services for older adults. In addition, agencies such as the U.S. Department of Agriculture and the Department of Housing and Urban Development oversee food and housing programs for older adults. Other agencies also play a role in providing various services and supports for older adults. For example, the Department of Transportation administers a program that improves access and alternatives to public transportation for seniors and individuals with disabilities.
The Consumer Financial Protection Bureau, as part of its mandate to provide financial literacy education, helps consumers navigate financial choices related to retirement. The Federal Trade Commission can have consumer protection and investor oversight roles and responsibilities related to individuals borrowing against their pensions. In addition, these federal agencies and others work together to help combat elder financial exploitation, which experts have described as an epidemic with society-wide repercussions. Citing our prior work on this topic, in October 2017, Congress enacted the Elder Abuse Prevention and Prosecution Act, calling on the Department of Justice to work with other federal, state, and local law enforcement agencies to improve data collection and provide technical assistance focused on combatting elder abuse.

The need for government services and support for older adults in retirement will continue to grow as the proportion of older adults in the United States continues to rise significantly in the future. In 1970, those age 65 and over accounted for about 10 percent of the population, but by 2060, they are expected to account for about 23 percent (see fig. 1). This reflects long-term decreases in birth rates and increases in life expectancy.

Main Pillars of the U.S. Retirement System Face Fiscal Risks and Other Challenges

The U.S. retirement system is supported by three main pillars—Social Security, employer-sponsored plans, and individuals' savings—that serve as important sources of retirement income for Americans. Currently, each of these pillars faces various risks and other challenges. If left unaddressed, these risks present the federal government with significant potential fiscal exposures, which may legally commit or create expectations for future federal spending.

Pillar One: Social Security and Other Federal Programs

The first pillar, Social Security (specifically, Social Security's retirement program), is facing financial difficulties, as are other federal programs that provide essential supports to many older Americans, such as Medicare and the PBGC's insurance programs (see fig. 2). In addition, multiple federal agencies help fund a broad array of home and community-based services for older adults. As the number of older adults needing assistance continues to grow, the pressure to increase federal funding for these services is likely to increase.

As the foundation of retirement security in the United States, Social Security's retirement program, financed primarily by payroll taxes, helps reduce poverty among beneficiaries, many of whom rely on Social Security for the majority of their income once they retire. Our analysis of data from the Federal Reserve Board's most recent Survey of Consumer Finances (SCF) showed that in 2016, among households age 65 and over, the bottom 20 percent, ranked by income, relied on Social Security retirement benefits for 81 percent of their income, on average. But Social Security is facing financial difficulties that, if not addressed, will affect its long-term stability. During the many years that the revenue for Social Security's retirement program exceeded costs, the program built up reserves in the trust fund. However, since 2010, Social Security has been paying out more in benefits than it received and has relied on interest income to help cover expenses. For 2018, the cost of the program was expected to exceed total income by $2 billion and, as a result, asset reserves were expected to decline.
If no changes are made, current projections indicate that by 2034, the retirement program trust fund will only be sufficient to pay 77 percent of scheduled benefits. The underlying cause of Social Security's financial difficulties is the aging population, driven by lower fertility rates and increased life expectancy, and accelerated by the ongoing retirement of the baby boom generation. The first baby boomers began receiving Social Security retirement benefits in 2008, and growing numbers will become eligible for Social Security benefits in coming years. Our analysis indicates that the number of baby boomers turning 65 is projected to increase from an average of about 10,200 per day in 2018 to more than 11,000 per day in 2029 (see fig. 3).

As with the Social Security retirement program, reserves had also built up over time in the trust fund for Social Security's disability program, but in 2005, the program began paying out more than it was taking in. To avoid benefit reductions, which were expected to begin in 2016, Congress passed a law in late 2015 that temporarily reallocated some payroll tax revenue from the retirement trust fund to the disability trust fund. Even with this added boost, if no further changes are made, reductions in disability benefits are projected to be needed beginning in 2032, according to SSA's most recent report.

For both the Social Security retirement and disability programs combined, the number of workers contributing to Social Security for each aged, disabled, dependent, or surviving beneficiary is declining, due to the aging population and other factors. While there are currently 2.8 workers contributing to Social Security per beneficiary, this ratio is expected to decline to 2.2 by 2035, and to 2.0 by 2095 (see fig. 4). It is difficult to predict exactly what would occur if either Social Security's retirement or disability programs were to become insolvent because the Social Security Act does not provide for any procedure for paying less than full benefits. According to SSA, benefits could be reduced across the board by a set percentage, certain benefits could be prioritized, or benefits could be delayed.

Medicare and Medicaid

The major health care programs that include coverage for retirees, Medicare and Medicaid, also face increasing financial challenges due to program and demographic changes. For example, over the years, Congress has made changes to Medicare so that more people have become eligible, even if under age 65. Also, Congress has added two more parts to Medicare: one part allowing insurance under private plans approved by Medicare (Medicare Advantage), and another part providing prescription drug coverage. As of 2017, over 58 million people were enrolled in one or more parts under Medicare. Projections indicate that in the coming decade, as more members of the baby boom generation become eligible for benefits, the number of Medicare beneficiaries will rise to 75 million in 2027. Similar to the challenges facing Social Security, spending for Medicare Part A (Hospital Insurance) is projected to outpace revenue over time, and the trust fund for Medicare Part A is projected to be unable to pay full benefits beginning in 2026. At that time, the Hospital Insurance trust fund will only be sufficient to pay 91 percent of hospital-related Medicare spending. Medicaid, which provides health care coverage and financing for millions of low-income individuals, including those age 65 or older, also faces financial challenges.
Medicaid is the nation's primary payer for long-term services and supports, and the elderly—along with those with disabilities—are among the highest cost Medicaid beneficiaries. The federal government and states share in the financing of the Medicaid program, with the federal government matching most state expenditures for Medicaid services using a statutory formula. Estimated Medicaid outlays for fiscal year 2017 were $592.2 billion, of which $370.6 billion was financed by the federal government and $221.6 billion by the states. Over the next 7 years, Medicaid expenditures are expected to increase significantly, reaching just over $1 trillion in 2026.

The PBGC insures the pension benefits of most private sector DB plans through one of its two programs: the Single-Employer Insurance Program and the Multiemployer Insurance Program. The single-employer program is the larger of the two programs. As of the end of fiscal year 2018, the single-employer program insured about 26 million workers and retirees participating in about 23,400 private sector single-employer DB plans. As of the end of fiscal year 2018, the multiemployer program insured about 11 million workers and retirees in about 1,400 private sector DB plans created through a collective bargaining agreement between two or more employers and a union.

Although PBGC is one of the largest federal government corporations, with over $110 billion in assets, its pension benefit guarantees are increasingly at risk due to its substantial liabilities. At the end of fiscal year 2018, PBGC's net accumulated financial deficit was over $51 billion, and its exposure to potential future losses for underfunded retirement plans was estimated to be nearly $185 billion. We designated the single-employer program as high risk in July 2003 and added the multiemployer program to our high-risk list in January 2009. Concerns about PBGC's financial future have kept both programs on GAO's high-risk list. As long as PBGC's long-term financial stability remains uncertain, the retirement benefits of millions of U.S. workers and retirees are at risk of greater reductions should their benefit plans be terminated below PBGC's current guaranteed benefit levels.

In contrast to Social Security, PBGC is not funded by tax revenues, but by the premiums paid by plans or their sponsors, the assets acquired from terminated plans, and investment returns on these funds. The primary drivers of the government's fiscal exposure related to PBGC's deficit are the collective financial risk of the many underfunded pension plans insured by PBGC and the long-term decline in the number of participants covered by traditional DB plans. Since 1985, there has been a 78 percent decline in the number of plans insured by PBGC and more than 13 million fewer workers actively participating in PBGC-insured plans. There has also been a recent trend of single-employer plan sponsors transferring the liability for some of their participants to insurance companies via group annuity "buy-outs," further reducing the number of participants in PBGC-covered plans. As a result of these trends, even though PBGC premium rates have increased significantly in recent years, PBGC's premium base has been eroding over time as fewer sponsors are paying premiums for fewer participants. In addition, more recently, PBGC's net accumulated financial deficit has escalated dramatically due to the critical and declining status of a number of large multiemployer pension plans.
As we previously reported, PBGC's multiemployer insurance program is projected to become insolvent in approximately 6 years, and if that happens, participants in the insolvent multiemployer plans who rely on PBGC guarantees will receive only a small fraction of current statutory guarantees. According to PBGC, most participants would receive less than $2,000 a year, and in many cases much less.

Social Safety Net Programs

Our prior work has found that federally funded services for older Americans were not reaching many older adults who may need them, and that the funding for these programs had decreased while the number of older adults had increased. The federal government helps provide state and local governments with funding for a broad array of home and community-based services for older adults through multiple federal agencies and programs. In addition to long-term care services funded by Medicaid, these programs also include services funded under the Older Americans Act of 1965, as amended, which provides grants to states for such services as home-delivered and congregate meals, home-based care, transportation, and housing. In our 2015 report, we recommended that the Department of Health and Human Services (HHS) facilitate development of a cross-agency federal strategy to help ensure that federal resources are used effectively and efficiently to support a comprehensive system of home and community-based services and related supports for older adults. While HHS agreed with our recommendation, the agency has yet to develop a cross-agency strategy involving all five agencies that fund these services. As the number of older adults needing assistance continues to grow, the gap in services can only be expected to widen. Absent any changes, state and local governments are facing—and will continue to face—a gap between receipts and expenditures in the coming years, putting greater pressure on the federal government to increase funding.

Pillar Two: Employer-Sponsored Retirement Plans

The second pillar of the U.S. retirement system, employer-sponsored retirement plans, is also an important source of income relied upon by many Americans in their retirement. However, not everyone has access to employer-sponsored plans, and among those who do, certain provisions and requirements of the plans can make it difficult for individuals to accumulate savings over time.

Bureau of Labor Statistics data indicate that about one-third of private sector workers in the United States did not have access to an employer-sponsored retirement plan in 2016, and about two-thirds did. Of those with access, most (about 76 percent) participated in the plan, either because they were automatically enrolled by the plan sponsor or they chose to participate. Although individuals without access to an employer-sponsored plan can save for retirement on their own, having access to an employer-sponsored retirement plan makes it easier to save, and more likely that an individual will have another source of income in retirement beyond Social Security. Our prior work found that employees working for smaller firms and in certain industries, such as leisure and hospitality, are significantly less likely to have access to an employer-sponsored plan compared with those working in larger firms and in certain other industries, such as information services. Also, we found that low-income workers are much less likely than high-income workers to have access to an employer-sponsored plan.
Among those individuals who have access to employer-sponsored plans in the private sector, the structure of plans has changed over time, with a shift from traditional DB pension plans to defined contribution (DC) plans, such as 401(k)s, as the primary type of retirement plan (see fig. 5). DB plans are traditional retirement plans that generally promise to provide a benefit for the life of the participant, based on a formula specified in the plan that typically takes into account factors such as an employee's salary, years of service, and age at retirement. DC plans are employer-sponsored account-based retirement plans, such as a 401(k) plan, that allow individuals to accumulate tax-advantaged retirement savings in an individual account based on employee and/or employer contributions, and the investment returns (gains and losses) earned on the account. The amount of assets held in individual retirement accounts (IRAs) also has increased significantly. Most of the assets in IRAs are funded by assets rolled over from DC plans, and sometimes DB plans, when individuals change jobs or retire.

With DB plans, participants can accumulate retirement savings simply by continuing to work for the employer offering the plan, and the employer is responsible for ensuring that the amount in the plan is sufficient to pay promised benefits at retirement. However, even when DB plans were more prevalent, many workers did not have access, and those with access to DB plans could still face challenges under certain circumstances. For example, when DB plan participants change employers, their accrued benefits are less portable than accrued savings in a DC plan. If the change in employers takes place before they have met vesting requirements, DB plan participants can lose all the benefits accumulated from employer contributions to that point, which, in the private sector, generally means everything. Also, for DB plans that base benefits on final average salary, benefit accruals are significantly "backloaded." As a result, if a DB plan participant changes employers mid-career, it could result in missing out on the time when the biggest benefit accruals would have occurred. In addition, when entering retirement, although those with DB plans can generally rely on receiving a set monthly benefit for life, they may still face challenges. For example, participants in certain financially troubled plans—such as those in the multiemployer plans discussed earlier—could see their benefits being suspended or cut. In addition, if a DB plan participant is offered and accepts a lump-sum payment in place of a lifetime annuity, the participant may face challenges similar to those with DC accounts in terms of managing the spend down of their retirement savings.

With DC plans, responsibility for planning and managing retirement savings is shifted from employers to employees. Participants in DC plans are often required to make complex financial decisions—decisions that generally require financial literacy and that could have significant consequences for their financial security throughout retirement. For example, workers with DC plans have to decide whether to participate in the plan, how much to contribute to their accounts, and how to manage their investments to strike the right balance between risk and returns. One way DC plan enrollment and contribution levels can be encouraged is by putting automatic mechanisms in place.
For example, DC plan sponsors can encourage participation in the plan by adopting auto-enrollment, whereby eligible workers are enrolled into a plan automatically, unless they choose to opt out. DC plan sponsors can also encourage increases in contribution rates by adopting auto-escalation, whereby the employee's contributions are automatically increased to a predetermined level on a set schedule, unless they choose to opt out. Participants in DC plans also have to decide whether to borrow from their accounts if other needs arise, or cash out their accounts when they change jobs. When leaving an employer, those with DC accounts may be allowed to transfer their accumulated balances into a new employer plan or an IRA, but they may also be tempted to cash out their accounts, even though they may face associated tax consequences. Similarly, when entering retirement, those with DC accounts may decide to transfer the account balance into an IRA, or they may decide to receive the funds in a lump-sum payment. While some DC plans also offer monthly payments through an annuity, most do not provide lifetime income options or other options that can help participants draw down their retirement funds in a systematic way.

Findings from the most recent SCF indicate that an individual's ability to accumulate retirement savings depends on the individual's income level. In addition, the disparities in average account balances by income level have increased markedly over time (see fig. 6). For example, according to SCF data, households in the top 10 percent by income appeared to be substantially better prepared for retirement than most others, with an average account balance of more than $720,000 in 2016. In contrast, households with below average income, in the second quintile, had an average account balance of about $47,000. Among lower-income households, our prior work suggests that cashing out accounts when changing jobs may be a significant drain on retirement savings, along with unexpected events that may also cause them to withdraw funds from their accounts prior to retirement.

Retirement experts have posited a variety of reasons for employers' shift to DC plans. One oft-cited reason is that the structure of DC plans gives employers better control over how much they spend on wages and benefits packages. With DC plans, employers may choose whether to make contributions to participants' individual accounts; in contrast, DB plans promise a certain future monthly benefit to employees in retirement, and the employer must bear the risk of making adequate contributions to the plan to make good on that promise. Another reason retirement experts cite for the shift to DC plans was the introduction of 401(k) accounts in the Internal Revenue Code in 1978, which they credit with fostering the adoption of account-based plans by sanctioning the use of salary deferrals as a source of contributions. Some retirement experts have also suggested that employees' preferences and demands have changed over time, making DC plans more feasible and, in some respects, more appealing. For example, some analysts have noted that the portability of an account-based plan can be better suited to meet the needs of a more mobile workforce.

Pillar Three: Individuals' Savings and Other Resources

The third pillar of the retirement savings system—individuals' personal savings—is the remaining important source of retirement income, and it also faces certain risks and challenges.
Personal savings can include a variety of assets, such as amounts saved from income or wages; contributions to accounts outside of a retirement plan; non-retirement financial wealth that is inherited or accumulated over time; and equity from tangible assets such as a home. These savings are expected to augment any income from the first two pillars: Social Security and employer-sponsored retirement plans. Over the past several decades, however, the personal saving rate—which is calculated as the proportion of disposable income that households save—has trended steeply downward, from a high of 14.2 percent in 1975 to a low of 3.1 percent in 2005, before recovering somewhat to 6.8 percent in 2018 (see fig. 7). While the specific implications of a historically low national saving rate on any current or future retiree are less clear, the decline in the U.S. personal saving rate over time is concerning and could have implications for retirement security, particularly when coupled with the recent trend of low wage growth. After accounting for inflation, average wages remain near the levels they were in the 1970s for most individuals (see fig. 8), adding to the difficulty of increasing their level of saving.

In addition, many households have accumulated little wealth. SCF data show that among households in which the head of the household was working, the average value of all financial assets, excluding savings in retirement accounts, was $70,700 in 2016. For households in which the head was retired, this average was $89,700. For those who become home owners and build up equity in a home, this equity can serve as an important asset, providing a potential income source in retirement either by selling the home or obtaining a reverse mortgage. However, increased household debt levels may affect the amount of income available from this source, as well as from other assets. Data on the make-up of debt indicate that home ownership has been declining, while education debt has been rising, especially since 2013.

Another challenge with implications for individuals' ability to accumulate personal savings is that, economy-wide, aggregate health care expenditures are projected to continue to grow as a percentage of the overall economy, and individuals have to contend with rising health care costs as they strive to save for retirement. CMS projections estimate that the annual growth rate of out-of-pocket health care spending for the U.S. population, per capita, will increase from 3.0 percent in 2018 to about 3.8 percent by 2026. While these costs are projected to rise for the population as a whole, individuals age 65 and over face the highest out-of-pocket health-related expenses. Further, for many retirees, health care expenses can be large relative to other expenses and hard to predict, making it difficult to determine how much retirement income to plan to spend on health care.

Simultaneously, trends in longer life expectancy have the potential to increase economic vulnerability for retirees. Specifically, life expectancy for those age 65 or older has increased significantly over the past century and is projected to continue to increase. For example, a man turning 65 in 2030 is expected to live to age 85.0, on average, an additional 5.3 years compared to a man who turned 65 in 1980, who was only expected to live to age 79.7, on average.
A woman turning 65 in 2030 is expected to live to age 87.2, on average, an additional 3.4 years compared to a woman who turned 65 in 1980, who was only expected to live to age 83.8, on average. Moreover, these life expectancies are averages, with some individuals living well beyond their life expectancy. As a result, people must now prepare for this greater longevity risk—that is, the risk that they will spend more years in retirement and potentially outlive their savings. For those who lack sufficient personal savings or other assets to augment their Social Security benefit or income from any employer-sponsored plan, the only option to maintain a desired standard of living may be to continue working past age 65. Our prior work has found that labor force participation among older workers has increased during the last decade and that, compared to current retirees, workers age 55 or older were more likely to expect to retire later and to work during retirement. Our prior work has also identified challenges in maintaining retirement savings should older workers become unemployed.

The Need to Re-evaluate the Nation's Approach to Financing Retirement

Over the past 40 years, the nation has taken an incremental approach to addressing the U.S. retirement system; however, such an approach may not be able to effectively address the interrelated, foundational nature of the challenges facing the system today. Without a more comprehensive re-evaluation of the myriad challenges across all three pillars of the retirement system, identifying effective, enduring solutions may be difficult, and the consequences could be significant. Unless timely action is taken, many older Americans risk not having sufficient means for a secure and dignified retirement in the future.

Retirement Issues Have Been Addressed with an Incremental Approach

Congress has generally sought to address retirement-related issues and concerns one issue at a time. As highlighted in appendix II, at least 25 laws pertaining to retirement have been enacted since ERISA. Some laws—such as the Social Security Amendments of 1983 and the Pension Protection Act of 2006—made large changes to the retirement system. Other laws were more targeted. For example, in 1984, Congress amended ERISA to address concerns that women were not receiving their share of private pension benefits by, among other things, permitting certain breaks in service without loss of pension credits, and changing treatment of pension benefits for widowed and divorced spouses. Similarly, in 1996, Congress created a simplified retirement savings vehicle for employers with 100 or fewer employees to help address concerns that smaller employers were not sponsoring plans.

The number of agencies that play roles in the current retirement system has also contributed to the incremental approach to addressing concerns, with no single federal agency being responsible for taking a broad view of the system as a whole. As described earlier, there are at least 10 agencies that have a role in overseeing some part of the system, or that are involved in providing supports and services to older Americans. In addition to DOL, IRS, and PBGC, which are the agencies generally responsible for administering ERISA, SSA administers the Social Security program, and HHS oversees CMS, which administers the health care programs for retirees. In addition, various other agencies play a role in providing a range of services and supports to assist older adults through retirement.
Having multiple agencies involved in the system has also contributed to a complex web of programs and requirements. For example, our prior work identified more than 130 reports and disclosures stemming from provisions of ERISA and the Internal Revenue Code. Although each plan sponsor is required to submit only certain of these reports and disclosures, determining which ones apply can be challenging, and we found that the agencies' online resources to aid plan sponsors with this task were neither comprehensive nor up to date. We made several recommendations to address these issues that have not been fully implemented.

Need for More Comprehensive Reform of the U.S. Retirement System

While three federal commissions have focused on various retirement issues (see app. III), it has been nearly 40 years since the last comprehensive evaluation of the nation's approach to financing retirement by a federal commission. The 1979 President's Commission on Pension Policy conducted a broad study of retirement-related issues and made a series of overarching recommendations, such as the creation of a minimum universal pension system that would provide a portable benefit for all workers as a supplement to Social Security. Other recommendations included federal protections for participants in state and local government plans, more consistent tax treatment of pension plans and retirement savings vehicles, provisions to strengthen Social Security, as well as proposals regarding employment of older workers and disability programs. However, many of the commission's recommendations were not implemented.

The issues identified nearly 40 years ago by the 1979 commission's comprehensive re-evaluation of the U.S. retirement system continue to be issues facing the nation today. In fact, these issues have only become more complex and more urgent due to fundamental changes that have occurred since 1979—especially the growing fiscal exposure to the federal government and the shift from DB to DC plans, with its associated increase in risks and responsibilities for individual workers. Taken together, these changes may make it harder for retirees to achieve financial security in retirement, especially for those without access to employer-sponsored plans and at the lower end of the income scale.

A panel of 15 retirement experts convened by GAO in November 2016 agreed that there is a need for a new comprehensive evaluation of the U.S. retirement system. They noted weaknesses in the current system's ability to help ensure that all individuals can provide for a secure retirement. They also discussed the burden that the current system's complexity places on individuals, employers, and the federal government. Although there was agreement among many panelists that a more comprehensive approach would be needed to provide a secure retirement for future retirees, opinions varied on the types of solutions needed. For example, some panelists suggested that a new government-sponsored savings vehicle should be created, while others supported modifying the existing employer-sponsored system to make any needed changes. In addition, several panelists commented on how the current system can be overly complex and confusing for employers, especially small employers. They discussed how the current private sector system poses financial and litigation risk for employers, especially with respect to investment decisions, fiduciary duty, and fees.
For example, one panelist suggested that DC plan sponsors may welcome the federal government providing more guidance on the types of investments that would be regarded as prudent and safe as a way to reduce their litigation risk. Panelists also noted that the experiences of other countries can provide useful insights for ways to improve U.S. retirement programs and policies. For example, some panelists described the approach being taken by the United Kingdom (UK) as a potential model for expanding access to retirement savings plans. In the UK model, universal access for workers was implemented by mandating that all employers automatically enroll employees in either their own or the government-sponsored retirement savings plan, the National Employment Savings Trust.

In our 2017 report, we suggested five policy goals for a reformed U.S. retirement system as a starting point for discussion: (1) promoting universal access to a retirement savings vehicle, (2) ensuring greater retirement income adequacy, (3) improving options for the spend down phase of retirement, (4) reducing complexity and risk for both participants and plan sponsors, and (5) stabilizing fiscal exposure to the federal government (see table 1 for more detail on these goals).

Reforming the nation's retirement system to meet all of these goals, or others identified by the Congress, will require a careful and deliberative approach. For example, some type of consensus about the goals would need to be established as a first step. Broad questions are likely to be raised about how each of the goals should be achieved. The examination of relevant issues by past federal commissions, the discussions at our November 2016 panel, as well as what we can learn from the experiences of other countries, further illustrate how complex any reform effort is likely to be. Also, we recognize that some of these goals may compete with each other—in particular, ensuring greater retirement security and minimizing fiscal exposure to the federal government. Therefore, a balanced approach will be required, which can only result from a more holistic examination of the issues by those representing a broad range of perspectives.

As a result, we recommended that Congress consider establishing an independent commission to comprehensively examine the U.S. retirement system and make recommendations to clarify key policy goals for the system and improve the nation's approach to promoting more stable retirement security. We suggested that such a commission include representatives from government agencies, employers, the financial services industry, unions, participant advocates, and researchers, among others, to help inform policymakers on changes needed to improve the current U.S. retirement system.

Chairman Collins, Ranking Member Casey, and Members of the Committee, this concludes my prepared remarks. I would be happy to answer any questions that you may have.

GAO Contacts and Staff Acknowledgments

For further information regarding this testimony, please contact Charles A. Jeszeck at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact above, Margie K. Shields, Assistant Director; Jennifer Gregory, Analyst-in-Charge; Justine Augeri; and Gustavo O. Fernandez made key contributions to this publication. Also contributing to this report were Barbara D.
Bovbjerg, Managing Director, Education, Workforce, and Income Security Issues; Oliver Richard, Chief Economist; Frank Todisco, Chief Actuary; James Bennett, Deborah Bland, Corinna Nicolaou, and Adam Wendel, with assistance from others who worked on our 2017 report.

Appendix I: GAO's Expert Panel on the State of Retirement

We convened a panel of retirement experts in November 2016 to obtain their insights on the condition of retirement in the United States and various options for a new approach to help ensure that all individuals can provide for a secure retirement. This appendix provides a description of our methodology for selecting the panel. (See text box for final list of 15 experts participating in our panel.)

To identify the experts to invite to this meeting, we compiled an initial list based on interviews with experts conducted during recent GAO retirement income security work and the organizations invited to participate in a 2005 GAO forum on the future of the defined benefit system and the Pension Benefit Guaranty Corporation. Potential experts were identified based on the following criteria:

Organizational type: To ensure that we considered the unique roles or situations of various entities involved in retirement income policy, we selected panelists from the federal government, state or local government, research institutes or universities, advocacy or membership organizations, and financial services firms.

Organizational reputation: To ensure that our panelists span political perspectives, we selected panelists from organizations known to be conservative, moderate, and liberal (to the extent the reputation for the organization could be easily identified).

Subject matter expertise: To ensure that the discussion considered as many aspects of retirement income security as possible, we selected panelists with expertise across a range of areas, including defined benefit (DB) plans, defined contribution (DC) plans, individual retirement accounts (IRAs), demographic trends, vulnerable populations, actuarial science, income in retirement, financial literacy, and behavioral finance.

Range of views: To ensure that our discussion was inclusive of different philosophies regarding the role of government with regard to the population and the economy, we selected panelists to represent the viewpoints of individuals and business.

Representation of diverse groups: To ensure that the discussion benefited from different viewpoints, we selected panelists to reflect gender, racial, and ethnic diversity.

An initial list of 41 potential experts was shared with GAO management officials with expertise in retirement issues, actuarial science, and strategic planning, as well as GAO methodologists, for their comments and suggestions. From this, we developed a shorter list, eventually arriving at our final group of 15, listed above. These final 15 panelists were also evaluated for conflicts of interest. A conflict of interest was considered to be any current financial or other interest that might conflict with the service of an individual because it (1) could impair objectivity and (2) could create an unfair competitive advantage for any person or organization. All potential conflicts were discussed by GAO staff. The 15 experts were determined to be free of conflicts of interest, and the group as a whole was judged to have no inappropriate biases. Panelists engaged in a day-long discussion about our nation's approach to retirement policy (see text box).
The discussion was guided by a list of questions developed in advance, and the meeting was conducted by a GAO moderator to ensure that all panelists had an opportunity to participate and provide responses.

State of Retirement Expert Panel Agenda

Session 1: How Well Is Our Current National Approach to Retirement Security Working?
Preamble: Retirement income sources in the United States have often been referred to as a three-legged stool – Social Security, employer-sponsored retirement plans, and personal savings.
1. Can the U.S. retirement system today still be accurately described by these three retirement income sources? Why/why not?
2. Are there aspects of our nation's approach to retirement income security that are working well? If so, are these aspects functioning well for all, or only for particular populations?
3. Are there aspects of our nation's approach to retirement income security that are concerning? If so, what are your biggest concerns?
4. Are there any specific populations you are particularly concerned about? If so, which ones and why?

Session 2: Reevaluating the Roles of the Federal Government, Employers, and Individuals
Preamble: Key actors in assuring a secure retirement have traditionally included the federal government, employers, and individuals, but their roles have evolved over time.
Are there ways roles could or should be adapted or modified to address the strengths and weaknesses that have been identified for:
Federal government?
Employers?
Individuals?

Session 3: Reevaluating Our Nation's Approach to Retirement Policy
Preamble: Various proposals for a broader, more cohesive approach to retirement policy have been made over time.
1. Do you believe there is a need for some type of national retirement policy?
2. If such a policy were to be proposed:
2a. What could or should be the primary goals of such a policy?
2b. What could or should be the roles of key actors in achieving those goals?
3. What do you believe could be the greatest benefits of a national retirement policy?
4. What do you believe could be the greatest risks or potential downsides of a national retirement policy?
5. What barriers exist to creating a national retirement policy and how could the federal government best address these barriers?

Appendix II: Selected Federal Legislation Related to Retirement Security, 1960–Present

The chronology below highlights selected federal legislation related to retirement security in the United States since 1960. It is based on a larger chronology included in our prior special product on the nation's retirement system (GAO-18-111SP). The chronology is intended to illustrate the incremental approach that the nation has taken to improving the U.S. retirement system and to convey the changes that the legislation enacted at the time. It is not intended to provide an exhaustive list of legislation that has impacted retirement in the United States, to make statements about current provisions of the law, or to provide comprehensive descriptions of each law.

Chronology of Selected Federal Legislation Shaping Retirement in the United States (1960–Present)

Social Security Amendments of 1961
Selected provision: Enacted a provision for men, comparable to the provision enacted for women in 1956, concerning early retirement at age 62.

Self-Employed Individuals Tax Retirement Act of 1962
Selected provision: Imposed minimum distribution requirements for self-employed participants in a qualified plan generally beginning at age 70 ½.
Social Security Amendments of 1965
Selected provisions: Enacted new titles to the Social Security Act for Medicare and Medicaid. Medicare provided hospital, post-hospital extended care, and home health coverage to almost all Americans age 65 or older; Medicaid provided states with the option of receiving federal funding for providing health care services to certain low-income and medically needy individuals.

Age Discrimination in Employment Act of 1967
Selected provisions: Made it unlawful for an employer to discriminate against any individual with respect to compensation, terms, conditions, or privileges of employment because of age; and required the Secretary of Labor to carry on a continuing program of education and information, which could include research with a view to reducing barriers to the employment of older persons.

Employee Retirement Income Security Act of 1974 (ERISA)
Selected provisions: Regulated private sector employers who offer pension or welfare benefit plans for their employees. Title I: Imposed reporting and disclosure requirements on plans; imposed certain responsibilities on plan fiduciaries. Title II: Strengthened participation requirements for employees age 25 and over; established vesting rules; required that a joint and survivor annuity be provided; and established minimum funding standards. In addition, provided individual retirement accounts (IRAs) for persons not covered by pensions. Title IV: Required certain employers and plan administrators to fund an insurance system to protect certain kinds of retirement benefits (i.e., to pay premiums to the federal government's Pension Benefit Guaranty Corporation (PBGC)).

Revenue Act of 1978
Selected provisions: Established qualified deferred compensation plans called 401(k) plans after 26 U.S.C. § 401(k), which allowed for pre-tax employee contributions to such plans (known as elective deferrals).

Multiemployer Pension Plan Amendments Act of 1980
Selected provisions: Strengthened the funding requirements for multiemployer pension plans; authorized plan preservation measures for financially troubled multiemployer plans; and revised the manner in which insurance provisions applied to multiemployer plans.

Tax Equity and Fiscal Responsibility Act of 1982
Selected provisions: Reduced the maximum annual addition (employer contributions, employee contributions, and forfeitures) for each participant in a defined contribution (DC) plan; reduced the maximum annual retirement benefit for each participant in a defined benefit (DB) plan; introduced special rules for "top heavy" plans (i.e., plans in which more than 60 percent of the present value of the cumulative accrued benefits under the plan for all employees accrue to key employees, including certain owners and officers); and expanded minimum distribution requirements to all qualified plans.

Social Security Amendments of 1983
Selected provisions: Gradually raised the normal retirement age from 65 to 67, depending on an individual's year of birth; expanded coverage; increased the self-employment tax for self-employed persons; subjected a portion of Social Security benefits to federal income tax for the first time; and changed how cost-of-living adjustments are calculated when trust funds are low.
Deficit Reduction Act of 1984
Selected provisions: Amended nondiscrimination testing requirements for 401(k) plans and required minimum distribution rules, and restricted prefunding of certain employee post-retirement welfare benefits (such as disability and medical benefits).

Retirement Equity Act of 1984
Selected provisions: Changed participation rules by lowering the minimum age that a plan may require for enrollment (from age 25 to 21), and permitted certain breaks in service without loss of pension credits. Also, strengthened treatment of pension benefits for widowed and divorced spouses.

Single-Employer Pension Plan Amendments Act of 1986
Selected provisions: Raised the per-participant PBGC premium from $2.60 to $8.50; established certain distress criteria that a contributing sponsor or substantial member of a contributing sponsor’s controlled group must meet in order to terminate a single-employer plan under a distress termination; established certain criteria for PBGC to terminate a plan that does not have sufficient assets to pay benefits that are currently due (referred to as “involuntary terminations”); and created a new liability to plan participants for certain non-guaranteed benefits.

Federal Employees’ Retirement System Act of 1986
Selected provisions: Established the Federal Employees’ Retirement System (FERS). Unlike the existing Civil Service Retirement System (CSRS), retirement and disability benefits under FERS were structured to be fully funded by employee and employer contributions and interest earned by the bonds in which the contributions were invested. The DB under FERS was lower than under CSRS, but FERS also included a DC plan component: the Thrift Savings Plan.

Omnibus Budget Reconciliation Act of 1986
Selected provisions: Required employers that sponsor pension (DB) plans and retirement savings plans (DC plans such as a 401(k)) to provide benefit accruals or allocations for employees who work beyond their normal retirement age.

Tax Reform Act of 1986
Selected provisions: Established faster minimum vesting schedules; adjusted limitations on contributions and benefits for qualified plans; limited the exclusion for employee elective deferrals to $7,000; and amended nondiscrimination coverage rules. Also, restricted the allowable tax-deductible contributions to IRAs for individuals with incomes above a certain level and who participate in employer-sponsored pension plans, and imposed an additional 10 percent tax on early distributions (before age 59 ½) from a qualified retirement plan.

Omnibus Budget Reconciliation Act of 1987
Selected provisions: Strengthened funding rules for pension plans and the level and structure of PBGC premiums.

Omnibus Budget Reconciliation Act of 1993
Selected provisions: Reduced compensation taken into account in determining contributions and benefits under qualified retirement plans, and expanded taxation of Social Security benefits.

Retirement Protection Act of 1994
Selected provisions: Strengthened funding rules for pension plans.

Small Business Job Protection Act of 1996
Selected provisions: Created a type of simplified retirement savings vehicle for small employers; added a nondiscrimination safe harbor for 401(k) plans; amended the definition of highly compensated employee; and modified certain participation rules for DC plans.
Taxpayer Relief Act of 1997
Selected provision: Established Roth IRAs, under which contributions are after-tax, but distributions after age 59½ are tax-free.

Senior Citizens’ Freedom to Work Act of 2000
Selected provision: Amended the Social Security Act to eliminate the earnings limit for individuals who have reached their normal retirement age.

Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA)
Selected provisions: Increased the individual elective deferrals that may be made to a 401(k) plan; added “catch-up contributions” that allow individuals age 50 or older to make additional contributions; increased the maximum annual contributions to DC plans and individual retirement accounts; increased the maximum annual benefits under a DB plan; increased the compensation limit for qualified trusts; reduced the minimum vesting requirements for matching contributions; and changed the rules that permit plans to cash out benefits without participant consent.

Sarbanes-Oxley Act of 2002
Selected provision: Added a new requirement that individual account pension plans provide notice to participants and beneficiaries in advance of periods during which the ability of participants or beneficiaries to take certain actions with respect to their accounts will be temporarily suspended, limited, or restricted (referred to as “blackout periods”).

Deficit Reduction Act of 2005
Selected provisions: For plan years that begin after December 31, 2005, set the PBGC flat-rate premium for multiemployer plans at $8.00; and, for each plan year that begins after 2006, indexed future premium levels to the national average wage index.

Pension Protection Act of 2006 (PPA)
Selected provisions: Strengthened the minimum funding requirements for DB plans; set certain benefit limitations for underfunded DB plans; enhanced the protections for spouses; amended plan asset diversification requirements; changed provisions concerning the portability of pension plans; allowed the adoption of automatic enrollment and target date funds for DC plans; and increased reporting and disclosure requirements for plan sponsors.

Worker, Retiree, and Employer Recovery Act of 2008
Selected provision: Modified PPA’s funding requirements to grant relief for single-employer DB plans.

Moving Ahead for Progress in the 21st Century Act (MAP-21)
Selected provisions: Provided funding relief for single-employer DB plans by changing the interest rates used to reflect a 25-year historical average; increased premium rates for sponsors of single-employer and multiemployer DB plans; and included other provisions intended to improve the governance of PBGC.

American Taxpayer Relief Act of 2012
Selected provisions: Extended the tax-free treatment of distributions from IRAs made for charitable purposes; allowed for certain in-plan transfers to a Roth account.

Multiemployer Pension Reform Act of 2014 (MPRA)
Selected provisions: Allowed severely underfunded multiemployer plans, under certain conditions and with the approval of federal regulators, the option to reduce the retirement benefits of current retirees to avoid plan insolvency; and expanded PBGC’s ability to intervene when plans are in financial distress.

Bipartisan Budget Act of 2018
Selected provisions: Established a temporary Joint Select Committee on Solvency of Multiemployer Pension Plans. The goal of the Joint Select Committee was to improve the solvency of multiemployer pension plans and PBGC.
Appendix III: Structure, Scope, and Recommendations of Three Past Federal Commissions on Retirement Issues

Since the enactment of ERISA, there have been three federal commissions on retirement issues: the President’s Commission on Pension Policy, the National Commission on Social Security Reform, and the President’s Commission to Strengthen Social Security (see table 2). We examined these commissions to gain insights on possible structures for federal commissions, the scope of work these commissions can take on, and the types of recommendations they can make.

Carter Commission (1979–1981)

In 1978, President Carter signed an executive order authorizing the Carter Commission, which was established when commission members were appointed in 1979. The commission was to conduct a 2-year study of the nation’s pension systems and the future course of national retirement income policies. President Carter appointed all 11 commission members. The commission also had an executive director and 37 staffers. Its final report, Coming of Age: Toward a National Retirement Income Policy, was released in February 1981.

Charge to the Carter Commission

The commission was ordered to:

Conduct a comprehensive review of retirement, survivor, and disability programs existing in the United States, including private, federal, state, and local programs.

Develop national policies for retirement, survivor, and disability programs that can be used as a guide by public and private programs. The policies were to be designed to ensure that the nation had effective and equitable retirement, survivor, and disability programs that took into account available resources and demographic changes expected into the middle of the next century.

Submit to the President a series of reports including the commission’s findings and recommendations on short-term and long-term issues with respect to retirement, survivor, and disability programs.

The commission was charged with covering the following issues in its findings and recommendations:

overlaps and gaps among the private, state, and local sectors in providing income to retired, surviving, and disabled persons;

the financial ability of private, federal, state, and local retirement, survivor, and disability systems to meet their future obligations;

appropriate retirement ages, the relationship of annuity levels to past earnings and contributions, and the role of retirement, survivor, and disability programs in private capital formation and economic growth;

the implications of the recommended national policies for the financing and benefit structures of the retirement, survivor, and disability programs in the public and private sectors; and

specific reforms and organizational changes in the present systems that may be required to meet the goals of the national policies.

Carter Commission’s Recommendations

In its final report, the Carter Commission prescribed a goal for retirement income policy and made numerous recommendations. According to the report, a desirable retirement income goal is the replacement of pre-retirement income from all sources. Recommendations focused on strengthening four areas: employer pensions, Social Security, “individual efforts” (personal savings, employment of older workers, and disability), and public assistance. Recommendations were also made regarding the administration of the U.S. retirement system. Examples of ways to strengthen each area follow:

Strengthening Employer Pensions.
The commission recommended establishing a Minimum Universal Pension System (MUPS) for all workers. MUPS was intended to provide a portable benefit that was supplemental to Social Security. It would have built upon existing employer plans, and existing plans that did not meet the requirements would have needed to be amended. Another recommendation was to establish a Public Employee Retirement Income Security Act (i.e., a public sector version of ERISA) so that public and private sector employees would receive similar protections.

Strengthening Social Security. The commission recommended mandatory universal coverage; raising the retirement age for workers who were not approaching retirement; and re-examining or making adjustments to the special minimum benefit, the spousal benefit, and other miscellaneous benefits.

Strengthening Individual Efforts. The commission recommended that contribution and benefit limitations for all individuals should be treated more consistently for all types of retirement savings. The commission also recommended a refundable tax credit for low- and moderate-income individuals to encourage saving for retirement. For older workers, recommendations included improving unemployment benefits to provide short-term income maintenance and keep them in the labor force. The commission also recommended further in-depth study of the Disability Insurance program.

Strengthening Public Assistance. The commission made recommendations to address inflation protection for retirement income, to set Social Security’s Supplemental Security Income at the poverty line level, and to eliminate its assets test.

Administration. The commission recommended consolidating the administration of all federal retirement systems as well as consolidating ERISA administrative functions under one entity. It also recommended an interdepartmental task force to coordinate executive branch agencies dealing with retirement income.

Greenspan Commission (1981–1983)

In 1981, President Reagan signed an executive order establishing the Greenspan Commission. The President asked the commission to conduct a 1-year study and propose realistic, long-term reforms to put Social Security on sound financial footing and to reach bipartisan consensus so these reforms could be passed into law. The President, the Senate Majority Leader, and the Speaker of the House of Representatives each made five appointments, with no more than three of the five appointments coming from one political party to ensure a bipartisan commission. The President was responsible for appointing the commission’s chair. The commission had a staff of 23. The final report, Report of the National Commission on Social Security Reform, was issued on January 20, 1983.

Charge to the Greenspan Commission

The commission was ordered to:

Review relevant analyses of the current and long-term financial condition of the Social Security Trust Funds;

Identify problems that could threaten the long-term solvency of such funds;

Analyze potential solutions to such problems that would both assure the financial integrity of the Social Security system and appropriate benefits; and

Provide appropriate recommendations to the Secretary of Health and Human Services, the President, and Congress.

Greenspan Commission’s Recommendations

In its final report, the Greenspan Commission found both short- and long-term financing problems and recommended that action be taken to strengthen the financial status of the Social Security program.
Twelve commission members voted in favor of a consensus package with 13 recommendations to address Social Security’s short-term deficit, including, for example:

Expand Social Security to include coverage for nonprofit and civilian federal employees hired after January 1, 1984, and prohibit the withdrawal of state and local employees.

Shift cost-of-living adjustments to an annual basis.

Make the Social Security Administration its own separate, independent agency.

Make adjustments to spousal and survivor benefits.

Revise the schedule for Social Security payroll taxes.

Establish the taxation of benefits for higher-income persons.

In addition, these 12 commission members agreed that the long-range deficit should be reduced to approximately zero, and their recommendations were projected to meet about two-thirds of the long-range financial deficit. Seven of the 12 members agreed that the remaining one-third of the long-range financial deficit should be met by a deferred, gradual increase in the normal retirement age, while the other 5 members agreed that it should be met by an increase in future contribution rates starting in 2010.

After the Greenspan Commission’s final report was issued, Congress enacted the Social Security Amendments of 1983. The amendments incorporated many of the Greenspan Commission’s recommendations and made comprehensive changes to Social Security coverage, financing, and benefit structure. These changes included addressing Social Security’s long-term financing problems by gradually increasing the retirement age from 65 to 67, among other things.

President’s Commission to Strengthen Social Security (2001)

In 2001, President Bush signed an executive order establishing the President’s Commission to Strengthen Social Security. The President asked the commission to produce an interim report describing the challenges facing the Social Security system and the criteria by which the commission would evaluate reform proposals, as well as a final report to set forth the commission’s recommendations regarding how to strengthen Social Security with personal accounts. The commission consisted of sixteen members appointed by the President, of whom no more than eight were of the same political party. The final report, Strengthening Social Security and Creating Personal Wealth for All Americans, was issued in December 2001.

Charge to the President’s Commission to Strengthen Social Security

The commission was asked to submit to the President bipartisan recommendations to modernize and restore fiscal soundness to the Social Security system according to the following principles:

Modernization must not change Social Security benefits for retirees or near-retirees;

The entire Social Security surplus must be dedicated to Social Security;

Social Security payroll taxes must not be increased;

Government must not invest Social Security funds in the stock market;

Modernization must preserve Social Security’s disability and survivors components; and

Modernization must include individually controlled, voluntary personal retirement accounts, which will augment the Social Security safety net.

The President’s Commission to Strengthen Social Security Recommendations

In its final report, the commission offered three models for Social Security reform. All three models shared a common framework whereby voluntary individual accounts were established in exchange for a reduction in the defined portion of the Social Security benefit.
According to the report:

Reform Model 1 would have established a voluntary personal account option, but did not specify other changes in Social Security’s benefit and revenue structure, and was intended to achieve full long-term sustainability.

Reform Model 2 would have enabled future retirees to receive Social Security benefits at least as great as those of then-current retirees and increased Social Security benefits paid to low-income workers. Model 2 would have established a voluntary personal account without raising taxes or requiring additional worker contributions. It was intended to achieve solvency and balance Social Security revenues and costs.

Reform Model 3 would have established a voluntary personal account option that generally enabled workers to reach or exceed then-current scheduled benefits and wage replacement ratios. It was intended to achieve solvency by adding revenues and by slowing benefit growth less than price indexing.
Why GAO Did This Study

Strengthening the U.S. retirement system to be more accessible and financially sound is important to ensuring that all Americans can retire with dignity and security, and to managing the fiscal exposures to the federal government from various retirement-related programs. Currently, the U.S. retirement system, and many of the workers and retirees it was designed to help, face major challenges. This testimony discusses (1) the fiscal risks and other challenges facing the U.S. retirement system, and (2) the need to re-evaluate our nation's approach to financing retirement. It is based on a 2017 report, GAO-18-111SP, on the nation's retirement system, with updated statistics when more recent estimates from publicly available sources were available.

What GAO Found

Fundamental changes over the past 40 years have led to various risks and challenges for the three main pillars supporting the U.S. retirement system. For example, current projections indicate that by 2034, the Old-Age and Survivors trust fund for Social Security's retirement program—the first pillar—will only be sufficient to pay 77 percent of scheduled benefits, due in part to the aging of the population (see figure). Other federal government retirement-related programs also face financial uncertainty. For example, the Pension Benefit Guaranty Corporation, which insures the pension benefits of most private sector defined benefit plans, estimates a greater than 90 percent chance that the multiemployer program will be insolvent by 2025. Meanwhile, employer-sponsored plans—the second pillar—have experienced a shift from traditional defined benefit (DB) plans that generally provide set monthly payments for life, to defined contribution (DC) account-based plans, like 401(k)s. DC plans provide greater portability of savings that can be better suited to the needs of a more mobile workforce, but also require individuals to assume more responsibility for planning and managing their savings. While DC plans can provide meaningful retirement security for many, especially higher earners, lower earners appear more prone to having little or no savings in their DC accounts. Further, individuals' savings—the third pillar—may be constrained by economic trends such as low real wage growth and growing out-of-pocket health care costs. Combined with increased longevity, these challenges can put individuals at greater risk of outliving their savings, and fiscal pressures on government programs will likely grow.

Congress generally has sought to address retirement-related issues in an incremental fashion. Also, no one agency is responsible for overseeing the U.S. retirement system in its entirety, so there is no obvious federal agency to lead a comprehensive reform effort. It has been nearly 40 years since a federal commission has conducted a comprehensive evaluation of the nation's approach to financing retirement. Without a more comprehensive re-evaluation of the challenges across all three pillars of the system, it may be difficult to identify effective, enduring solutions. Unless timely action is taken, many older Americans risk not having sufficient means for a secure and dignified retirement.

What GAO Recommends

In the 2017 report, GAO recommended that Congress consider establishing an independent commission to comprehensively examine the U.S. retirement system and make recommendations to clarify key policy goals for the system and improve how the nation promotes retirement security.
Background

As shown in table 1, the cost of counting the nation’s population has been escalating with each decade. The 2010 Census was the most expensive in U.S. history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion 2000 Census (in 2020 dollars). According to the Bureau, the total cost of the 2020 Census was estimated at $12.3 billion in October 2015; by October 2017, that estimate had grown to approximately $15.6 billion, an increase of roughly $3 billion. Additionally, Bureau officials told us that while the estimated cost of the census had increased to $15.6 billion, the Bureau was nevertheless managing the 2020 Census to a lower cost of $14.1 billion. Bureau officials explained that the $14.1 billion includes all program costs and contingency funds to cover risks and general estimating uncertainty. The remaining $1.5 billion of the estimate is additional contingency for “unknown unknowns”—that is, low-probability events that could cause massive disruptions—and several what-if scenarios, such as an increase in the wage rate or additional supervisors needed to manage field operations.

Moreover, as shown in figure 1, the average cost of counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 constant dollars). At the same time, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010. Declining mail response rates have led to higher costs because the Bureau sends temporary workers to each non-responding household to obtain census data.

Achieving a complete and accurate census has become an increasingly daunting task, in part because the population is growing larger, more diverse, and more reluctant to participate in the enumeration. In many ways, the Bureau has had to invest substantially more resources each decade to conduct the enumeration. In addition to these external societal challenges, the Bureau also faces a number of internal management challenges that affect its capacity and readiness to conduct a cost-effective enumeration. Some of these issues—such as acquiring and developing IT systems and preparing reliable cost estimates—are long-standing in nature. At the same time, as the Bureau looks toward 2020, it has faced emerging and evolving uncertainties. For example, on March 26, 2018, the Secretary of Commerce announced his decision to add a question to the decennial census on citizenship status, which resulted in various legislative actions and legal challenges. Ultimately, the case was heard by the U.S. Supreme Court, which, in a June 26, 2019, ruling, prevented the addition of the question because the Court found that the evidence Commerce provided in the case did not match the Secretary’s explanation. In addition, the Fourth Circuit Court of Appeals remanded other legal challenges to the district court on June 24, 2019, for further legal action, which is yet to be resolved. According to Bureau officials, on June 28, 2019, Commerce asked the Bureau to put its scheduled July 1 start date for printing questionnaires on hold while it considered legal implications of the Supreme Court ruling. On July 2, 2019, Commerce told the Bureau to proceed with printing questionnaires and other materials without the citizenship question on them.
On July 5, 2019, the Department of Justice (DOJ) indicated that, although printing was continuing without the citizenship question, DOJ was evaluating legal options to include the question. However, on July 11, 2019, the President signed Executive Order 13880 stating that the Attorney General and Secretary of Commerce had informed him that the logistics and timing necessary to carry out the census, combined with delays from litigation, left no practical mechanism for including the question on the 2020 Decennial Census. Instead of collecting this information from the census questionnaire, the Executive Order requires all federal agencies to provide data on citizenship status to Commerce using legally available federal records. On the same day, DOJ notified the District Court of the issuance of the Executive Order and the Attorney General’s prepared statement that “as a practical matter, the Supreme Court’s decision closed all paths to adding the question to the 2020 decennial census.” DOJ advised the court of its intent to confer with opposing counsel regarding appropriate next steps in the proceedings. We have not analyzed these recent developments or their implications, if any, for how the Bureau will tabulate its official counts. We will continue to monitor developments for Congress.

The Bureau also faced budgetary uncertainties that, according to the Bureau, led to the curtailment of testing in 2017 and 2018. However, the Consolidated Appropriations Act, 2018 appropriated $2.544 billion for the Periodic Censuses and Programs account, which more than doubled the $1.251 billion requested in the President’s Fiscal Year 2018 Budget. According to the explanatory statement accompanying the act, the appropriation, which is available through fiscal year 2020, was provided to ensure the Bureau has the necessary resources to immediately address any issues discovered during operational testing, and to provide a smoother transition between fiscal year 2018 and fiscal year 2019. The availability of those resources enabled the Bureau to continue preparations for the 2020 Census during the 35 days from December 2018 to January 2019 when appropriations lapsed for the Bureau and a number of other federal agencies. Moreover, the Consolidated Appropriations Act, 2019 appropriated $3.551 billion for the Periodic Censuses and Programs account. According to Bureau officials, this level of funding for fiscal year 2019 is sufficient to carry out 2020 Census activities as planned.

Importantly, the census is conducted against a backdrop of immutable deadlines. In order to meet the statutory deadline for completing the enumeration, census activities need to take place at specific times and in the proper sequence. Thus, it is absolutely critical for the Bureau to stay on schedule. Figure 2 shows dates for selected decennial events.

The Bureau Has Begun Opening Offices and Hiring Temporary Staff

The Bureau has begun to open its area census offices (ACO) for the 2020 Census. It has signed leases for all 248 ACOs, 39 of which will be open for the address canvassing operation set to begin in August 2019, in which staff verify the location of selected housing units. The remaining 209 offices will begin opening this fall. In 2010 the Bureau opened 494 census offices. The Bureau has been able to reduce its infrastructure because it is relying on automation to assign work and to record payroll. As a result, there is less paper—field assignments, maps, and daily payroll forms—to process manually.
For the 2020 Census, the Bureau is refining its recruiting and hiring goals, but tentatively plans to recruit approximately 2.24 million applicants and to hire over 400,000 temporary field staff from that applicant pool for two key operations: address canvassing and nonresponse follow-up, in which staff visit households that do not return census forms to collect data in person. In 2010 the Bureau recruited 3.8 million applicants and hired 628,000 temporary workers to conduct the address canvassing and nonresponse follow-up field operations. According to Bureau officials, the Bureau has reduced the number of temporary staff it needs to hire because automation has made field operations more efficient and there is less paper. As of July 2019, the Bureau reported that for all 2020 Census operations it had processed just over 500,000 applicants.

In addition, the Bureau was seeking to hire approximately 1,500 partnership specialists by the end of June 2019 to help increase census awareness and participation in minority communities and hard-to-reach populations. The Bureau reported that as of July 6, 2019, it had hired 903 partnership specialists, and as of July 17, 2019, another 872 applicants were waiting to have their background checks completed. According to Bureau officials, hiring data are based on payroll dates generated biweekly, while background check data are tracked internally and can be updated daily. The Bureau did not meet its June 30 hiring goal, and told us that it expected to have all partnership specialists on board by September 1, 2019. Among other things, partnership specialists are expected to either provide or identify partners to help provide supplemental language support to respondents locally in over 100 different languages. We will continue to monitor the Bureau’s progress in meeting its partnership specialist staffing goals and addressing any turnover that takes place. Hiring partnership specialists in a timely manner and maintaining adequate partnership specialist staffing levels are key to the Bureau’s ability to carry out its planned outreach efforts, especially to hard-to-count communities. Moreover, Bureau officials stated that the current economic environment (i.e., the low unemployment rate compared to the economic environment of the 2010 Census) has not yet impacted their ability to recruit staff. The Bureau will continue to monitor the impact of low unemployment on its ability to recruit and hire at the local and regional levels.

The Bureau Plans to Rely Heavily on IT for the 2020 Census

For the 2020 Census, the Bureau is substantially changing how it intends to conduct the census, in part by re-engineering key census-taking methods and infrastructure and making use of new IT applications and systems. For example, the Bureau plans to offer an option for households to respond to the survey via the internet and to enable field-based enumerators to use applications on mobile devices to collect survey data from households. To do this, the Bureau plans to utilize 52 new and legacy IT systems, and the infrastructure supporting them, to conduct the 2020 Census. A majority of these 52 systems have been tested during operational tests in 2017 and 2018. For example, the Bureau conducted its 2018 End-to-End test, which included 44 of the 52 systems and was intended to test all key systems and operations in a census-like environment to ensure readiness for the 2020 Census. Nevertheless, additional IT development and testing work needs to take place before the 2020 Census.
Specifically, officials from the Bureau’s Decennial Directorate said they expect that the systems will need to undergo further development and testing due to, among other things, the need to add functionality that was not part of the End-to-End test, scale system performance to support the number of respondents expected during the 2020 Census, and address system defects identified during the 2018 End-to-End test.

To prepare the systems and technology for the 2020 Census, the Bureau is also relying on substantial contractor support. For example, it is relying on contractors to develop a number of systems and components of the IT infrastructure, including the IT platform that is intended to be used to collect data from households responding via the internet and telephone, and for non-response follow-up activities. Contractors are also deploying the IT and telecommunications hardware in the field offices and providing device-as-a-service capabilities by procuring the mobile devices and cellular service to be used for non-response follow-up. In addition to the development of technology, the Bureau is relying on a technical integration contractor to integrate all of the key systems and infrastructure. The contractor’s work is expected to include, among other things, evaluating the systems and infrastructure and acquiring the infrastructure (e.g., cloud or data center) to meet the Bureau’s scalability and performance needs; integrating all of the systems; and assisting with technical, performance and scalability, and operational testing activities.

2020 Census Identified by GAO as a High-Risk Area

In February 2017, we added the 2020 Decennial Census as a high-risk area needing attention from Congress and the executive branch. This was due to significant risks related to, among other things, innovations never before used in prior enumerations, the acquisition and development of IT systems, and expected escalating costs. Among other things, we reported that the commitment of top leadership was needed to ensure the Bureau’s management, culture, and business practices align with a cost-effective enumeration. We also stressed that the Bureau needed to rigorously test census-taking activities; ensure that scheduling adheres to best practices; improve its ability to manage, develop, and secure its IT systems; and have better oversight and control over its cost estimation process.

Our experience has shown that agencies are most successful at removal from our High-Risk List when leaders give top-level attention to the five criteria for removal and Congress takes any needed action. The five criteria for removal that we identified in November 2000 are as follows:

Leadership Commitment. The agency has demonstrated strong commitment and top leadership support.

Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s).

Action Plan. A corrective action plan exists that defines the root causes and solutions, and that provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended.

Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.

Demonstrated Progress. The agency has demonstrated progress in implementing corrective measures and in resolving the high-risk area.

These five criteria form a road map for efforts to improve, and ultimately address, high-risk issues.
Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. As we reported in the March 2019 high-risk report, the Bureau’s efforts to address the risks and challenges for the 2020 Census had fully met one of the five criteria for removal from the High-Risk List—leadership commitment—and partially met the other four, as shown in figure 3. Additional details about the status of the Bureau’s efforts to address this high-risk area are discussed later in this statement.

The 2020 Census Remains High Risk Due to Challenges Facing the Enumeration

The 2020 Census is on our list of high-risk programs because, among other things, (1) innovations never before used in prior enumerations are not expected to be fully tested, (2) the Bureau continues to face challenges in implementing IT systems, (3) the Bureau faces significant cybersecurity risks to its systems and data, and (4) the Bureau’s cost estimate for the 2020 Census was unreliable. If not sufficiently addressed, these risks could adversely impact the cost and quality of the enumeration. Moreover, the risks are compounded by other factors that contribute to the challenge of conducting a successful census, such as the nation’s increasingly diverse population and concerns over personal privacy.

Key Risk #1: The Bureau Redesigned the Census to Control Costs, and Will Need to Take Several Actions to Better Manage Risks

The basic design of the enumeration—mail out and mail back of the census questionnaire with in-person follow-up for non-respondents—has been in use since 1970. However, a lesson learned from the 2010 Census and earlier enumerations is that this traditional design is no longer capable of cost-effectively counting the population. In response to its own assessments, our recommendations, and studies by other organizations, the Bureau has fundamentally re-examined its approach for conducting the 2020 Census. Specifically, its plan for 2020 includes four broad innovation areas: re-engineering field operations, using administrative records, verifying addresses in-office, and developing an internet self-response option (see table 2). The Bureau initially estimated that, if these innovations function as planned, they could result in savings of over $5 billion (in 2020 constant dollars) when compared to its estimates of the cost of conducting the census with traditional methods.

However, in June 2016, we reported that the Bureau’s initial life-cycle cost estimate developed in October 2015 was not reliable and did not adequately account for risk. As discussed earlier in this statement, the Bureau has updated its estimate from $12.3 billion and now estimates a life-cycle cost of $15.6 billion, which would result in a smaller potential savings from the innovative design than the Bureau originally estimated. According to the Bureau, the goal of the cost estimate increase was to ensure quality was fully addressed. While the planned innovations could help control costs, they also introduce new risks, in part because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. Our prior work has shown the importance of the Bureau conducting a robust testing program, including the 2018 End-to-End test.
Rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. To address some of these challenges, we have made numerous recommendations aimed at improving reengineered field operations, using administrative records, verifying the accuracy of the address list, and securing census responses via the internet.

The Bureau has held a series of operational tests since 2012, but according to the Bureau, it scaled back its most recent field tests because of funding uncertainties. For example, the Bureau canceled the field components of the 2017 Census Test, including non-response follow-up, a key census operation. In November 2016, we reported that the cancelation of the 2017 Census Test was a lost opportunity to test, refine, and integrate operations and systems, and that it put more pressure on the 2018 End-to-End test to demonstrate that enumeration activities will function under census-like conditions as needed for 2020. However, in May 2017, the Bureau scaled back the operational scope of the 2018 End-to-End test and, of the three planned test sites, only the Rhode Island site would fully implement the test. The Washington and West Virginia sites would test just one field operation. In addition, due to budgetary concerns, the Bureau delayed ramp-up and preparations for its coverage measurement operation (and the technology that supports it) and removed it from the scope of the test. However, removal of the coverage measurement operation did not affect testing of the delivery of apportionment or redistricting data.

Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, in part because the 2018 End-to-End test was the last opportunity to demonstrate census technology and procedures across a range of geographic locations, housing types, and demographic groups under decennial-like conditions prior to the 2020 Census. We reported on the 2018 End-to-End test in December 2018 and noted that the Bureau had made progress addressing prior test implementation issues but still faced challenges. As the Bureau studies the results of its testing to inform the 2020 Census, it will be important that it addresses key program management issues that arose during implementation of the test. Namely, by not aligning the skills, responsibilities, and information flows for first-line supervisors during field data collection, the Bureau limited those supervisors’ role in supporting enumerators within the re-engineered field operation. The Bureau also lacked mid-operation training or guidance, which, if implemented in a targeted, localized manner, could have further helped enumerators navigate procedural modifications and any commonly encountered problems when enumerating. It will be important for the Bureau to prioritize its mitigation strategies for these implementation issues so that it can maximize readiness for the 2020 Census.

The Bureau Has Developed Hundreds of Risk Mitigation and Contingency Plans, but Those We Reviewed Were Missing Key Information

To manage risk to the 2020 Census, the Bureau has developed hundreds of risk mitigation and contingency plans. Mitigation plans detail how an agency will reduce the likelihood of a risk event and its impacts, if it occurs.
Contingency plans identify how an agency will reduce or recover from the impact of a risk after it has been realized. In May 2019, we reported that the Bureau had identified 360 active risks to the 2020 Census as of December 2018—meaning the risk event could still occur and adversely impact the census. Of these, 242 met the Bureau’s criteria for requiring a mitigation plan and, according to the Bureau’s risk registers, 232 had a plan (see table 3). In addition, 146 risks met the Bureau’s criteria for requiring a contingency plan and, according to the Bureau’s risk registers, 102 had a plan. Bureau guidance states that these plans should be developed as soon as possible after a risk is added to the risk register, but it does not establish a clear time frame for doing so. Consequently, some risks may go without required plans for extended periods. We found that, as of December 2018, some of the risks without required plans had been added to the Bureau’s risk registers in recent months, but others had been added more than 3 years earlier.

We reviewed the mitigation and contingency plans in detail for six risks that the Bureau identified as among the major concerns that could affect the 2020 Census. These included cybersecurity incidents, late operational design changes, and integration of the 52 systems and 35 operations supporting the 2020 Census. We found that the plans did not consistently include key information needed to manage the risk. For example, the Bureau’s contingency plan for late operational design changes did not include activities specific to the three most likely late operational design changes—including removal of the citizenship question as a result of litigation or congressional action—that the Bureau could carry out to lessen their adverse impact on the enumeration.

We found that these gaps stemmed either from requirements that were missing from the Bureau’s decennial risk management plan or from risk owners—the individuals assigned to manage each risk—not fulfilling all of their risk management responsibilities. Bureau officials said that risk owners were aware of these responsibilities but did not always fulfill them given competing demands. Bureau officials also said that they are managing risks to the census, even if that is not always reflected in their mitigation and contingency plans. However, if such actions are reflected in disparate documents or are not documented at all, then decision makers are left without an integrated and comprehensive picture of how the Bureau is managing risks to the census. We made seven recommendations to improve the Bureau’s management of risks to the 2020 Census, including that the Bureau develop mitigation and contingency plans for all risks that require them, establish a clear time frame for plan development, and ensure that the plans have the information needed to manage the risk. Commerce agreed with our recommendations and said it would develop an action plan to address them.

Key Risk #2: The Bureau Faces Challenges in Implementing IT Systems

We have previously reported that the Bureau faces challenges in managing and overseeing IT programs, systems, and contractors supporting the 2020 Census. Specifically, we have noted challenges in the Bureau’s efforts to manage, among other things, the schedules and contracts for its systems. As a result of these challenges, the Bureau is at risk of being unable to fully implement the systems necessary to support the 2020 Census and conduct a cost-effective enumeration.
The Bureau Has Made Initial Progress against Its Revised Development and Testing Schedule, but Risks Missing Near-term Milestones

To help improve its implementation of IT for the 2020 Census, the Bureau revised its systems development and testing schedule. Specifically, in October 2018, the Bureau organized the development and testing schedule for its 52 systems into 16 operational deliveries. Each of the 16 operational deliveries has milestone dates for, among other things, development, performance and scalability testing, and system deployment. According to Bureau officials in the Decennial Directorate, the schedule was revised, in part, due to schedule management challenges experienced, and lessons learned, while completing development and testing during the 2018 End-to-End test.

The Bureau has made initial progress in executing work against its revised schedule. For example, the Bureau completed development of the systems in the first operational delivery—for 2020 Census early operations preparations—in July 2018, and deployed these systems into production in October 2018. However, our current work has determined that the Bureau is at risk of not meeting several near-term systems testing milestones. As of June 2019, 11 systems that are expected to be used in a total of five operational deliveries were at risk of not meeting key milestones for completing system development, performance and scalability testing, and/or integration testing. These 11 systems are needed for, among other things, data collection for operations, business and support automation, and customer support during self-response. Figure 4 presents an overview of the status for all 16 operational deliveries, as of June 2019.

The at-risk systems previously discussed add uncertainty to a highly compressed time frame over the next 6 months. Importantly, between July and December 2019, the Bureau is expected to be in the process of integration testing the systems in 12 operational deliveries. Officials from the Bureau’s integration contractor noted concern that the current schedule leaves little room for any delays in completing the remaining development and testing activities. In addition to managing the compressed testing time frames, the Bureau also has to quickly finalize plans related to its IT infrastructure. For example, as of June 2019, the Bureau stated that it was still awaiting final approval for its Trusted Internet Connection. Given that these plans may impact systems being tested this summer or deployed into production for the address canvassing operation in August 2019, it is important that the Bureau quickly addresses this matter.

Our past reporting noted that the Bureau faced significant challenges in managing its schedule for system development and testing that occurred in 2017 and 2018. We reported that, while the Bureau had continued to make progress in developing and testing IT systems for the 2020 Census, it had experienced delays in developing systems to support the 2018 End-to-End test. These delays compressed the time available for system and integration testing and for security assessments. In addition, several systems experienced problems during the test. We noted then, and reaffirm now, that continued schedule management challenges may compress the time available for the remaining system and integration testing and increase the risk that systems may not function or be as secure as intended. The Bureau has acknowledged that it faces risks to the implementation of its systems and technology.
As of May 2019, the Bureau had identified 17 high risks related to IT implementation that may have substantial technical and schedule impacts if realized. Taken together, these risks represent a cross-section of issues, such as schedule delays for a fraud-detection system, the effects of late changes to technical requirements, the need to ensure adequate time for system development and performance and scalability testing, contracting issues, privacy risks, and skilled staffing shortages. Going forward, it will be important that the Bureau effectively manages these risks to better ensure that it meets near-term milestones for system development and testing, and is ready for the major operations of the 2020 Census.

Key Risk #3: The Bureau Faces Significant Cybersecurity Risks to Its Systems and Data

The risks to IT systems supporting the federal government and its functions, including conducting the 2020 Census, are increasing as security threats continue to evolve and become more sophisticated. These risks include insider threats from witting or unwitting employees, escalating and emerging threats from around the globe, and the emergence of new and more destructive attacks. Underscoring the importance of this issue, we have designated information security as a government-wide high-risk area since 1997 and, in our most recent biennial report to Congress, identified ensuring the cybersecurity of the nation as one of nine high-risk areas needing especially focused executive and congressional attention.

Our prior and ongoing work has identified significant challenges that the Bureau faces in securing systems and data for the 2020 Census. Specifically, the Bureau has faced challenges related to completing security assessments, addressing security weaknesses, resolving cybersecurity recommendations from DHS, and addressing numerous other cybersecurity concerns (such as phishing).

The Bureau Has Made Progress in Completing Security Assessments, but Critical Work Remains

Federal law specifies requirements for protecting federal information and information systems, such as those systems to be used in the 2020 Census. Specifically, the Federal Information Security Management Act of 2002 and the Federal Information Security Modernization Act of 2014 (FISMA) require executive branch agencies to develop, document, and implement an agency-wide program to provide security for the information and information systems that support operations and assets of the agency. In accordance with FISMA, National Institute of Standards and Technology (NIST) guidance, and Office of Management and Budget (OMB) guidance, the Bureau’s Office of the Chief Information Officer (CIO) established a risk management framework. This framework requires system developers to ensure that each of the Bureau’s systems undergoes a full security assessment, and that system developers remediate critical deficiencies. According to the Bureau’s risk management framework, the systems expected to be used to conduct the 2020 Census will need to have complete security documentation (such as system security plans) and an approved authorization to operate prior to their use.
As of June 2019, according to the Bureau’s Office of the CIO:

Thirty-seven of the 52 systems have authorization to operate and will not need to be reauthorized before they are used in the 2020 Census;

Nine of the 52 systems have authorization to operate but will need to be reauthorized before they are used in the 2020 Census;

Five of the 52 systems do not have authorization to operate and will need to be authorized before they are used in the 2020 Census; and

One of the 52 systems does not need an authorization to operate before it is used in the 2020 Census.

Figure 5 summarizes the authorization to operate status for the systems being used in the 2020 Census, as reported by the Bureau in June 2019. As we have previously reported, while large-scale technological changes (such as internet self-response) increase the likelihood of efficiency and effectiveness gains, they also introduce many cybersecurity challenges. The 2020 Census also involves collecting personally identifiable information (PII) on over a hundred million households across the country, which further increases the need to properly secure these systems. Thus, it will be important that the Bureau provides adequate time to perform these security assessments, completes them in a timely manner, and ensures that risks are at an acceptable level before the systems are deployed. We have ongoing work examining how the Bureau plans to address both internal and external cyber threats, including its efforts to complete system security assessments and resolve identified weaknesses.

The Bureau Has Identified a Significant Number of Corrective Actions to Address Security Weaknesses, but Has Not Always Been Timely in Completing Them

FISMA requires that agency-wide information security programs include a process for planning, implementing, evaluating, and documenting remedial actions (i.e., corrective actions) to address any deficiencies in the information security policies, procedures, and practices of the agency. Additionally, the Bureau’s framework requires it to track security assessment findings that need to be remediated as plans of action and milestones (POA&M). These POA&Ms are expected to provide a description of the vulnerabilities identified during the security assessment that resulted from a control weakness. As of the end of May 2019, the Bureau had over 330 open POA&Ms to remediate for issues identified during security assessment activities, including ongoing continuous monitoring. Of these open POA&Ms, 217 (or about 65 percent) were considered “high-risk” or “very high-risk.”

While the Bureau established POA&Ms for addressing these identified security control weaknesses, it did not always complete remedial actions in accordance with its established deadlines. For example, of the 217 open “high-risk” or “very high-risk” POA&Ms we reviewed, the Bureau identified 104 as being delayed. Further, 74 of the 104 had missed their scheduled completion dates by 60 or more days. According to the Bureau’s Office of Information Security, these POA&Ms were identified as delayed due to technical challenges or resource constraints to remediate and close them. We previously recommended that the Bureau take steps to ensure that identified corrective actions for cybersecurity weaknesses are implemented within prescribed time frames. As of late May 2019, the Bureau was working to address our recommendation.
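To put the authorization and remediation figures above in perspective, the short Python sketch below recomputes them from the counts cited in this section: the authorization-to-operate status of the 52 systems as of June 2019, and the POA&M counts as of late May 2019. It is a back-of-the-envelope illustration using only the numbers reported here, not Bureau code, and it treats the "over 330" open POA&Ms as exactly 330.

```python
# Back-of-the-envelope illustration using only the counts cited above.
# Not Census Bureau code; "over 330" open POA&Ms is treated as exactly 330.

# Authorization-to-operate (ATO) status of the 52 systems, June 2019.
ato_status = {
    "authorized, no reauthorization needed": 37,
    "authorized, reauthorization needed": 9,
    "not yet authorized": 5,
    "no authorization required": 1,
}
assert sum(ato_status.values()) == 52  # all 52 systems accounted for

needing_action = (ato_status["authorized, reauthorization needed"]
                  + ato_status["not yet authorized"])
print(f"Systems still needing (re)authorization before use: {needing_action} of 52")

# Plan of action and milestones (POA&M) counts, late May 2019.
open_poams, high_risk, delayed, late_60_plus = 330, 217, 104, 74
print(f"High-risk share of open POA&Ms: {100 * high_risk / open_poams:.0f}%")
print(f"Delayed share of high-risk POA&Ms: {100 * delayed / high_risk:.0f}%")
print(f"Delayed items 60+ days late: {100 * late_60_plus / delayed:.0f}%")

# Output:
# Systems still needing (re)authorization before use: 14 of 52
# High-risk share of open POA&Ms: 66%
# Delayed share of high-risk POA&Ms: 48%
# Delayed items 60+ days late: 71%
```

Viewed this way, roughly a quarter of the 52 systems still required authorization work as of mid-2019, nearly half of the highest-risk remediation items were behind schedule, and most of those were more than two months late.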
Until the Bureau resolves identified vulnerabilities in a timely manner, it faces an increased risk, as continuing opportunities exist for unauthorized individuals to exploit these weaknesses and gain access to sensitive information and systems.

The Bureau Is Working with DHS to Improve Its 2020 Census Cybersecurity Efforts, but Lacks a Formal Process to Address DHS’s Recommendations

The Bureau is working with federal and industry partners, including DHS, to support the 2020 Census cybersecurity efforts. Specifically, the Bureau is working with DHS to ensure a scalable and secure network connection for 2020 Census respondents (e.g., a virtual Trusted Internet Connection with the cloud), improve its cybersecurity posture (e.g., risk management processes and procedures), and strengthen its response to potential cyber threats (e.g., federal cyber incident coordination).

Federal law and related standards describe practices for strengthening cybersecurity by documenting or tracking corrective actions. As previously mentioned, FISMA requires executive branch agencies to establish a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in their information security policies, procedures, and practices. Standards for Internal Control in the Federal Government calls for agencies to establish effective internal control monitoring that includes a process to promptly resolve the findings of audits and other reviews. Specifically, agencies should document and complete corrective actions to remediate identified deficiencies on a timely basis. This would include correcting identified deficiencies or demonstrating that the findings and recommendations do not warrant agency action.

Since January 2017, DHS has been providing cybersecurity assistance (including issuing recommendations) to the Bureau in preparation for the 2020 Census. Specifically, DHS has been providing cybersecurity assistance to the Bureau in five areas:

management coordination and executive support, including a CyberStat Review;

cybersecurity threat intelligence and information sharing enhancement through, among other things, a DHS cyber threat briefing to the Bureau’s leadership;

network and infrastructure security and resilience, including National Cybersecurity Protection System (also called EINSTEIN) support;

incident response and management readiness through a Federal Incident Response Evaluation assessment; and

risk management and vulnerability assessments for specific high value assets provided by the Bureau.

In the last 2 years, DHS has provided 42 recommendations to assist the Bureau in strengthening its cybersecurity efforts. Among other things, the recommendations pertained to strengthening cyber incident management capabilities, penetration testing and web application assessments of select systems, and phishing assessments to gain access to sensitive PII. Of the 42 recommendations, 10 resulted from DHS’s mandatory services for the Bureau (e.g., risk management and vulnerability assessments for specific high value assets). The remaining 32 resulted from DHS’s voluntary services for the Bureau (e.g., the Federal Incident Response Evaluation assessment). Due to the sensitive nature of the recommendations, we are not identifying the specific recommendations or specific findings associated with them in this statement.
In April 2019, we reported that the Bureau had not established a formal process for documenting, tracking, and completing corrective actions for all of the recommendations provided by DHS. Accordingly, we recommended that the Bureau implement a formal process for tracking and executing appropriate corrective actions to remediate cybersecurity findings identified by DHS. As of late May 2019, the Bureau was working to address our recommendation. Until the Bureau implements our recommendation, it faces an increased likelihood that findings identified by DHS will go uncorrected and may be exploited to cause harm to the agency's 2020 Census IT systems and gain access to sensitive respondent data. Implementing a formal process would also help to ensure that DHS's efforts result in improvements to the Bureau's cybersecurity posture.

The Bureau Faces Several Other Cybersecurity Challenges in Implementing the 2020 Census

The Bureau faces other substantial cybersecurity challenges in addition to those previously discussed. More specifically, we previously reported that the extensive use of IT systems to support the 2020 Census redesign may help increase efficiency, but that this redesign introduces critical cybersecurity challenges. These challenges include those related to the following:

Phishing. We have previously reported that advanced persistent threats may be targeted against social media web sites used by the federal government. In addition, attackers may use social media to collect information and launch attacks against federal information systems through social engineering, such as phishing. Phishing attacks could target respondents, as well as Bureau employees and contractors. The 2020 Census will be the first one in which respondents will be heavily encouraged to respond via the internet. This will likely increase the risk that cyber criminals will use phishing in an attempt to steal personal information. According to the Bureau, it plans to inform the public of the risks associated with phishing through its education and communication campaigns.

Disinformation from social media. We previously reported that one of the Bureau's key innovations for the 2020 Census is the large-scale implementation of an internet self-response option. The Bureau is encouraging the public to use the internet self-response option through expanded use of social media. However, the public perception of the Bureau's ability to adequately safeguard the privacy and confidentiality of the 2020 Census internet self-responses could be influenced by disinformation spread through social media. According to the Bureau, if a substantial segment of the public is not convinced that the Bureau can safeguard public response data against data breaches and unauthorized use, then response rates may be lower than projected, leading to an increase in cases for follow-up and subsequent cost increases. To help address this challenge, the Bureau stated that it plans to inform the public of the risks associated with disinformation from social media through its education and communication campaigns.

Ensuring that individuals gain only limited and appropriate access to 2020 Census data. The Bureau plans to enable a public-facing website and Bureau-issued mobile devices to collect PII (e.g., name, address, and date of birth) from the nation's entire population—estimated to be over 300 million.
In addition, the Bureau is planning to obtain and store administrative records containing PII from other government agencies to help augment information that enumerators did not collect. The number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. Because of these challenges, we have recommended, among other things, that federal agencies improve their response to information security incidents and data breaches involving PII, and consistently develop and implement privacy policies and procedures. Accordingly, it will be important for the Bureau to ensure that only respondents and Bureau officials are able to gain access to this information, and that enumerators and other employees have access only to the information needed to perform their jobs.

Ensuring adequate control in a cloud environment. The Bureau has decided to use cloud solutions as a key component of the 2020 Census IT infrastructure. We have previously reported that cloud computing has both positive and negative information security implications and, thus, federal agencies should develop service-level agreements with cloud providers. These agreements should specify, among other things, the security performance requirements—including data reliability, preservation, privacy, and access rights—that the service provider is to meet. Without these safeguards, computer systems and networks, as well as the critical operations and key infrastructures they support, may be lost; information—including sensitive personal information—may be compromised; and the agency's operations could be disrupted. Commerce's Office of Inspector General recently identified several challenges the Bureau may face using cloud-based systems to support the 2020 Census. Specifically, in June 2019, the Office of Inspector General identified, among other things, unimplemented security system features that left critical 2020 Census systems vulnerable during the 2018 End-to-End Test and a lack of fully implemented security practices to protect certain data hosted in the 2020 Census cloud environment. Officials from the Bureau agreed with all eight of the Office of Inspector General's recommendations regarding 2020 Census cloud-based systems and identified actions taken to address them.

Ensuring contingency and incident response plans are in place to encompass all of the IT systems to be used to support the 2020 Census. Because of the brief time frame for collecting data during the 2020 Census, it is especially important that systems are available for respondents to ensure a high response rate. Contingency planning and incident response help ensure that, if normal operations are interrupted, network managers will be able to detect, mitigate, and recover from a service disruption while preserving access to vital information. Implementing important security controls, including policies, procedures, and techniques for contingency planning and incident response, helps to ensure the confidentiality, integrity, and availability of information and systems, even during disruptions of service. Without contingency and incident response plans, disruptions to system availability could result in a lower response rate. The Bureau's CIO has acknowledged these cybersecurity challenges and is working to address them, according to Bureau documentation.
In addition, we have ongoing work looking at many of these challenges, including the Bureau's plans to protect PII, use a cloud-based infrastructure, and recover from security incidents and other disasters.

Key Risk #4: The Bureau Will Need to Control Any Further Cost Growth and Develop Cost Estimates That Reflect Best Practices

Since 2015, the Bureau has made progress in improving its ability to develop a reliable cost estimate. We have reported on the reliability of the $12.3 billion life-cycle cost estimate released in October 2015 and the $15.6 billion revised cost estimate released in October 2017. In 2016, we reported that the October 2015 version of the Bureau's life-cycle cost estimate for the 2020 Census was not reliable. Specifically, we found that the 2020 Census life-cycle cost estimate partially met two of the characteristics of a reliable cost estimate (comprehensive and accurate) and minimally met the other two (well-documented and credible). We recommended that the Bureau take specific steps to ensure its cost estimate meets the characteristics of a high-quality estimate. The Bureau agreed and has taken action to improve the reliability of the cost estimate. In August 2018, we reported that while improvements had been made, the Bureau's October 2017 cost estimate for the 2020 Census did not fully reflect all the characteristics of a reliable estimate. (See figure 6.) In order for a cost estimate to be deemed reliable as described in GAO's Cost Estimating and Assessment Guide, and thus to effectively inform 2020 Census annual budgetary figures, the cost estimate must meet or substantially meet the following four characteristics:

Well-Documented. Cost estimates are considered valid if they are well-documented to the point they can be easily repeated or updated and can be traced to original sources through auditing, according to best practices.

Accurate. Accurate estimates are unbiased and contain few mathematical mistakes.

Credible. Credible cost estimates must clearly identify limitations due to uncertainty or bias surrounding the data or assumptions, according to best practices.

Comprehensive. To be comprehensive, an estimate should have enough detail to ensure that cost elements are neither omitted nor double-counted, and all cost-influencing assumptions should be detailed in the estimate's documentation, among other things, according to best practices.

The 2017 cost estimate only partially met the characteristic of being well-documented. In general, some documentation was missing, inconsistent, or difficult to understand. Specifically, we found that source data did not always support the information described in the basis of estimate document or could not be found in the files provided for two of the Bureau's largest field operations: Address Canvassing and Non-Response Follow-Up. We also found that some of the cost elements did not trace clearly to supporting spreadsheets and assumption documents. Failure to document an estimate in enough detail makes it more difficult to replicate calculations or to detect possible errors in the estimate; reduces transparency of the estimation process; and can undermine the ability to use the information to improve future cost estimates or even to reconcile the estimate with another independent cost estimate. The Bureau told us it would continue to make improvements to ensure the estimate is well-documented.
Increased Costs Are Driven by an Assumed Decrease in Self-Response Rates and Increases in Contingency Funds and IT Cost Categories

The 2017 life-cycle cost estimate includes much higher costs than those included in the 2015 estimate. The largest increases occurred in the Response, Managerial Contingency, and Census/Survey Engineering categories. For example, increased costs of $1.3 billion in the Response category (costs related to collecting, maintaining, and processing survey response data) were in part due to reduced assumptions for self-response rates, leading to increases in the amount of data collected in the field, which is more costly to the Bureau. Contingency allocations increased overall from $1.35 billion in 2015 to $2.6 billion in 2017, as the Bureau gained a greater understanding of risks facing the 2020 Census. Increases of $838 million in the Census/Survey Engineering category were due mainly to the cost of an IT contract for integrating decennial survey systems that was not included in the 2015 cost estimate. Bureau officials attribute a decrease of $551 million in estimated costs for Program Management to changes in the categorization of costs associated with risks. Specifically, in the 2017 version of the estimate, estimated costs related to program risks were allocated to their corresponding work breakdown structure (WBS) element. Figure 7 shows the change in cost by WBS category for 2015 and 2017. More generally, factors that contributed to cost fluctuations between the 2015 and 2017 cost estimates include the following:

Changes in assumptions. Among other changes, a decrease in the assumed rate for self-response from 63.5 percent in 2015 to 60.5 percent in 2017 increased the cost of collecting responses from nonresponding housing units.

Improved ability to anticipate and quantify risk. In general, contingency allocations designed to address the effects of potential risks increased overall from $1.3 billion in 2015 to $2.6 billion in 2017.

An overall increase in IT costs. IT cost increases, totaling $1.59 billion, represented almost 50 percent of the total cost increase from 2015 to 2017.

More defined contract requirements. Bureau documents described an overall improvement in the Bureau's ability to define and specify contract requirements. This resulted in updated estimates for several contracts, including for the Census Questionnaire Assistance contract.

However, while the Bureau has been able to better quantify risk, in August 2018 we also reported that the Secretary of Commerce included a contingency amount of about $1.2 billion in the 2017 cost estimate to account for what the Bureau refers to as "unknown unknowns." According to Bureau documentation, these include such risks as natural disasters or cyber attacks. The Bureau provides a description of how the risk contingency for "unknown unknowns" is calculated; however, this description does not clearly link calculated amounts to the risks themselves. Thus, only $14.4 billion of the Bureau's $15.6 billion cost estimate has justification. According to Bureau officials, the cost estimate remains at $15.6 billion; however, they stated that they are managing the 2020 Census at a lower level of funding—$14.1 billion.
In addition, they said that, at this time, they do not plan to request funding for the $1.2 billion contingency fund for unknown unknowns or $369 million in funding for selected discrete program risks for what-if scenarios, such as an increase in the wage rate or additional supervisors needed to manage field operations. Instead of requesting funding for these contingencies upfront, the Bureau plans to work with OMB and Commerce to request additional funds if the need arises. According to Bureau officials, they anticipate that the remaining $1.1 billion in contingency funding included in the $14.1 billion will be sufficient to carry out the 2020 Census. In June 2016, we recommended the Bureau improve control over how risk and uncertainty are accounted for. This prior recommendation remains valid given that the life-cycle cost estimate still includes the $1.2 billion unjustified contingency fund for "unknown unknowns." Moreover, given the cost growth between 2015 and 2017, it will be important for the Bureau to monitor costs in real time, as well as to document, explain, and review variances between planned and actual costs. In August 2018, we reported that the Bureau had not been tracking variances between estimated life-cycle costs and actual expenses. Tools to track variance enable management to measure progress against planned outcomes and will help inform the 2030 Census cost estimate. Bureau officials stated that they already have systems in place that can be adapted for tracking estimated and actual costs. We will continue to monitor the status of the tracking system. According to Bureau officials, the Bureau planned to release an updated version of the 2020 Census life-cycle estimate in the spring of 2019; however, they released the update on July 15, 2019. We will review the documentation to see whether the revised estimate addresses our recommendations. To ensure that future updates to the life-cycle cost estimate reflect best practices, it will be important for the Bureau to implement our recommendation related to the cost estimate.

Continued Management Attention Needed to Keep Preparations on Track and Help Ensure a Cost-Effective Enumeration

2020 Challenges Are Symptomatic of Deeper Long-Term Organizational Issues

The difficulties facing the Bureau's preparation for the decennial census in such areas as planning and testing; managing and overseeing IT programs, systems, and contractors supporting the enumeration; developing reliable cost estimates; prioritizing decisions; and managing schedules, among other challenges, are symptomatic of deeper organizational issues. Following the 2010 Census, a key lesson learned for 2020 that we identified was ensuring that the Bureau's organizational culture and structure, as well as its approach to strategic planning, human capital management, internal collaboration, knowledge sharing, capital decision-making, risk and change management, and other internal functions, are aligned toward delivering more cost-effective outcomes. The Bureau has made improvements over the last decade, and continued progress will depend in part on sustaining efforts to strengthen risk management activities, enhancing systems testing, bringing in experienced personnel to key positions, implementing our recommendations, and meeting regularly with officials from its parent agency, Commerce.
Going forward, we have reported that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency officials to (1) leadership commitment, (2) ensuring capacity, (3) developing a corrective action plan, (4) regular monitoring, and (5) demonstrated progress. Although important steps have been taken in at least some of these areas, overall, far more work is needed. We discuss three of the five areas below.

The Secretary of Commerce has successfully demonstrated leadership commitment. For example, the Bureau and Commerce have strengthened this area with executive-level oversight of the 2020 Census by holding regular meetings on the status of IT systems and other risk areas. In addition, in 2017 Commerce designated a team to assist senior Bureau management with cost estimation challenges. Moreover, on January 2, 2019, a new Director of the Census Bureau took office, a position that had been vacant since June 2017.

With regard to capacity, the Bureau has improved its decennial cost estimation process by establishing guidance that includes:

roles and responsibilities for oversight and approval of cost estimation processes;

procedures requiring a detailed description of the steps taken to produce a high-quality cost estimate; and

a process for updating the cost estimate and associated documents over the life of a project.

However, the Bureau continues to experience skills gaps in the government program management office overseeing the $886 million contract for integrating the IT systems needed to conduct the 2020 Census. Specifically, as of June 2019, 14 of 44 positions in this office were vacant.

For the monitoring element, we found that, to track the performance of decennial census operations, the Bureau relied on reports to track progress against pre-set goals for a test conducted in 2018. According to the Bureau, these same reports will be used in 2020 to track progress. However, the Bureau's schedule for developing IT systems during the 2018 End-to-End test experienced delays that compressed the time available for system testing, integration testing, and security assessments. These schedule delays contributed to systems experiencing problems after deployment, as well as cybersecurity challenges. In the months ahead, we will continue to monitor the Bureau's progress in addressing each of the five elements essential for reducing the risk to a cost-effective enumeration.

Further Actions Needed on Our Recommendations

Over the past several years, we have issued numerous reports that underscored the fact that, if the Bureau was to successfully meet its cost savings goal for the 2020 Census, the agency needed to take significant actions to improve its research, testing, planning, scheduling, cost estimation, system development, and IT security practices. As of July 2019, we have made 107 recommendations related to the 2020 Census. The Bureau has implemented 74 of these recommendations, 32 remain open, and one recommendation was closed as not implemented. Of the 32 open recommendations, 10 were directed at improving the implementation of the innovations for the 2020 Census. Commerce generally agreed with our recommendations and is taking steps to implement them.
Moreover, in April 2019 we wrote to the Secretary of Commerce, providing a list of the 12 open 2020-Census-related recommendations that we designated as "priority." Priority recommendations are those recommendations that we believe warrant priority attention from heads of key departments and agencies. We believe that attention to these recommendations is essential for a cost-effective enumeration. The recommendations included implementing reliable cost estimation and scheduling practices in order to establish better control over program costs, as well as taking steps to better position the Bureau to develop an internet response option for the 2020 Census. In addition to our recommendations, to better position the Bureau for a more cost-effective enumeration, on March 18, 2019, we met with OMB, Commerce, and Bureau officials to discuss the Bureau's progress in reducing the risks facing the census. We also meet regularly with Bureau officials and managers to discuss the progress and status of open recommendations related to the 2020 Census, which has resulted in Bureau actions in recent months leading to closure of some recommendations. We are encouraged by this commitment by Commerce and the Bureau to addressing our recommendations. Implementing our recommendations in a complete and timely manner is important because it could improve the management of the 2020 Census and help to mitigate continued risks.

In conclusion, while the Bureau has made progress in revamping its approach to the census, it faces considerable challenges and uncertainties in implementing key cost-saving innovations and ensuring they function under operational conditions; managing the development and testing of its IT systems; ensuring the cybersecurity of its systems and data; and developing a quality cost estimate for the 2020 Census and preventing further cost increases. For these reasons, the 2020 Census is a GAO high-risk area. Going forward, continued management attention and oversight will be vital for ensuring that risks are managed, preparations stay on track, and the Bureau is held accountable for implementing the enumeration as planned. Without timely and appropriate actions, the challenges previously discussed could adversely affect the cost, accuracy, schedule, and security of the enumeration. We will continue to assess the Bureau's efforts and look forward to keeping Congress informed of the Bureau's progress. Chairman Raskin, Ranking Member Roy, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you have any questions about this statement, please contact Robert Goldenkoff at (202) 512-2757 or by email at goldenkoffr@gao.gov or Nick Marinos at (202) 512-9342 or by email at marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this testimony include Jon Ticehurst (Assistant Director); Kate Sharkey (Assistant Director); Ty Mitchell (Assistant Director); Lisa Pearson (Assistant Director); Andrea Starosciak (Analyst in Charge); Christopher Businsky; Jackie Chapin; Jeff DeMarco; Rebecca Eyler; Adella Francis; Scott Pettis; Kayla Robinson; Robert Robinson; Cindy Saunders; Sejal Sheth; Emmy Rhine Paule; and Umesh Thakkar. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The Bureau is responsible for conducting a complete and accurate decennial census of the U.S. population. The decennial census is mandated by the Constitution and provides vital data for the nation. A complete count of the nation's population is an enormous undertaking as the Bureau seeks to control the cost of the census, implement operational innovations, and use new and modified IT systems. In recent years, GAO has identified challenges that raise serious concerns about the Bureau's ability to conduct a cost-effective count. For these reasons, GAO added the 2020 Census to its High-Risk List in February 2017. GAO was asked to testify about the reasons the 2020 Census remains on the High-Risk List and the steps the Bureau needs to take to mitigate risks to a successful census. To do so, GAO summarized its prior work regarding the Bureau's planning efforts for the 2020 Census. GAO also included preliminary observations from its ongoing work examining the IT systems readiness and cybersecurity for the 2020 Census. This information is related to, among other things, the Bureau's progress in developing and testing key systems and the status of cybersecurity risks.

What GAO Found

The 2020 Decennial Census is on GAO's list of high-risk programs primarily because the Department of Commerce's Census Bureau (Bureau) (1) is using innovations that are not expected to be fully tested, (2) continues to face challenges in implementing information technology (IT) systems, and (3) faces significant cybersecurity risks to its systems and data. Although the Bureau has taken initial steps to address risk, additional actions are needed as these risks could adversely impact the cost, quality, schedule, and security of the enumeration.

Innovations. The Bureau is planning several innovations for the 2020 Census, including allowing the public to respond using the internet. These innovations show promise for controlling costs, but they also introduce new risks, in part, because they have not been used extensively, if at all, in earlier enumerations. As a result, testing is essential to ensure that key IT systems and operations will function as planned. However, citing budgetary uncertainties, the Bureau scaled back operational tests in 2017 and 2018, missing an opportunity to fully demonstrate that the innovations and IT systems will function as intended during the 2020 Census. To manage risk to the census, the Bureau has developed hundreds of mitigation and contingency plans. To maximize readiness for the 2020 Census, it will also be important for the Bureau to prioritize among its mitigation and contingency strategies those that will deliver the most cost-effective outcomes for the census.

Implementing IT systems. The Bureau plans to rely heavily on IT for the 2020 Census, including a total of 52 new and legacy IT systems and the infrastructure supporting them. To help improve its implementation of IT, in October 2018, the Bureau revised its systems development and testing schedule to reflect, among other things, lessons learned during its 2018 operational test. However, GAO's ongoing work has determined that the Bureau is at risk of not meeting near-term IT system development and testing schedule milestones for five upcoming 2020 Census operational deliveries, including self-response (e.g., the ability to respond to the 2020 Census through the internet).
These schedule management challenges may compress the time available for the remaining system development and testing, and increase the risk that systems will not function as intended. It will be important that the Bureau effectively manage IT implementation risk to ensure that it meets near-term milestones for system development and testing, and that it is ready for the major operations of the 2020 Census.

Cybersecurity. The Bureau has established a risk management framework that requires it to conduct a full security assessment for nearly all the systems expected to be used for the 2020 Census and, if deficiencies are identified, to determine the corrective actions needed to remediate those deficiencies. As of the end of May 2019, the Bureau had over 330 corrective actions from its security assessments that needed to be addressed, including 217 that were considered "high-risk" or "very high-risk." However, of these 217 corrective actions, the Bureau identified 104 as being delayed. Further, 74 of the 104 were delayed by 60 or more days. According to the Bureau, these corrective actions were delayed due to technical challenges or resource constraints. GAO recently recommended that the Bureau take steps to ensure that identified corrective actions for cybersecurity weaknesses are implemented within prescribed time frames. Resolving identified vulnerabilities in a more timely manner can help reduce the risk that unauthorized individuals may exploit weaknesses to gain access to sensitive information and systems. To its credit, the Bureau is also working with the Department of Homeland Security (DHS) to support its 2020 Census cybersecurity efforts. For example, DHS is helping the Bureau ensure a scalable and secure network connection for the 2020 Census respondents and strengthen its response to potential cyber threats. During the last 2 years, as a result of these activities, the Bureau has received 42 recommendations from DHS to improve its cybersecurity posture. GAO recently recommended that the Bureau implement a formal process for tracking and executing appropriate corrective actions to remediate cybersecurity findings identified by DHS. Implementing the recommendation would help better ensure that DHS's efforts result in improvements to the Bureau's cybersecurity posture.

In addition to addressing risks that could affect innovations and the security of the enumeration, the Bureau has the opportunity to improve its cost estimating process for the 2020 Census, and ultimately the reliability of the estimate itself, by reflecting best practices. In October 2017, the 2020 Census life-cycle cost estimate was updated and is now projected to be $15.6 billion, a more than $3 billion (27 percent) increase over its earlier estimate. GAO reported in August 2018 that although the Bureau had taken steps to improve its cost estimation process for 2020, it needed to implement a system to track and report variances between actual and estimated cost elements. According to Bureau officials, they planned to release an updated version of the 2020 Census life-cycle estimate in the spring of 2019; however, they released the update on July 15, 2019. GAO will review the released documentation to see whether the revised estimate addresses the recommendations. To ensure that future updates to the life-cycle cost estimate reflect best practices, it will be important for the Bureau to implement GAO's recommendation related to the cost estimate.
Over the past decade, GAO has made 107 recommendations specific to the 2020 Census to help address these risks and other concerns. The Department of Commerce has generally agreed with these recommendations and has taken action to address many of them. However, as of July 2019, 32 of the recommendations had not been fully implemented. While all 32 open recommendations are important for a high-quality and cost-effective enumeration, 10 are directed at managing the risks introduced by the Bureau's planned innovations for the 2020 Census. To ensure a high-quality and cost-effective enumeration, it will be important for the Bureau to address these recommendations.

What GAO Recommends

Over the past decade, GAO has made 107 recommendations specific to the 2020 Census to help address issues raised in this and other products. The Department of Commerce has generally agreed with the recommendations. As of July 2019, 32 of the recommendations had not been fully implemented.
Background

Mérida Initiative Projects

There were 445 State/INL and USAID Mérida Initiative projects active from fiscal year 2014 through fiscal year 2018, which includes some projects that started before this period and some that continued after this period. State/INL funded 388 of the projects, and USAID funded 57. USAID's projects tended to be larger, with higher funding amounts than State/INL projects. State/INL projects generally focused on providing training and assistance to Mexican officials from the justice sector, border security, military, and law enforcement, as well as equipment, including for forensic laboratories, drug detection, and border surveillance. USAID projects were intended to engage with Mexican government institutions, civil society organizations, and the private sector to address corruption, promote trust in government, or prevent crime and violence, such as through skill building for youth, efforts to advance human rights, or technical support for judicial system development. State/INL allocated about $542 million and USAID allocated about $182 million for assistance to Mexico under the Mérida Initiative from fiscal year 2014 through fiscal year 2018.

Mérida Initiative Oversight

State/INL and USAID are the lead U.S. agencies for developing the Mérida Initiative's programming. In these roles, State/INL and USAID work with Government of Mexico officials to outline plans, goals, and objectives for Mérida Initiative projects. State/INL and USAID both manage and fund the Mérida Initiative with the support of a wide range of project implementers, including DOJ, DHS, and DOD, as well as private contractors, nongovernmental organizations, and international organizations. State/INL and USAID implement Mérida Initiative projects primarily through contracts, grants, and agreements with international organizations. State/INL also implements some Mérida Initiative projects through interagency agreements with other U.S. agencies (e.g., DOJ, DHS, and DOD). State/INL and USAID contracting, grant, and agreement officers are responsible for administering and overseeing contracts, grants, and other agreements that the agencies award, including for Mérida Initiative projects. They delegate the day-to-day monitoring responsibilities to agency officials located in Mexico City, particularly State/INL and USAID Contracting Officer Representatives (COR) for contracts, State/INL Grant Officer Representatives (GOR) for grants, State/INL Agreement Officer Representatives (AOR) for interagency agreements or letters of agreement with international organizations, and USAID AORs for grants and cooperative agreements, according to agency officials. Key monitoring responsibilities of the CORs, GORs, and AORs typically include reviewing quarterly, annual, and other progress reports submitted by project implementers; ensuring other required documents are submitted; communicating with the implementers on the status of assistance activities; and conducting site visits, among other things.

Key Practices for Monitoring Foreign Assistance Projects

In 2019, we reported on 14 leading practices for monitoring foreign assistance that agencies should incorporate in their monitoring policies to help ensure that they effectively manage foreign assistance, address impediments, and meet their assistance goals. From these leading practices, we derived eight key practices that can help agencies monitor implementation and performance at the project level.
To facilitate discussing these key monitoring practices, we grouped them into three areas: (1) assigning monitoring duties to qualified staff, (2) planning a monitoring approach, and (3) monitoring project implementation. (See table 1.) These practices are generally consistent with the Office of Management and Budget's guidelines for federal departments and agencies that administer United States foreign assistance and related guidance, as well as State's and USAID's monitoring policies.

For Projects We Reviewed, State Generally Followed Key Monitoring Practices About Half of the Time, but Did Not Consistently Track Performance Measures

We reviewed 15 of State/INL's high-dollar-value Mérida Initiative projects to assess the extent to which State/INL followed key practices for monitoring foreign assistance projects in the areas of assigning monitoring duties to qualified staff, planning a monitoring approach, and monitoring project implementation. For these projects, the agency generally followed the key practices about half of the time, as shown in figure 1, and for a subset of four selected projects, it did not consistently track performance data or compare them to established performance measures. State/INL does not have procedures in place for monitoring staff to consistently follow all the key practices. Instead, officials said they focused on tracking implementation of the projects' activities. Consistently following key monitoring practices would allow State/INL to stay well informed of projects' performance, take corrective action when necessary, and help ensure that projects achieve their intended results.

For Projects We Reviewed, State Generally Followed Key Monitoring Practices about Half of the Time

Assigning Monitoring Duties to Qualified Staff

State/INL generally followed key practices for assigning monitoring duties to qualified staff almost always. Assigning staff with the appropriate certification helps ensure that they have the necessary knowledge and skills to perform those duties. Establishing roles and responsibilities helps ensure that the assigned monitoring staff are aware of their monitoring duties. State/INL requires that staff responsible for monitoring Mérida Initiative projects be certified as a COR, GOR, or AOR. State/INL also assigns roles and responsibilities to monitoring staff through a designation letter in which a contract or grant officer designates a COR, GOR, or AOR to oversee each project. However, of the 15 projects we reviewed, one had a gap in the documentation for staff certifications, and four had gaps in the documentation of designation letters. For example, in one case State/INL could not provide documentation to demonstrate that the official responsible for monitoring a project on police training had been officially designated or that the official had a valid certification during the full implementation period of the project. According to State/INL staff, the monitoring staff roles and responsibilities are also outlined in other documents, such as the State Department's Foreign Affairs Manual and the AOR Handbook, of which staff are expected to be aware. Figure 2 illustrates the extent to which State/INL followed each related key practice for assigning monitoring duties.

Planning Monitoring Approach

State/INL generally followed key practices for planning a monitoring approach a third of the time.
Two projects—one for helicopter pilot training and the other for aviation maintenance training—did not have monitoring plans and thus did not meet any of the three key practices for planning a monitoring approach. According to a State/INL manager, State/INL is no longer working with this implementer due to long-standing difficulties in obtaining documentation needed to monitor the projects. Most of the other 13 projects partially met the key practices for planning a monitoring approach. For example, goals and objectives were included in planning documents other than the monitoring plan. Furthermore, while only three of the projects had a monitoring plan that addressed risk, we determined that 10 of the projects partially addressed this key practice, because risks were assessed or considered, but the identified risks were not addressed in the monitoring plan. In addition, almost all of the projects had relevant project-level performance measures. Developing a monitoring plan that identifies project objectives helps focus monitoring efforts on assessing projects' outcomes. In addition, identifying and addressing risks in that plan helps focus monitoring efforts on those aspects of project implementation that are most likely to threaten the success of the project in meeting its goals. We did not see evidence that State/INL had procedures in place to ensure that monitoring officials consistently follow key practices in the area of planning a monitoring approach. Figure 3 illustrates the extent to which State/INL followed each related key practice for planning a monitoring approach.

Monitoring Project Implementation

State/INL provided documentation to demonstrate that monitoring managers generally followed key practices for monitoring project implementation about half of the time. Monitoring project implementation helps ensure that projects are meeting their objectives, so that any necessary adjustments or corrective actions can be taken in a timely manner. We found that State/INL did not generally collect all expected progress reports from implementers for seven projects, and of those seven, it did not collect any reports for three projects. Furthermore, State/INL did not provide documentation for eight projects demonstrating that monitoring staff had generally assessed and approved implementers' periodic progress reports. We also found that for seven projects, State/INL did not provide documentation demonstrating that monitoring staff had generally conducted site or field monitoring visits or taken other steps to validate the partner's performance implementing the project. For example, for one project that provided training to Mexican immigration officers on the southern border, State/INL provided only one quarterly progress report of the four we requested for the period of our review. For this project, State/INL also did not provide documentation that monitoring staff had taken steps to review and approve the report or that they had conducted any monitoring site visits. A State/INL official explained that they requested the quarterly reports, but at times implementers did not submit them. Without implementing procedures to consistently collect, assess, and approve performance reports from implementers, monitoring staff may not have sufficient information to assess implementers' performance and determine whether corrective actions are needed.
We did not see evidence that State/INL had procedures in place to ensure that monitoring officials consistently follow key practices in the area of monitoring project implementation. Figure 4 illustrates the extent to which State/INL followed each related key practice for monitoring project implementation.

State/INL Did Not Consistently Track Performance Data against Established Measures for Projects We Reviewed

State/INL monitoring officials did not consistently track performance data against established measures for four Mérida Initiative projects we reviewed; these four projects were a subset of the 15 State/INL projects discussed above. Tracking performance data—a key practice for monitoring project implementation—can provide meaningful information on projects' progress in achieving intended results. The four projects we reviewed included two grants focused on police professionalization; one interagency agreement focused on assistance to forensic laboratories; and one agreement with an international organization focused on conducting a survey on police standards, training, and professionalization. We reviewed how State/INL tracked performance data for these selected projects as part of its efforts to assess and approve implementing partners' periodic performance reports and data, as outlined in the key monitoring practices. Specifically, we analyzed the extent to which State/INL tracked data contained in quarterly progress reports and compared these data to established performance measures. State/INL and the project implementers outlined these performance measures in monitoring documents that the implementers developed and State/INL approved. Some of these projects' monitoring documents also included data sources, data collection frequency, and performance targets. State/INL did not track performance data for two of the four selected projects and tracked such data inconsistently for the other two. As a result, State/INL cannot ensure that it has accurate and reliable performance data for its Mérida Initiative projects. Such data could help State/INL determine whether projects are achieving intended results and take necessary corrective actions to improve project performance over time.

State/INL Did Not Track Performance Measures for Two of the Four State/INL Projects We Reviewed

For the two police professionalization projects we reviewed, State/INL did not track performance data against established performance measures outlined in the project narrative at the start of the projects. Some of these projects' performance measures reflected outputs—such as the number of participants completing at least 25 hours of police training and the number of citizen surveys conducted on public trust of law enforcement. Other performance measures reflected outcomes—such as the percentage of law enforcement officials who feel ready for promotion after completing training and results of citizen surveys on perceived security where law enforcement trainings were conducted. (See examples in table 2.) However, State/INL did not clearly track or reference such performance measures in these two projects' quarterly progress reports. Instead, State/INL provided details in these reports on project activities and training that did not clearly link to the projects' performance measures.
For example, State/INL noted the number of participants who took a specific training course on a certain date, but did not provide the total number of participants' training hours to compare against the performance measure on the total number of participants who completed at least 25 hours of training. State/INL monitoring officials said they had not systematically tracked data on the performance measures of these projects over time, but instead focused on ensuring the trainings were conducted and the number of training participants was tracked. These officials acknowledged the need to improve their tracking of these projects' progress against their performance measures. We also identified information in quarterly progress reports for two projects suggesting that the reports did not accurately reflect project activities in those quarters. For example, for one project, State/INL included identical information in two separate quarterly reports even though the implementer conducted different project activities in those two quarters. Thus, at a minimum, the information in one of the quarterly reports did not accurately reflect the project's activities conducted in that quarter. We found the same issue with another project's reports. State/INL officials said they were not aware that the project information in these reports was identical.

State/INL Tracked Some Performance Measures for Two of the Four State/INL Projects We Reviewed, but Did So Inconsistently

For the two other State/INL projects we reviewed (one forensics laboratory accreditation project and one police survey project), State/INL tracked some performance data but did so inconsistently. These projects' performance measures reflected outputs, such as the number of survey pollsters hired and trained and the number of accredited forensic laboratories that maintain their accreditation. Other performance measures reflected outcomes, such as the percentage of forensic laboratory trainees reporting improved knowledge of subject matter and satisfaction rates for training courses for the forensics laboratory project. (See examples in table 3.) In one of these two projects' quarterly reports, the project implementers inconsistently described and numbered some of the performance measures, and they did not explain the discrepancies. Also, the implementers mentioned different performance measures in different quarterly progress reports—with some measures dropping off in some quarters and new measures appearing in others—without providing a rationale in the reports. As a result, State/INL could not consistently track the progress of some of the performance measures over time. State/INL officials stated that these two implementers only included activities in the quarterly reports that they conducted in that quarter, which would result in different and inconsistent performance measures in each report. In addition, some of the reported project activities did not consistently and clearly align with the performance measures to allow State/INL to track the projects' progress against these measures. For example, some performance measures reflected percentages (e.g., 90 percent of authorities responsible for forensic laboratories routinely attend regional and national conferences), but the report listed the names of conference participants, dates, and locations in a table next to that performance measure.
When asked about these discrepancies, State/INL officials said that they did not ensure that implementers provided complete information to clearly track the projects' progress against performance measures. However, they said that they also conduct monitoring through informal methods not documented in the progress reports, such as phone calls and emails with the implementers. Such informal methods do not provide State/INL with the necessary data to assess a project's performance against its goals.

State/INL Monitoring Management Did Not Ensure Their Staff Tracked Performance Measures

For the four State/INL projects we reviewed, State/INL monitoring managers did not establish procedures to collect and review project performance data, such as the number of people who completed a targeted number of hours of training or the results of training surveys. These managers said they did not prioritize establishing performance tracking procedures and instead focused on the implementation of the projects' activities, such as counting the number of participants who attended one training course in a particular month. For example, while some monitoring staff sent monthly emails to their managers describing project activities, State/INL monitoring managers did not establish procedures—such as holding regular meetings with or requiring reporting from monitoring staff—that focused on tracking the projects' progress against established performance measures.

State/INL Receives Activity Data from Implementers to Monitor Project Implementation

State/INL receives activity data from project implementers that it considers useful in helping the agency monitor the projects' implementation and activities. State/INL officials told us that project activity data in the quarterly progress reports—such as when trainings were conducted and how many people attended—help keep them informed of the projects' implementation and support their monitoring. In addition, since 2015, State/INL Mexico has collected detailed data and information in tracking databases on (1) training events and related surveys on that training, and (2) forensic laboratory accreditations and correctional facility accreditations. The training tracking database contains data on over 6,000 training events, 100,000 trainee records, and over 20,000 survey responses from training event participants. This database can generate numerous reports covering the number of people who completed a specific training course, which training courses a specific person completed, training survey results, and which implementer conducted the training, among other information. State/INL databases also collect information on the status of forensics laboratories and correctional facilities across Mexico that are being accredited through Mérida Initiative projects. The forensics database includes pages for each laboratory with detailed information about the level of accreditation received and the types of trainings conducted, among other things. The correctional facilities database is structured similarly to the laboratories database, with pages for each facility containing detailed information on accreditation status and timeline, among other things. According to State/INL officials, like the training tracking system, the forensics and correctional facilities databases can generate reports, such as monthly progress reports.
Finally, State/INL Mexico is implementing a new cloud-based monitoring database—called DevResults—that will consolidate and track data on activity, output, and outcome indicators for all Mérida Initiative projects. According to State/INL officials, they implemented DevResults so that State/INL could track a project's progress and trends in real time against its performance goals. According to State/INL officials, DevResults included data for 84 projects as of February 2020. They also noted that agency officials and implementers have completed training on DevResults, and additional training will be provided as needed. State/INL officials said they plan to continue adding data for past and present Mérida Initiative projects in 2020.

For Projects We Reviewed, USAID Almost Always Followed Key Monitoring Practices and Tracked Performance Data, but Did Not Develop Monitoring Plans That Address Risk

We reviewed five of USAID's Mérida Initiative projects to assess the extent to which USAID followed key monitoring practices in the areas of assigning monitoring duties to qualified staff, planning a monitoring approach, and monitoring project implementation. For these projects, USAID almost always followed key practices—as shown in figure 5—and for a subset of two selected projects, it consistently tracked project performance. According to USAID officials, USAID management conducted periodic portfolio reviews to ensure that monitoring staff adequately monitored Mérida Initiative projects and followed key practices. However, for all five USAID projects we reviewed, monitoring plans did not address identified risks, which could help the agency allocate monitoring resources to those aspects of the projects that warrant closer scrutiny.

For Projects We Reviewed, USAID Almost Always Followed Key Monitoring Practices

Assigning Monitoring Duties to Qualified Staff

USAID generally established roles and responsibilities for technical staff responsible for monitoring projects, but for two of the five projects we reviewed, it did not maintain documentation showing that it assigned staff with appropriate certifications. Like State/INL, USAID requires that staff responsible for monitoring Mérida Initiative projects be certified as CORs or AORs, which typically includes periodic training in monitoring projects. USAID assigns roles and responsibilities to these staff through a designation letter in which a contract or agreement officer designates a COR or AOR, respectively, to conduct technical oversight of each project. For the five projects we reviewed, USAID properly designated monitoring roles and responsibilities to technical staff; however, there were gaps in certification documentation for technical staff for two projects. For example, we found that the person responsible for monitoring a project promoting justice reform and rule of law in Mexico did not have a valid certificate for 9 months of the project's 4-year period of performance. Maintaining complete documentation of monitoring-related activities helps USAID management ensure adequate, continuous monitoring of projects. According to USAID, the gaps in documentation were caused by staff turnover and trouble accessing the government-wide system for recording staff certifications, which was difficult to access or offline from December 2017 to March 2018. Officials said that once the system to record certificates was brought back online, they were able to track certifications.
Figure 6 illustrates the extent to which USAID followed each related key practice for assigning monitoring duties.

Planning Monitoring Approach

USAID generally developed monitoring plans that included program goals and objectives and project-level performance measures, but the monitoring plans did not address project risks. All five projects generally had a monitoring plan that identified project goals and objectives and relevant project-level performance measures. However, none of the monitoring plans generally addressed identified risks related to achieving project objectives. While USAID provided documentation showing that the agency had conducted various assessments considering risk for each project, the results of these assessments were not addressed in the projects' monitoring plans. For example, for a project to promote justice and rule of law in Mexico, USAID assessed risks relating to terrorism, environmental effects, sustainability, and gender equity in carrying out the project. However, the project's monitoring plan did not address identified risk levels and related monitoring actions designed to mitigate risks identified in these assessments. USAID explained that it addresses ongoing monitoring of risk through several other processes, such as project design, procurement actions, financial management, award management and administration, semi-annual project portfolio reviews, and annual risk-based assessments of USAID's portfolio in Mexico, among others. However, identifying and addressing risks in the monitoring plan can help ensure that monitoring staff are aware of potential impediments to project success about which they need to be vigilant or take steps to mitigate as they monitor the projects. Additionally, determining which activities warrant greater oversight can also help agencies manage monitoring resources cost effectively. Figure 7 illustrates the extent to which USAID followed each related key practice for planning a monitoring approach.

Monitoring Project Implementation

USAID generally followed key practices for monitoring project implementation about two-thirds of the time. We found that USAID collected all progress reports for four of the five projects we reviewed. For two projects, USAID did not provide documentation demonstrating that monitoring staff had generally assessed and approved implementers' periodic progress reports. For all five projects, USAID provided documentation demonstrating that monitoring staff had generally validated implementing partners' performance through site visits. Figure 8 illustrates the extent to which USAID followed each related key practice for monitoring project implementation.

USAID Consistently Tracked Established Performance Measures for the Two Projects We Reviewed

USAID monitoring officials consistently tracked performance data and compared them to established performance measures for the two projects we reviewed; these two projects were a subset of the five USAID projects discussed above. To review the extent to which USAID assessed and approved implementing partners' periodic reports and data—one of the eight key monitoring practices—we determined whether USAID tracked performance data contained in quarterly or annual progress reports. USAID funds one of the two projects through a cooperative agreement focused on strengthening human rights, and the other through a contract focused on improving the criminal justice sector.
USAID and project implementers outlined these projects’ performance measures in project-specific monitoring plans, which were developed at the start of each project or revised after the project was under way. Project implementers developed these plans, and USAID approved them. The plans included details related to the performance measures, such as data sources, data collection frequency, and targets. In accordance with these plans, USAID and project implementers tracked performance measures in annual progress reports, while they primarily tracked detailed project activity in quarterly progress reports. The two USAID projects’ progress reports included tables that tracked project performance. Some of the projects’ performance measures reflected outcomes, such as prosecution rates of Mexican government prosecution units that received technical support and the number of improved measures to address serious human rights violations. Some performance measures reflected outputs, such as the number of Mexican officials trained in human rights advocacy areas. See table 4 for examples of performance measures and information in the progress reports we reviewed. When the implementer and USAID changed performance measures, they also revised project-specific monitoring plans to document these changes. For example, for one project we reviewed, the established measures were no longer effective in measuring progress toward the project’s objectives, according to USAID officials. As a result, the implementer and USAID modified the project’s monitoring plan at least twice, revising the performance measures to better align with the project’s objectives. The subsequent progress reports we reviewed for these projects included data on the revised performance measures. USAID has procedures to help ensure that monitoring staff track performance data. According to USAID officials, USAID began sending out a standard spreadsheet to all Mérida Initiative implementing partners in 2018 that requires them to report performance data on a quarterly or annual basis. USAID uses these spreadsheets to track Mérida Initiative project performance data. Since May 2017, USAID has also conducted 6-month portfolio reviews in which monitoring managers and their staff review project activities and performance data collected for their projects and discuss project successes and challenges. USAID managers told us that they implemented these reviews to help ensure that their staff monitor project performance. Mexico Shares Indicator Data with State/INL for Monitoring the Efforts Related to the Mérida Initiative According to State/INL, the Government of Mexico provides data to State/INL that help the agency monitor its Mérida Initiative assistance efforts and provide insight into the implementation of the initiative overall. State/INL also noted that, in 2014, the agency hired a contractor to work with both the U.S. and Mexican governments to develop a comprehensive set of indicators to evaluate the progress and results of the Mérida Initiative. In 2015, Mexico agreed that it would provide data to State/INL on this set of indicators to demonstrate the effects of the Mérida Initiative, according to State/INL officials. These officials told us that they try to obtain the data on an annual basis. They also noted that the purpose of collecting the data from Mexico was to establish a mechanism to share information on the Mérida Initiative’s effects and to improve U.S.-Mexico cooperation on the initiative.
According to State/INL officials, various Mexican agencies collect the data, such as the Army, Air Force, Navy, Tax Administration Service/Customs, Attorney General’s Office, and National Institute of Statistics and Geography. The Mexico data comprise about 170 indicators (data points) related to the overall goals and program areas of the Mérida Initiative: Counternarcotics/Special Investigations, Criminal Prosecutions, Border Security and Ports of Entry, and Security and Law Enforcement. Some data are closely linked to Mérida Initiative–funded projects, such as the accreditation status of Mexican correctional facilities. Other data provide broader context, such as Mexican civil society’s perception of Mexican agencies. In addition, data, such as the number of accredited forensic laboratories and correctional facilities, may reflect progress in institution building. Other data, such as the number of accounts blocked by the Mexican Financial Intelligence Unit, may reflect operational capacity development. See table 5 below for examples of the indicators, as reported by Mexico to State/INL. State/INL officials said they use the indicator data in discussions with Mexican officials to help monitor the implementation and activities of the Mérida Initiative, including which best practices can be replicated across Mexico. State/INL officials said the data also inform the agency’s internal decision making on which Mérida Initiative programs are effective and which programs it should modify. For example, according to State/INL officials, the indicator data help track the use of equipment donated to Mexico through the Mérida Initiative. If the data show extensive use of equipment, State/INL can use the data to justify a request for additional equipment or to approve maintenance of the equipment, according to agency officials. Conclusions For over a decade, the Mérida Initiative has funded programs intended to address serious challenges to security and the rule of law. As the United States continues to support hundreds of Mérida Initiative projects in Mexico, it is important that State/INL monitor these projects carefully and stay well informed of the projects’ performance to ensure that they are as effective as possible. USAID has established procedures that help ensure that it follows most key monitoring practices, including those related to assigning monitoring duties to qualified staff and monitoring project implementation. State/INL management has not established such procedures for the projects we reviewed, limiting its ability to stay well informed of project performance and make course corrections to improve performance when necessary. While State/INL and USAID often conducted assessments to identify risks that may affect the achievement of project objectives, they generally did not address the results of the risk assessments in projects’ monitoring plans. Developing monitoring plans to address risks would help establish the appropriate level of oversight needed for each project, which in turn could lead to more cost-effective management of these projects. Recommendations for Executive Action We are making the following two recommendations, one to State and one to USAID: The Secretary of State should ensure that State/INL establishes procedures that verify that monitoring officials for Mérida Initiative projects follow the key practices. 
(Recommendation 1) The USAID Administrator should establish procedures to ensure that monitoring officials for Mérida Initiative projects develop monitoring plans that address risks. (Recommendation 2) Agency Comments We provided a draft of this report to State, DOD, DHS, DOJ, and USAID for review and comment. Some of the agencies provided technical comments, which we incorporated as appropriate. State and USAID also provided formal comments, which are reproduced in appendixes III and IV. State agreed with our recommendation to establish procedures for staff monitoring Mérida Initiative projects to follow key practices. State indicated that it is working to create new monitoring and evaluation guidance consolidated across State/INL, based in part on GAO’s leading practices. According to State, the new guidance will address the areas highlighted in this report related to monitoring Mérida Initiative projects. State/INL plans to institute annual program reviews in which monitoring staff will assess project performance, effects, and alignment with current and planned priorities. State indicated that annually reviewing State/INL programming will help identify underperforming projects, give relevant staff a forum to discuss any issues or challenges to implementation and monitoring, and ensure the bureau follows the key monitoring practices outlined in this report. USAID also agreed with our recommendation to establish procedures to ensure that staff monitoring Mérida Initiative projects develop monitoring plans that address risk. USAID indicated that USAID/Mexico is revising its Project and Activity Design Mission Order to incorporate recently issued USAID guidance and address our recommendation. According to USAID, the mission order will provide a framework and guidance to ensure that USAID/Mexico systematically addresses project risks and incorporates them into the respective monitoring plan. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and the USAID Administrator. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2964 or GurkinC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report (1) examines the extent to which the Department of State (State), Bureau of International Narcotics and Law Enforcement Affairs (State/INL), follows key practices in monitoring Mérida Initiative projects and tracks project performance data against established measures; (2) examines the extent to which the United States Agency for International Development (USAID) follows key practices in monitoring Mérida Initiative projects and tracks project performance data against established measures; and (3) describes how State/INL uses data from the Government of Mexico to help monitor the implementation of Mérida Initiative projects. To address these objectives, we reviewed relevant State and USAID agency documents and interviewed agency officials from the Departments of State (State), Homeland Security (DHS), Defense (DOD), and Justice (DOJ), and USAID in Washington, D.C., and officials from State and USAID in Mexico City.
In 2019, we reported on 14 leading practices for monitoring foreign assistance that agencies should incorporate in their monitoring policies to help ensure that they effectively manage foreign assistance, address impediments, and meet their assistance goals. From these leading practices, which are focused on a high-level assessment of agency monitoring policies, we derived eight key practices that can help agencies monitor the implementation and performance of individual projects, such as those implemented under the Mérida Initiative. These eight key practices include those that in our judgment directly relate to monitoring project-level performance activities. We did not address monitoring of financial activities, because our review focused on performance monitoring. We made minor modifications to the key practices selected to reflect the focus of our review. We also grouped the selected key monitoring practices into three areas: (1) assigning monitoring duties to qualified staff, (2) planning a monitoring approach, and (3) monitoring project implementation. To determine the extent to which State/INL and USAID followed key practices in monitoring Mérida Initiative projects, we selected a nongeneralizable sample of 15 high–dollar value State/INL projects and five high–dollar value USAID projects that started between January 1, 2014, and December 31, 2016. (See app. II for details on these 20 projects.) Some of these projects were ongoing after fiscal year 2019. We selected the projects from a list provided by State/INL and USAID. State’s list included 388 projects, and USAID’s list included 57 projects, for a total of 445 projects under the Mérida Initiative. We selected projects implemented through a variety of mechanisms. For State/INL, we selected two letters of agreement with international organizations, four grants, three contracts, two interagency agreements implemented by DOD, two interagency agreements implemented by DHS, and two interagency agreements implemented by DOJ. For USAID, we selected two contracts and three grants. The value of the 15 State projects in our sample is about $88 million, and the value of the five USAID projects in our sample is about $107 million. These 15 State/INL projects represent about 25 percent of the total value of the State/INL projects that started during this period. These five USAID projects were the highest-value contracts and grants/cooperative agreements and represent about 70 percent of the total value of USAID projects that started during this period. Because State/INL implements about 90 percent of all Mérida Initiative projects, we chose a larger sample of State/INL projects than of USAID projects. We assessed the agencies’ monitoring of the 20 selected Mérida Initiative projects against eight key monitoring practices largely derived from GAO’s Leading Practices for Monitoring Foreign Assistance. We reviewed documents to determine the extent to which State/INL and USAID followed the eight key monitoring practices for each of the selected Mérida Initiative projects. Specifically, for each selected project, we requested monitoring plans; work plans; risk assessments; Contract, Grant, or Agreement Officer Representative Certificates; Contract, Grant, or Agreement Officer Representative Designation Letters; implementer progress reports for the latest year of activity of each project (at the time of our review); samples of field or site visit reports; and samples of monitoring emails between monitoring staff and the implementers.
We reviewed available documents as they related to each key practice to determine the extent to which the agency had taken steps to follow and document the key practice for each project. On the basis of our review, we assessed whether the key practices were “generally followed,” “partially followed,” or “not followed.” We rated the extent to which the agency followed each key practice as “generally followed” if we received evidence that all critical elements of the key practice were conducted and documented to a large or full extent; “partially followed” if we received evidence that some but not all critical elements were conducted and documented; and “not followed” if we did not receive evidence that any of the critical elements were conducted and documented. To perform these analyses, two analysts reviewed the documents to rate the extent to which each key practice was met. The analysts worked iteratively, comparing notes and reconciling differences at each stage of the analysis. In addition, GAO staff independent of the two analysts reviewed the final analysis and modified it as appropriate. To determine the extent to which State/INL and USAID tracked project performance, we chose a nongeneralizable subset of the 20 projects listed above. Specifically, we chose six projects—four State/INL projects and two USAID projects—primarily based on their high–dollar values. (See app. II for details on these six projects.) We chose a small subset of State/INL and USAID projects to conduct a detailed analysis of data in the projects’ annual and quarterly reports. Specifically, for the four State/INL projects, we chose high–dollar value projects for each of the following implementing mechanisms: grants, interagency agreements, and agreements with international organizations. We excluded contracts from the State/INL subset sample, because the high–dollar value contracts generally did not have the project-level performance measures needed to assess State’s tracking of performance data. We included a second grant in our sample in place of a contract, because more Mérida Initiative State/INL projects are grants than interagency agreements or agreements with international organizations. As a result, our State/INL sample consisted of two grants, one interagency agreement, and one agreement with an international organization. For the USAID sample, we chose one grant or cooperative agreement and one contract. We did not choose other types of implementing agreements because grants/cooperative agreements and contracts comprise over 98 percent of USAID projects for the timeframe of our review. For both the State/INL and USAID selected projects, we reviewed project monitoring documents—such as project narratives, workplans, and monitoring plans—and identified the performance measures outlined in these documents for each project. We then reviewed these projects’ latest year of implementer quarterly and annual progress reports (at the time of our review), and determined the extent to which State/INL and USAID assessed and approved implementing partners’ periodic performance reports and data, in accordance with the key monitoring practice of assessing and approving performance information. We also met with State/INL and USAID monitoring officials in Washington, D.C., and Mexico to understand how these officials track the performance of these selected projects, including in the projects’ quarterly and annual reports.
We also reviewed the reports to identify any discrepancies or errors. To describe the type of Government of Mexico data that State/INL uses to monitor Mérida Initiative implementation, we reviewed data from fiscal years 2015–2018 related to Mérida Initiative projects collected by the Government of Mexico and shared with State/INL. We also met with State/INL officials in Washington, D.C., and Mexico City to discuss the data, including how they are used and their reliability. After our discussions with State/INL officials, State/INL selected some unclassified examples of the indicators, which we included in our report. The purpose of this component of our review was to describe the nature and use of the Mexico data. We conducted this performance audit from November 2018 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Selected State/INL and USAID Mérida Initiative Projects Assessed against Key Monitoring Practices This appendix provides a list of the 15 Department of State (State), Bureau of International Narcotics and Law Enforcement Affairs (State/INL) Mérida Initiative projects and the five United States Agency for International Development (USAID) Mérida Initiative projects selected for our review. We assessed State/INL and USAID monitoring of these projects against key monitoring practices as described in appendix I. The subset of these projects (four State/INL and two USAID) selected for our analysis of the agencies’ tracking of performance data is noted below. State/INL provided the details in table 6, and USAID provided the details in table 7. Appendix III: Comments from the Department of State Appendix IV: Comments from the U.S. Agency for International Development Appendix V: GAO Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, James Michels (Assistant Director), Francisco Enriquez (Analyst-in-Charge), Terry Allen, Ashley Alley, Lilia Chaidez, Martin De Alteriis, Neil Doherty, Teresa Heger, John Hussey, and Andrew Kincare made key contributions to this report.
Why GAO Did This Study The Mérida Initiative is a bilateral U.S.-Mexico partnership to address crime and violence and enhance the rule of law in Mexico. Through this initiative, managed by State/INL and USAID, the United States has provided a wide range of assistance, including training and equipment. Since fiscal year 2008, U.S. funding for the Mérida Initiative has totaled about $3 billion. GAO has identified key practices for monitoring foreign assistance programs that agencies should implement to address impediments, effectively manage foreign assistance, and meet assistance goals. These practices are generally consistent with policies of State, USAID, and the Office of Management and Budget. GAO was asked to review issues related to Mérida Initiative implementation and objectives. This report examines the extent to which State/INL and USAID follow key practices in monitoring Mérida Initiative projects and track project performance against established measures. GAO reviewed State and USAID documents and data for a nongeneralizable sample of 20 high-dollar value projects, and interviewed officials from State; USAID; and other U.S. agencies in Washington, D.C., and Mexico City. What GAO Found For the 15 Department of State (State) Bureau of International Narcotics and Law Enforcement Affairs (State/INL) projects GAO reviewed, State/INL generally followed key monitoring practices about half of the time. (See figure.) For example, State/INL almost always assigned staff with appropriate qualifications to monitor Mérida Initiative projects. However, for most projects, State/INL did not generally follow the key practices for developing monitoring plans that identify project goals and objectives and address risks to achieving them. Furthermore, State/INL did not consistently track project performance data. By establishing procedures for following key monitoring practices, State/INL would be better positioned to stay well informed of its projects' performance, take corrective action when necessary, and help ensure that projects achieve intended results. For the five United States Agency for International Development (USAID) projects GAO reviewed, USAID almost always followed key monitoring practices and tracked performance data. USAID established procedures, such as periodic portfolio reviews, to ensure its staff consistently monitored projects. While USAID identified risks to implementing projects, it did not address those risks in its monitoring plans. (See figure.) Developing monitoring plans to address risks could help USAID determine the appropriate level of oversight for each Mérida Initiative project and manage monitoring resources more cost effectively. What GAO Recommends GAO is making two recommendations: that State establish procedures to verify that monitoring staff follow key practices, and that USAID ensure that monitoring plans address risks. State and USAID concurred with GAO's recommendations.
Background Key SBA Offices and Resource Partners Involved in Entrepreneurial Programs and Outreach The Office of Entrepreneurial Development, Office of Field Operations, and Office of Strategic Alliances are key SBA offices that administer entrepreneurial programs and manage outreach efforts that could foster entrepreneurship (see fig. 1). Office of Entrepreneurial Development. The Office of Entrepreneurial Development oversees several programs, primarily through a nationwide network of public and private resource partners that offer small business counseling and technical assistance. These resource partners include SBDCs, Women’s Business Centers, and SCORE chapters. The SBDC program receives the majority of entrepreneurial development program funding to provide technical assistance (business counseling and training) to small businesses and aspiring entrepreneurs. SBDC services include helping small businesses access capital, develop and exchange new technologies, and improve business planning, strategy, and financial management. The entities eligible to receive SBDC funding are primarily institutions of higher education. By statute, the amount eligible entities receive is determined by a state population-based funding formula, subject to the amount of an appropriation in any given fiscal year. As a condition of receiving the grant, the recipient or host institution is required to match the funding. The host institution (funding recipient) is responsible for establishing a lead center and network of service centers for a designated service area. The SBDC program has 63 lead centers (generally hosted by institutions of higher education) and more than 900 service centers, including satellite locations. SBA has identified certain special emphasis groups to be targeted for assistance by SBDCs, such as certain populations of business owners. The groups do not include institutions; thus, HBCUs are not included as special emphasis groups. According to SBA officials, SBDCs target underrepresented groups in the population of business owners near HBCUs. Office of Field Operations. SBA also provides services through a network of 10 regional offices and 68 district offices that are led by the Office of Field Operations. SBA district offices serve as the point of delivery for most SBA programs and services. Some district office staff (including business opportunity, lender relations, and economic development specialists) work directly with SBA clients. SBA’s district offices also can initiate and oversee outreach activities to foster entrepreneurship. For example, SBA district offices can implement counseling or training events on their own, participate in such events organized by third parties, or co-sponsor such activities with a third party (for-profit, nonprofit, or government entity) through a co-sponsorship agreement. Moreover, district offices can enter into a 2-year agreement with a nonprofit or government party, known as a strategic alliance memorandum, to foster a working relationship designed to strengthen small business development in a local area. Office of Strategic Alliances. The Office of Strategic Alliances, housed in SBA’s Office of Communication and Public Liaison, reviews co-sponsorship agreements and strategic alliance memorandums drafted by district or program offices.
The co-sponsorship agreements and memorandums are based on an internal SBA template provided by the Office of Strategic Alliances, which also maintains records for both strategic alliance memorandums and co-sponsorship agreements. Figure 2, an interactive map, illustrates locations of SBDC lead centers and SBA district offices in states with HBCUs. See appendix II for additional information on figure 2. Historically Black Colleges and Universities As of December 2018, there were 101 HBCUs, located across 19 states, the District of Columbia, and the U.S. Virgin Islands. As previously discussed, HBCUs educated more than 226,000 African-American students in 2017. HBCUs also have played a critical role in supporting underserved students and communities. We previously reported that a higher proportion of students at private HBCUs (77 percent) received Pell Grants in the 2015–16 school year than students at similar private colleges or universities (43 percent). Pell Grants provide low-income undergraduates who demonstrate financial need with financial assistance to help meet education expenses. Executive Orders on the White House Initiative Executive Order 12232 (1980) established the White House Initiative on Historically Black Colleges and Universities to strengthen the capacity of HBCUs to provide quality education. Subsequent administrations issued executive orders to continue the initiative. Most recently, as expressed in Executive Order 13779 (2017), federal priorities for working with HBCUs encompass two missions: (1) increasing the role of private-sector entities in helping to improve the capacity of HBCUs, and (2) enhancing HBCUs’ capabilities for helping young adults. The initiative has been housed in the Executive Office of the President since 2017, according to representatives from the initiative. The more recent executive orders (from 2002, 2010, and 2017) direct each department and agency designated by the Secretary of Education to prepare an annual plan on efforts to strengthen HBCU capacity. Annual plans are to describe how the department or agency intends to increase the capacity of HBCUs, including by identifying federal programs and initiatives in which HBCUs are underserved or that HBCUs may have underutilized. SBA is one of the agencies designated to prepare an annual agency plan. The more recent executive orders also state that a Board of Advisors on HBCUs (in the Department of Education) shall report annually to the President on the Board’s progress in carrying out its duties, which include advising the President on matters pertaining to strengthening the educational capacity of HBCUs. The current Board was chartered in May 2019. SBA Used Existing Programs and Mechanisms to Engage with HBCUs; Stakeholders’ Collaborative Experiences Varied SBA has used SBDCs, strategic alliance memorandums, and co-sponsored activities to foster entrepreneurship with HBCUs in recent years; stakeholders’ experiences collaborating with SBA varied. Small Business Development Centers. Two HBCUs—Howard University in Washington, D.C., and the University of the Virgin Islands in St. Thomas, U.S. Virgin Islands—have been longstanding host institutions for SBDCs. More specifically, they have been the only host institutions for two lead SBDCs, the District of Columbia SBDC and the Virgin Islands SBDC, for more than 30 years (and remained so as of September 2019). Colleges and universities have predominantly been the institutional hosts of lead SBDCs since the 1980s, according to SBA officials.
According to SBA officials, there is little turnover among institutions hosting lead SBDCs because SBDC program announcements for host institutions are not full and open competitions, and existing host institutions often renew their cooperative agreements to continue operating lead SBDCs. Based on the statutorily defined and population-based allocation formula, the District of Columbia SBDC and the Virgin Islands SBDC together received about 1.3–1.4 percent of the total SBDC funding awarded to institutions of higher education from fiscal year 2008 through 2018. The District of Columbia SBDC and the Virgin Islands SBDC have engaged with HBCU students, alumni, or faculty. As we previously reported, District of Columbia SBDC representatives told us that as of November 2018 they were working with 10–15 Howard University student clients. They also stated they work with all students who come to their center seeking help and do not have a cap on the number of student clients. Similarly, the Virgin Islands SBDC representatives told us as of February 2019 they had made presentations to upper-level business classes and freshmen development seminars at the University of the Virgin Islands. They also counseled students who participated in an annual entrepreneurial competition. They noted that many of the SBDC clients they serve have some affiliation with the university, such as being an alumnus or having attended classes there. In addition to establishing the lead SBDC, the host institution establishes a network of service centers to deliver services, such as counseling and training, within its service area, including at HBCUs. As of September 2018, at least 16 HBCUs hosted SBDC service centers across 11 states. Three HBCU-hosted SBDC service centers we reviewed had engaged with HBCU students, alumni, or faculty. For example, representatives of the Alabama SBDC service center (housed at Alabama State University in Montgomery, Alabama) said the center works with several faculty members who provided training at SBDC workshops and assisted the service center on specialized topics, such as marketing. Through its relationship with faculty members, the Alabama SBDC service center also conducts outreach to students. Similarly, representatives of two service centers for the North Carolina Small Business Technology and Development Center (housed at North Carolina Central University in Durham, North Carolina, and North Carolina A&T State University in Greensboro, North Carolina) said they have worked with students on the respective campuses. For example, the service center at North Carolina Central University has engaged graduate business students on marketing projects. While the number of HBCU-hosted lead SBDCs has remained unchanged in recent years, it is unclear how many HBCUs have hosted service centers. SBA officials told us that the number fluctuates but were unable to provide the list of all service centers in existence prior to 2018. We discuss SBA’s data collection efforts later in this report. Strategic alliance memorandums. From 2013 through 2018, SBA signed at least 35 strategic alliance memorandums with HBCUs (see table 1). SBA signed at least 51 such memorandums with institutions of higher education in states with HBCUs in that period. As we previously reported, strategic alliance memorandums are mechanisms to initiate and formalize a relationship with nonprofit and governmental agencies, although they are not necessary to initiate a relationship.
SBA officials told us the memorandums do not authorize or fund events or activities and are largely symbolic. In August 2019, SBA officials said that numbers of strategic alliance memorandums can fluctuate due to their 2-year duration and changes in SBA administration. Representatives of the six HBCUs we met with that had signed strategic alliance memorandums varied in their assessments of the memorandums’ usefulness. Three of the six HBCUs said they had positive experiences as a result of the memorandums: Representatives of an HBCU in North Carolina said a May 2013 memorandum established a relationship with SBA and provided access to information and resources not otherwise available. Representatives of another HBCU in North Carolina said a 2013 memorandum helped recruit speakers for two entrepreneurship classes. A representative from an HBCU in Tennessee told us that a 2013 memorandum enabled the college to connect students, alumni, and faculty with the resources of SBA’s Tennessee District Office and its resource partners. The representative said a subsequent 2018 memorandum resulted in collaboration with SBA to host a 1-day small business conference on campus. In contrast, representatives of the three other HBCUs either were unaware of the memorandum or said it produced no results: Representatives of two HBCUs (one in Alabama and one in Georgia) told us they were unaware of the signed strategic alliance memorandums (March and April 2013, respectively) due to staffing changes in senior administrative positions. A representative from another HBCU in Georgia told us the school had little involvement with the Georgia SBA district office after signing a memorandum in April 2013. Officials from the district office with whom we spoke agreed with this statement but noted the college had not asked them to participate in any events. Co-sponsored activities. As shown in table 2, from fiscal years 2013 through 2018, SBA signed at least 16 co-sponsorship agreements with HBCUs to jointly conduct activities or events. Twelve of the 16 co-sponsored activities were training or counseling events related to entrepreneurship. SBA signed at least 78 co-sponsorship agreements with institutions of higher education in states with HBCUs in that period. SBA’s Recent Plan and Goals for HBCU-Related Efforts Were Not Communicated to the Field SBA Developed 2018 Plan for the White House Initiative on HBCUs but Was Unable to Provide Plans for Other Years SBA developed a fiscal year 2018 plan for the White House Initiative on HBCUs, in accordance with Executive Order 13779 (2017). SBA’s 2018 plan included two primary goals. The first goal was to raise awareness and provide information to increase the capacity of HBCUs to participate in federally funded programs. More specifically, the plan stated that SBA would engage with HBCUs and provide them with information needed to access and compete for federal grants and contracts. The second goal was to promote collaboration among HBCUs, SBA resource partners, and SBA district offices. For example, the plan stated that SBA would encourage the formation of strategic alliance memorandums between SBA district offices and HBCUs to promote and support entrepreneurship in underserved markets. The plan also stated that SBA would explore and consider partnerships with the National Association for Equal Opportunity in Higher Education, among other organizations, to raise awareness, disseminate information, and share resources among and with HBCUs.
The 2018 plan also described five measures to monitor SBA’s efforts to engage, share information, and increase the capacity of HBCUs. The measures are (1) number of outreach events, (2) number of outreach attendees, (3) number of partnerships established, (4) percentage of engaged HBCUs that pursued federal funding, and (5) percentage of HBCUs engaged that found the information useful. The two previous Executive Orders (from 2002 and 2010) on the White House Initiative on HBCUs also directed designated agencies to prepare annual plans on their efforts to support HBCUs. For years prior to 2018, SBA could provide documentation of plans only for 2011 and 2012. Officials of the Office of Entrepreneurial Development told us they were not aware of records for plans developed for the other years in the period we reviewed (2008–2018). Responsibilities of SBA Offices for Addressing the White House Initiative on HBCUs Are Unclear SBA was unclear about the responsibilities of the offices involved in the agency’s efforts to address the White House Initiative on HBCUs. In March 2019, SBA officials told us the SBA Administrator had designated the Office of Entrepreneurial Development as the lead office for addressing the initiative in 2018. However, the responsibilities of other offices involved in efforts that include HBCUs remain unclear. SBA could not provide documentation of roles, responsibilities, or reporting lines among offices involved in addressing the White House Initiative on HBCUs. For example, the Associate Administrator for the Office of Entrepreneurial Development stated the agency’s interaction with HBCUs occurs through SBA district offices. However, there is no documentation describing how the Office of Entrepreneurial Development and Office of Field Operations, which is responsible for SBA’s district offices, should work together to address the White House Initiative on HBCUs. Moreover, because SBA has not documented specific roles and responsibilities (to include reporting lines), it is unclear how plans prepared for the White House Initiative on HBCUs would be implemented among headquarters, field offices, and resource partners. Additionally, the role of the director of the Office of Faith Based and Community Initiatives in efforts to address the HBCU initiative is unclear. SBA officials told us the Office of Faith Based and Community Initiatives is not involved in administering the initiative. However, the director of that office serves as SBA’s representative to the Interagency Working Group of the White House Initiative on HBCUs. According to SBA officials, the director’s role is to support efforts by the Office of Entrepreneurial Development on the initiative due to staffing shortages. However, officials were unable to tell us in greater detail how the director would provide such support. Federal internal control standards state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. For example, management assigns responsibilities to discrete units to enable the organization to operate in an efficient and effective manner and to delegate authority to key roles throughout the entity. Additionally, management establishes defined reporting lines within an organizational structure so that units can communicate (up, down, and across the organization) the quality information necessary for each unit to fulfill its overall responsibilities.
SBA’s uncertainty about the responsibilities of the offices involved in the White House Initiative on HBCUs may be a result of changes over the years as to which program office was chiefly responsible for the effort. According to the Associate Administrator of the Office of Entrepreneurial Development, the responsibilities for the White House Initiative on HBCUs have resided in various SBA program offices over the years. Moreover, the Associate Administrator told us the Office of Entrepreneurial Development was designated the lead office for the initiative late in the planning process; therefore, it took time to transfer responsibilities for addressing the initiative to the Office of Entrepreneurial Development. As a result, the Office of Entrepreneurial Development had not yet defined the responsibilities of other offices involved in efforts related to the White House Initiative on HBCUs. In September 2019, SBA officials told us they intended to establish an intra-agency working group focused on HBCUs, which would define the roles and responsibilities of headquarters offices related to the initiative. While the Office of Entrepreneurial Development has been designated as the lead office, without clearly assigned roles, responsibilities, and reporting lines for the other offices involved in the White House Initiative on HBCUs, SBA may not be able to effectively implement future plans for the initiative. Additionally, the lack of clearly assigned roles, responsibilities, and reporting lines has resulted in a loss of institutional knowledge on efforts to implement the initiative and may result in further losses. SBA Has Not Communicated Its Plan to Support HBCUs to Key SBDCs and District Offices SBA’s 2018 plan to support HBCUs had the goal of promoting collaboration among HBCUs, SBA resource partners, and SBA district offices. However, SBA headquarters did not communicate its plan for supporting HBCUs to SBDCs and district offices with HBCUs in their service areas. Specifically, SBA officials told us the Office of Entrepreneurial Development, which oversees the SBDC program, did not communicate the 2018 plan to support HBCUs to SBDCs, including the goal to collaborate with HBCUs. None of the SBDC representatives with whom we spoke (for six lead centers and three service centers) reported that SBA communicated information related to the 2018 plan, including goals, measures, or other HBCU-related expectations. Furthermore, none said they received guidance from SBA headquarters related to fostering entrepreneurship with HBCUs, although SBDCs deliver counseling and training to potential and existing business owners. Similarly, the Office of Field Operations, which oversees district offices, did not communicate the 2018 plan to support HBCUs to district offices, including the goal to collaborate with HBCUs, according to SBA officials. While SBA’s district offices deliver most of SBA’s programs and services, none of the representatives of the eight district offices with whom we spoke answered questions related to SBA’s planned efforts to support HBCUs because they stated they were not involved with agency plans for the White House Initiative on HBCUs or were otherwise unable to provide a response. Federal internal control standards state that management should internally communicate the necessary quality information to achieve the entity’s objectives.
For example, management assigns the internal control responsibilities for key roles and communicates quality information up, down, and across reporting lines. This enables personnel to perform key roles in achieving objectives, addressing risks, and supporting the internal control system. According to SBA officials, SBA headquarters did not communicate its plan for supporting HBCUs to SBDCs and district offices due to the timing of the plan’s issuance—the 2018 plan was not finalized until near the end of the fiscal year. SBA officials told us that instead of communicating the 2018 plan at the end of the 2018 fiscal year, officials chose to focus on the upcoming fiscal year and future efforts to support HBCUs. Additionally, SBA officials stated the Office of Field Operations was not involved in addressing the White House Initiative on HBCUs, although the office is responsible for providing policy guidance and oversight to district offices in implementing agency goals and objectives. Because SBA headquarters did not communicate its plan for supporting HBCUs, SBDCs and district offices with HBCUs in their service areas were not aware of the goal to collaborate with HBCUs. Therefore, the agency may have missed opportunities to collaborate with HBCUs and work toward 2018 plan goals, even if for a brief period. As of September 2019, SBA officials told us the agency’s fiscal year 2019 plan (or update) for the White House Initiative on HBCUs had not been finalized. As a result, it was unclear when this plan would be communicated to SBDCs and district offices. If the 2019 and subsequent plans for supporting HBCUs are not communicated to SBDCs and district offices, SBA risks repeating a scenario in which SBDCs and district offices with HBCUs in their service areas are unaware of goals to support HBCUs, and therefore may miss opportunities to engage with HBCUs. SBA’s Data Collection for Its HBCU-Related Efforts Is Limited The extent to which SBA collected information about its programs and activities with HBCUs is limited. More specifically, SBA did not collect relevant information to establish a baseline for performance measures developed in its 2018 plan for the White House Initiative on HBCUs. SBA officials told us that they wanted to use the measures to establish a baseline to better assess progress towards meeting the plan’s goals to support HBCUs in fiscal year 2019. As noted earlier, the 2018 plan’s five measures are (1) number of outreach events, (2) number of outreach attendees, (3) number of partnerships established, (4) percentage of engaged HBCUs that pursued federal funding, and (5) percentage of HBCUs engaged that found the information useful. Number of outreach events and attendees. SBA collects information on the number of outreach events and the number of outreach attendees, but this information is incomplete and not specific to HBCUs. According to SBA officials, SBA district offices are required to collect and report the number of outreach events and attendees to the Office of Field Operations. However, information for outreach activities is reported on an aggregate basis to headquarters and does not specifically identify which institutions hosted or participated in the events. As such, the information reported also does not specifically identify attendees affiliated with an HBCU, such as students, faculty, or alumni. 
Therefore, while representatives of all eight district offices we contacted said they have conducted outreach activities with HBCUs, these activities would not be readily identifiable in the information reported to headquarters. Until July 2019, SBA district offices reported outreach events through the activity contact report. District offices were able to include optional information, such as the event location and organization name for their outreach events, as shown in figure 3. SBA officials told us they can perform manual searches for specific text (such as the specific name of an institution or “HBCU”) included in information reported by district offices that may identify HBCU-related activities. However, they said manual searches are neither easy nor effective and are not routinely performed. Therefore, manually searching for specific text that may be included in information reported by district offices does not lend itself to efficient monitoring of HBCU-related outreach. SBA officials told us a temporary reporting tool (used since late July 2019 in place of the activity contact report) includes an optional data field for district offices to identify whether their activity was HBCU-related. While this additional field may enable users to conduct manual searches for HBCU-related outreach more easily, SBA officials told us the data from the field are still reported in the aggregate to SBA headquarters and therefore remain not readily identifiable as HBCU-related. For more information about SBA’s systems for reporting (including district offices), see appendix IV. Additionally, SBA officials told us headquarters does not have policies or guidance for district offices on systematically collecting or reporting data on their HBCU-related outreach. At least one district office, West Virginia, voluntarily tracks its activities with the HBCUs in its region, using a spreadsheet it developed. Unlike district offices, SBDCs are not required to collect and report information related to outreach (such as the number of outreach events and attendees) to the Office of Entrepreneurial Development. As a result, although all nine SBDCs we contacted conduct outreach to HBCUs, SBA lacks data about these activities. The 2020 funding opportunity for SBDCs requires SBDCs with HBCUs in their states to report outreach events with HBCUs in their semi-annual and final year-end reports. Number of partnerships established. SBA collects information on the number of partnerships established, but this information is incomplete and not specific to HBCUs. According to SBA officials, there is no written definition of partnerships for this measure, but the measure would include both informal and formal partnerships. SBA collects information related to formal partnerships: SBDCs, strategic alliance memorandums, and co-sponsorship agreements. However, these records do not allow for the ready identification of HBCU partnerships because there are no data fields to identify whether the partner is an HBCU. SBDCs. Information on the number of SBDCs hosted by HBCUs is incomplete. SBA’s records do not allow for ready identification of HBCUs as host institutions because there is no field to identify whether a host institution is an HBCU.
While SBA provided information on the number of SBDC lead centers hosted by HBCUs over time, information was not available on the number of SBDC service centers hosted by HBCUs during the time frame of our review (2008–2018) because, according to SBA officials, host institutions for service centers can change over time. Strategic alliance memorandums. Information on the number of strategic alliance memorandums signed with HBCUs is incomplete. In September 2018, SBA provided us a list of HBCUs that signed strategic alliance memorandums from 2008 through 2018, developed by cross-referencing records of memorandums with a list of HBCUs. SBA identified 24 such HBCUs, but we identified an additional three HBCUs that had signed strategic alliance memorandums during this period. In June 2019, SBA provided us a list of all strategic alliance memorandums signed from 2015 through 2018, but we found that a 2016 memorandum with Alabama A&M University (Huntsville, Alabama) was not included. Co-sponsorship agreements. Information on the number of co-sponsorship agreements signed with HBCUs is incomplete. In November 2018, SBA provided us copies of co-sponsorship agreements signed with HBCUs from fiscal years 2013 through 2018 by manually cross-referencing its records with a list of HBCUs. SBA identified 14 such agreements, but we identified an additional two co-sponsorship agreements signed with HBCUs. Usefulness. SBA does not collect information on the percentage of HBCUs engaged in activities that found the resources and information SBA provided to be useful. According to SBA officials, district offices are not required to collect written feedback related to the usefulness of information presented during their activities, such as counseling and training. If district offices solicit feedback, it cannot be distinguished as feedback from HBCUs. For example, SBA district offices may solicit written feedback for co-sponsored activities using a headquarters-developed form. The form does not include fields for participants to identify their affiliation with an HBCU; therefore, feedback received would not be HBCU-specific. Unlike district offices, SBDCs are required to issue evaluation forms for SBDC clients who receive continuous counseling or attend an SBDC training event. For example, representatives from the Alabama SBDC lead center told us they conduct quarterly counseling surveys, which include questions related to the timeliness of the counseling and knowledge of the business advisor. SBDCs report data on client satisfaction rates to SBA headquarters. However, SBA officials told us the feedback-related information SBDCs collect and report to headquarters is not specific to HBCUs, despite the agency’s identification of this measure as relevant in its 2018 plan. Federal internal control standards state that management should use quality information to achieve the entity’s objectives. For example, management obtains relevant data based on identified information requirements, and those data have a logical connection with the requirements. SBA lacks information related to programs and activities with HBCUs because district offices and SBDCs with HBCUs in their service areas have not been required by relevant program offices to collect or report information specific to HBCUs, including information relevant for measures developed in SBA’s 2018 plan.
Without collection of relevant information for its HBCU-related efforts, particularly for measures developed for annual plans, SBA will not be able to establish a baseline of its efforts to support HBCUs. Moreover, without this baseline, SBA cannot determine the extent or effectiveness of its efforts to support and engage HBCUs. Conclusions SBA’s priority goals include reaching emerging markets that are socially and economically disadvantaged. The agency’s efforts related to HBCUs, which educate many low-income students and help support their local communities, can assist the agency in advancing that goal. But while SBA has long participated in the White House Initiative on HBCUs, it has not clearly assigned responsibilities among relevant offices for addressing its plan for the initiative; communicated its plan to support HBCUs to SBA district offices and SBDCs (with HBCUs in their service areas), which deliver training and counseling; and collected relevant information to establish a baseline and track ongoing efforts to support HBCUs. Addressing these issues would better position SBA to assess the extent to which it is reaching its goals specific to supporting HBCUs, as well as agency-wide priority goals to more broadly reach socially and economically disadvantaged communities. Recommendations for Executive Action We are making the following three recommendations to the Small Business Administration: The SBA Administrator should assign and document clear roles, responsibilities, and reporting lines for headquarters offices’ implementation of SBA’s plan for addressing the White House Initiative on HBCUs in a timely manner. (Recommendation 1) The Associate Administrators of the Office of Entrepreneurial Development and Office of Field Operations should communicate planned efforts to support HBCUs, including expectations, goals, and related measures, to the district offices and Small Business Development Centers with HBCUs in their service areas. (Recommendation 2) The Associate Administrator of the Office of Entrepreneurial Development should take and document steps to ensure that the office’s reporting mechanisms collect the information needed to establish a baseline for, and also inform future monitoring and assessment of, efforts to support HBCUs. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to the Small Business Administration for review and comment. In comments reproduced in appendix V, the Small Business Administration agreed with our three recommendations. The Small Business Administration also provided additional examples of recent accomplishments and plans in its comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Acting Administrator of the Small Business Administration and other interested parties. In addition, the report will be made available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or OrtizA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology

You asked us to review SBA's entrepreneurship-related efforts with Historically Black Colleges and Universities (HBCU). This report examines, as did two related products, (1) the Small Business Administration's (SBA) efforts to foster entrepreneurship through key programs and activities with HBCUs in recent years, (2) SBA's agency plans for the White House Initiative on HBCUs, and (3) the extent to which SBA collected and recorded information specific to HBCUs. Our review of efforts to foster entrepreneurship focused on counseling and training.

To address the first objective, we analyzed SBA programs and activities that in previous work we identified as key for fostering entrepreneurship with HBCUs. Key programs and activities are the Small Business Development Center (SBDC) program, strategic alliance memorandums, and co-sponsored activities. We obtained data from SBA's Office of Entrepreneurial Development and Office of Strategic Alliances for these key programs and activities, and identified the participation of institutions of higher education, including HBCUs. We reviewed and analyzed data SBA provided on the host institutions and the total amount of funding obligated to administer the SBDC program in fiscal years 2008–2018 and on signed agreements (strategic alliance memorandums and co-sponsorship agreements) with institutions of higher education (HBCUs and non-HBCUs) in fiscal years 2013–2018. In addition, we conducted an on-site file review to record strategic alliance memorandums signed in fiscal years 2013–2015 that were not readily available electronically. To assess the reliability of these data, we reviewed available data, cross-walked them with publicly available information, if applicable, and requested written responses from SBA officials about the data and their limitations, if any. We determined the data were sufficiently reliable for describing the general scale of SBA's efforts to engage with HBCUs and non-HBCUs.

To address the second objective, we reviewed SBA's 2018 plan for the White House Initiative on HBCUs and documentation of plans for 2011 and 2012, which were the only years in the period of our review (2008–2018) for which SBA could provide documentation of such plans. We also analyzed the three most recent executive orders related to HBCUs to understand the responsibilities expected of federal agencies and identify changes over time. We also reviewed additional documents that SBA provided related to its agency plans, such as efforts to promote small business research programs, and one available annual agency submission (fiscal year 2010) to the White House Initiative on HBCUs on SBA's efforts to support HBCUs. We interviewed SBA officials from the Office of Entrepreneurial Development, Office of Field Operations, and the Office of Strategic Alliances. We also interviewed representatives of six SBDC lead centers and three associated service centers, and eight SBA district offices (with 47 HBCUs in their areas). We selected these SBDC networks and district offices based on a combination of factors, including (1) HBCU participation in an SBDC network (hosting a lead or service center), (2) a high number of HBCUs located in the state, and (3) a high number of agreements (strategic alliance memorandums or co-sponsorship agreements) SBA signed with HBCUs.
Based on data SBA provided of signed strategic alliance memorandums with HBCUs, we selected and contacted 12 HBCUs that had signed a strategic alliance memorandum with SBA between 2013 and 2018 or were located close to SBA offices or resource partners such as SBDCs. We interviewed staff at eight of these HBCUs; the remaining four HBCUs did not respond. We visited the District of Columbia, Maryland, North Carolina, and the U.S. Virgin Islands and met with SBDC representatives, SBA district officials, and HBCU representatives, as applicable. We also interviewed America's SBDCs, an association for SBDCs, and representatives of the following advocacy groups: the Thurgood Marshall College Fund, the United Negro College Fund, and the National Association for Equal Opportunity.

To address the third objective, we reviewed two sets of SBA standard operating procedures to understand information collected and reported for (1) the SBDC program, and (2) outreach activities that include co-sponsored activities and strategic alliance memorandums. We reviewed available program announcements or funding opportunities, and cooperative agreements for recipients of the SBDC program to identify their reporting requirements. We reviewed guidance related to the Office of Field Operations' goals and measures to identify SBA district offices' reporting requirements. In addition, we reviewed user manuals, data entry form templates, and data dictionaries for SBA information systems used by SBDCs and SBA district offices, such as the Entrepreneurial Development Management Information System and the Activity Contact Report, to identify the extent to which data collected and reported included HBCU-related activities.

We reviewed SBA's 2018 plan for the White House Initiative on HBCUs to identify performance measures developed to monitor SBA's HBCU-related efforts. We then analyzed whether the information that SBDCs and SBA district offices are required to report included information for monitoring the performance measures developed in the 2018 plan. We assessed SBA's plans and related efforts against federal internal control standards. Additionally, we interviewed SBA officials from the Office of Entrepreneurial Development, Office of Field Operations, and Office of Strategic Alliances; SBA district office officials from eight offices; and SBDC representatives from nine SBDC networks to better understand the extent to which SBA collects and records information related to their engagement with HBCUs.

We conducted this performance audit from June 2018 to November 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Selected Small Business Administration Resources in Historically Black College and University States

The Small Business Administration's (SBA) Small Business Development Centers (SBDC) and district offices provide services that could foster entrepreneurship. The SBDC program provides technical assistance (business counseling and training) to small businesses and aspiring entrepreneurs.
SBDCs help small businesses access capital, develop and exchange new technologies, and improve business planning, strategy, and financial management, among other services. The recipient or host institution of the SBDC is responsible for establishing a lead center and a network of service centers for a designated service area. SBA district offices serve as the point of delivery for most SBA programs and services. Some district office staff (including business opportunity, lender relations, and economic development specialists) work directly with SBA clients. SBA's district offices also can initiate and oversee outreach activities to foster entrepreneurship.

As of December 2018, there were 101 Historically Black Colleges and Universities (HBCU), located across 19 states, the District of Columbia, and the U.S. Virgin Islands. Table 3 lists those states (in addition to the District of Columbia and the U.S. Virgin Islands), the locations of SBDC lead centers and district offices, and the HBCUs.

Appendix III: Small Business Administration Outreach to Historically Black Colleges and Universities on Two Research Programs

The Small Business Administration's (SBA) 2018 plan for the White House Initiative on Historically Black Colleges and Universities (HBCU) identifies two programs as available resources that are underutilized by HBCUs. More specifically, the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs provide research and development funding to small businesses to develop and commercialize innovative technologies. The programs are authorized by the Small Business Act, and SBA's Office of Investment and Innovation is responsible for their oversight, including coordinating the participating agencies' efforts for the programs.

The SBIR program began in 1982 and has four main purposes: (1) use small businesses to meet federal research and development needs, (2) stimulate technological innovation, (3) increase private-sector commercialization of innovations derived from federal research and development efforts, and (4) foster and encourage technological innovation by small businesses owned by women and disadvantaged individuals. The STTR program began in 1992 and has three main purposes: (1) stimulate technological innovation, (2) foster technological transfer through cooperative research and development between small businesses and research institutions, and (3) increase private-sector commercialization of innovations derived from federal research and development. Both programs are similar in that participating agencies identify topics for research and development projects and support small businesses, but the STTR program requires the small business to partner with a nonprofit research institution, such as a college or university or a federally funded research and development center.

SBA has made some recent efforts to increase awareness among HBCUs about opportunities to access these programs. For example:

SBA participated in the HBCU and Minority-Serving Institution Technology Infusion Road Tour, which was organized by the National Aeronautics and Space Administration. As a part of this effort, SBA participated in presentations on the SBIR and STTR programs at three HBCUs: Tennessee State University (Nashville, Tennessee) in April 2017, Johnson C. Smith University (Charlotte, North Carolina) in February 2018, and Clark Atlanta University (Atlanta, Georgia) in March 2018.
In 2018, SBA conducted an SBIR Road Tour to raise awareness of available research and development funding. As part of the tour, the agency conducted workshops and presentations at two HBCUs: Alabama A&M University (Huntsville, Alabama) and Jackson State University (Jackson, Mississippi).

SBA participated in the 2018 National HBCU Week Conference hosted by the White House Initiative on HBCUs. SBA staff served as speakers and panelists in sessions related to access to federally funded programs (such as SBIR and STTR), and science, technology, engineering, and mathematics.

The North Carolina Small Business and Technology Development Center, an SBA resource partner, hosted a workshop in April 2018 at an HBCU—North Carolina Central University (Durham, North Carolina)—focused on preparing proposals for the SBIR and STTR programs.

Appendix IV: Small Business Administration Information Systems and Forms on Counseling and Training Activities

The Small Business Administration's (SBA) information systems collect a variety of information about SBA's Small Business Development Centers (SBDC) and district office activities, including counseling and training.

Partner Identification Management System. SBDC lead centers are required to maintain their lead center and service center information in SBA's Partner Identification Management System. This information includes each SBDC service location by name, host institution, and physical address. Additionally, SBDC locations are identified as the lead center, service center, or satellite location. There is no data field to identify the type of host institution (such as institution of higher education) or whether the host institution is a Historically Black College and University (HBCU).

Entrepreneurial Development Management Information System. SBDCs are required to report their program data, including counseling and training activities, through SBA's data collection system, known as the Entrepreneurial Development Management Information System. According to the user manual, the system is designed around two primary forms: SBA's counseling information form and SBA's management training report. These forms include data fields for users to enter demographic information on clients and training participants, such as race, gender, and veteran status. Figure 4 shows the data fields related to demographic information included in SBA's counseling information form. Figure 5 shows data fields related to demographic information included in SBA's management training report. There are no data fields for users to enter information related to whether a client or training participant is associated with an institution of higher education, including an HBCU. The Entrepreneurial Development Management Information System enables SBA management to generate reports based on demographic information, such as the number of minority participants trained by SBDCs, but not on the number of HBCU-affiliated clients and training participants.

Activity contact report. Until July 2019, district offices were required to report activities (including training, presentations, and interactions with stakeholders) that aligned with their goals and measures to the Office of Field Operations through SBA's activity contact report. District office staff reported their activities in categories that included general inquiries, training, presentations, counseling and technical assistance, outreach, meetings, and special initiatives.
Activity contact report forms did not include data entry fields specific to the type of institution (such as institutions of higher education, including HBCUs) that hosted or participated in the district office's activity. Additionally, the activity contact report forms did not include data entry fields to identify whether participants were affiliated with an HBCU (students, faculty, or alumni). For some activity contact report categories, the forms included additional data entry fields for the event location and name of the organization involved. For example, the activity contact report form for meetings included optional data fields for the event location and organization name, as shown in figure 6. According to SBA officials, the temporary reporting tool (used since late July 2019 in place of the activity contact report) includes an optional data field for district offices to identify whether their activity was HBCU-related.

Appendix V: Comments from the Small Business Administration

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Lisa Moore (Assistant Director), Chir-Jen Huang (Analyst in Charge), Rachel Beers, John Karikari, Ben Licht, John Mingus, Sulayman Njie, Maria Psara, Barbara Roesmann, Jessica Sandler, Jena Sinkfield, and Andrew Stavisky made key contributions to this report.
Why GAO Did This Study

The 101 HBCUs play an important role in higher education and in their local and regional economies. Among African Americans who obtained a doctorate in science, technology, engineering, or mathematics in 2005–2010, more than one-third earned their undergraduate degrees from an HBCU. SBA is part of a long-standing White House initiative to strengthen the capacity of HBCUs, including their ability to access and participate in federal programs. SBA's mission includes business development, and SBA also works with colleges and universities to provide entrepreneurial training and counseling.

GAO was asked to review SBA's entrepreneurship-related efforts with HBCUs. This report examines (1) SBA efforts to foster entrepreneurship with HBCUs in recent years, (2) SBA's plans for the White House Initiative on HBCUs, and (3) the extent to which SBA collected information specific to HBCUs. GAO analyzed SBA information on HBCU participation in programs and activities for fostering entrepreneurship and reviewed related standard operating procedures. GAO also interviewed officials at SBA headquarters and eight SBA district offices, and representatives of nine Small Business Development Centers (selected for a high number of agreements with HBCUs and other factors).

What GAO Found

The Small Business Administration (SBA) worked with Historically Black Colleges and Universities (HBCU) to foster entrepreneurship, primarily through its Small Business Development Center program (which provides counseling and training), strategic alliance memorandums, and co-sponsorship agreements. Two HBCUs—Howard University and the University of the Virgin Islands—have hosted SBDC “lead centers” since the 1980s. SBA also signed at least 35 strategic alliance memorandums with HBCUs and at least 16 co-sponsorship agreements in 2013–2018.

In 2018, SBA developed a plan to support HBCUs (including goals and measures) for the White House Initiative on HBCUs. However, SBA headquarters did not communicate this plan or its goals to key Small Business Development Centers or SBA district offices (those with HBCUs in their service areas). As a result, SBA may have missed opportunities to collaborate with HBCUs and help achieve the goals of its plan.

SBA has collected limited information about its programs and activities with HBCUs. SBA could not establish a baseline for performance measures developed in its 2018 plan because SBA district offices and the Small Business Development Centers are not required to collect or report information about their HBCU-related outreach and other activities. For example, while representatives from the nine Small Business Development Centers with whom GAO spoke said they conducted outreach to HBCUs, this information was not reported to SBA headquarters. Without collecting relevant information about its HBCU-related efforts, including data for performance measures, SBA cannot assess the extent or effectiveness of its efforts to support HBCUs.

What GAO Recommends

GAO is making three recommendations, including that SBA communicate planned efforts to support HBCUs to key Small Business Development Centers and district offices, and collect additional information on its efforts to support HBCUs. SBA agreed with GAO's recommendations.
Background

SEC is a federal agency responsible for protecting investors, maintaining fair, orderly, and efficient markets, and facilitating capital formation. Among its efforts, SEC requires public companies to disclose meaningful financial and other information to the public, examines firms it regulates, and identifies and investigates potential violations of federal securities laws. Each year, SEC brings hundreds of enforcement actions—including judicial enforcement actions and administrative proceedings—against individuals and companies as a result of its investigations. Examples of actions taken in fiscal year 2018 include charges against a company that allegedly defrauded investors in a Ponzi scheme and charges against a bank for misconduct in its sales practices for certain financial products offered to retail investors.

SEC's responsibilities are divided among five divisions and 24 offices. The Division of Enforcement conducts investigations of potential violations of federal securities laws. Enforcement recommends, when appropriate, that SEC bring enforcement actions, litigates these actions, negotiates settlements on behalf of SEC, and works with other law enforcement agencies to bring criminal cases when warranted. Enforcement is currently led by two co-directors who report to the Chairman. Enforcement staff operate from headquarters in Washington, D.C., and in 11 regional offices.

Enforcement maintains a database that tracks enforcement-related activities, including all cases from investigation through litigation, and is the source of statistics used in public reporting (see fig. 1). For tracking purposes, “case” encompasses all stages of a possible enforcement action, beginning either as a matter under inquiry or as an investigation. Some cases advance and become an enforcement action. Enforcement's database includes all key case data other than data on financial penalties and disgorgements, which SEC's Office of Financial Management stores and manages.

Enforcement's case management specialists (CMS) are responsible for recording key data into the database and conducting quality checks on the data throughout the investigative and litigation processes of a case. There are two groups of CMS: local CMS and national CMS, both of which can be located in SEC regional offices or at SEC headquarters. Local CMS in regional offices report to their regional managers but coordinate with the Enforcement Case Management and Systems Reporting Group. National CMS and local CMS at SEC headquarters report to the Case Management Systems and Reporting Group. National CMS have the responsibility of reviewing and verifying case data input by local CMS.

SEC Has Written Procedures for Recording Enforcement Data but Not for Public Reporting of Enforcement Statistics

Enforcement Has Documented Procedures for Recording and Verifying Enforcement Data

Enforcement has documented procedures for recording and verifying enforcement-related data in its central database. More specifically, the Enforcement database user guide has step-by-step procedures for recording case data and clear descriptions of each data entry field. For example, the guide includes brief descriptions of primary classifications—or categories—used to describe the nature of the enforcement action (such as insider trading or delinquent filing). According to the database user guide and other SEC documentation, local CMS have the primary responsibility for recording most case data used in Enforcement metrics.
Local CMS may assist with data recording in the opening of a case as a matter under inquiry or, if it is known the case will advance to the next stage, as an investigation. The user guide also states that local CMS are responsible for recording the advancement of a case from an investigation to an enforcement action. According to the user guide, CMS use information (generally, case documentation) received from the courts or SEC staff responsible for the case to create the action entry in the central database, including the primary classification for the action. CMS also facilitate closing completed cases in the database.

Enforcement procedures call for Enforcement staff to perform multiple data reviews for all information in the Enforcement database, according to Enforcement staff and the user guide. According to the user guide, local CMS review the accuracy of key case-related data recorded in the system at certain stages as a case proceeds (see fig. 2). The local CMS add case information by checking any new documentation, such as court filings. In addition to the review by the local CMS, national CMS also are to review newly opened cases, as well as cases that have advanced to an investigation, changed from an investigation to an action, or closed. To do this, national CMS compare information recorded in the system against any primary documents related to the case, such as court documentation. Finally, Enforcement staff told us that they have an informal process whereby a group of attorneys in the Case Management and Systems Reporting Group review all primary classifications for enforcement actions.

Enforcement Does Not Have Documented Procedures for Generating Its Annual Report and Verifying the Statistics Used in It

Enforcement lacks written procedures for generating the Enforcement Annual Report, including for compiling and ensuring the accuracy of the statistics published in it. Enforcement staff explained that they follow an informal process to generate the annual report, which includes steps to help ensure reliable reporting and detect and prevent errors (see fig. 3). However, Enforcement was unable to provide documentation of this process or of the implementation of the steps to help ensure accuracy.

According to staff, the process for generating the annual report includes selecting what statistics to include and what activities and accomplishments to describe in the report narrative. Specifically, Enforcement staff said that the division's co-directors hold regular weekly meetings with their staff to discuss management of the division. Staff said program metrics and other measures may be discussed at these meetings, including the types of information and statistics that might be used in the Enforcement Annual Report. According to the staff, at the end of the fiscal year the co-directors determine what information and statistics the division will include in reports.

Once decisions have been made about the annual report's content, Enforcement staff told us a contractor uses software queries of the database to compile statistics for the report based on data parameters defined by Enforcement staff. An Enforcement staff member familiar with the data reviews the queries' output to verify accuracy, according to Enforcement staff. Staff then add the compiled statistics to the draft annual report. According to staff, the draft report is then sent to the Office of Public Affairs for formatting and publication. (A brief sketch illustrating this kind of statistics compilation follows.)
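The report does not identify the query tools or database schema Enforcement's contractor uses, so the following is only a minimal sketch of the compilation step described above: tallying enforcement actions by fiscal year, primary classification, and stand-alone versus follow-on status. All table and field names are hypothetical assumptions, not SEC's actual systems.

```python
# Hypothetical sketch of compiling report statistics from case records.
# Field names ("fiscal_year", "primary_classification", "action_type") are
# illustrative; they are not SEC's actual schema.
import pandas as pd

actions = pd.DataFrame({
    "fiscal_year": [2018, 2018, 2018, 2017],
    "primary_classification": ["Insider Trading", "Delinquent Filing",
                               "Insider Trading", "Issuer Reporting"],
    "action_type": ["stand-alone", "follow-on", "stand-alone", "stand-alone"],
})

# Data parameters (here, the fiscal year) would be defined by staff; the
# query then tallies actions by classification and by stand-alone vs.
# follow-on status, mirroring the kinds of summary tables described above.
fy2018 = actions[actions["fiscal_year"] == 2018]
by_class = fy2018.groupby("primary_classification").size()
by_type = fy2018.groupby("action_type").size()
print(by_class, by_type, sep="\n\n")
```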
Enforcement staff stated that staff familiar with the data perform an additional check to ensure that no data values were mistyped or otherwise edited in the formatting process. Finally, the co-directors of Enforcement are to review the draft report. After they give a final approval, the annual report is published.

Control activities such as written procedures help ensure that operational processes are effective and actions are taken to address risks. In particular, federal internal control standards identify documentation—including documentation that demonstrates procedures are being implemented—as a necessary part of an effective internal control system and as a means to help detect and prevent errors. Enforcement staff stated that the division does not have written procedures for generating its annual report or documenting the implementation of review processes because the report is not required by law and is discretionary. The staff said they were confident about the reliability of report data because staff were familiar with enforcement data and the informal processes they currently use to verify accuracy. In contrast, Enforcement uses documented SEC guidelines for reviewing and verifying the data used to support performance metrics in the agency-wide SEC Annual Performance Report.

Documenting written procedures for generating Enforcement's annual report and for the processes it uses to verify published statistics—including documentation that procedures were implemented—would provide Enforcement with greater assurance that staff follow necessary steps to help ensure the reliability and accuracy of reported information. Reliability and accuracy of information are important to maintaining the division's credibility and public confidence in its efforts. In addition, developing written procedures would better position Enforcement to manage risk associated with staff turnover and help ensure continuity of operations in its public reporting.

SEC Has Made Modifications to Its Reporting of Enforcement Statistics Since 2009

Since 2009, SEC has made changes to how it reports and presents enforcement or enforcement-related statistics, which are included in a number of reports (see table 1). As previously discussed, we reviewed reports from 2009 through 2018 that included enforcement statistics. More specifically, SEC made the following changes to its public reporting of enforcement statistics, which included the creation of a stand-alone Enforcement Annual Report in 2017. Prior to 2017, Enforcement reported similar statistics in the annual Select SEC and Market Data Report.

Definition of enforcement actions. Enforcement staff told us that before 2013, the Select SEC and Market Data Reports changed little from year to year, with the previous year's report used as a template to create the next one. SEC adjusted its definition of enforcement actions in the 2013 report, and included notes explaining the change and providing what the number of enforcement actions would have been under the previous definition.

Presentation of enforcement statistics. Enforcement staff said the Office of the Chief Operating Officer determined changes in presentation (such as the order of enforcement action classifications) in the Select SEC and Market Data Report. In 2015, Enforcement changed how the report presented summary data for enforcement actions.
Previously, Enforcement counted enforcement actions as civil actions or administrative proceedings, but the fiscal year 2015 report separately identified and counted the proceedings as stand-alone (initial) or follow-on (after initial action). Enforcement staff said these changes were made possible by better software that allowed for enhanced and expanded presentation of the data.

Enforcement Annual Report. As previously mentioned, the Select SEC and Market Data Report was discontinued after the fiscal year 2017 report, and the Enforcement Annual Report was first published in November 2017. The annual report included additional data tables of enforcement statistics not previously reported (some comparing statistics to the previous year) and narratives about enforcement priorities and cases. Enforcement staff told us the annual report was created to increase transparency and provide more information and deeper context than previous reporting had.

Conclusions

The SEC Division of Enforcement voluntarily issues an annual report that includes statistics and highlights significant enforcement actions and initiatives of the previous fiscal year. Enforcement has documented procedures and has designated staff to input and review enforcement-related data in its case-tracking system. However, the division does not have written procedures for generating its public reporting (currently, the annual report), including for compiling and verifying the report's statistics, or documenting that procedures were implemented as intended. Written procedures would help Enforcement ensure the reliability and accuracy of reported information, manage risk associated with staff turnover, and promote continuity of operations in its public reporting.

Recommendation for Executive Action

The Securities and Exchange Commission's Co-Directors of Enforcement should develop written procedures for generating Enforcement's public reports, including procedures for compiling and verifying statistics used in the reports and documenting their implementation. (Recommendation 1)

Agency Comments

We provided a draft of this report to SEC for review and comment. In written comments (reproduced in appendix I), SEC generally agreed with our findings and concurred with our recommendation. In addition, SEC provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of the Securities and Exchange Commission, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Securities and Exchange Commission

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kevin Averyt and John Forrester (Assistant Directors), Jordan Anderson (Analyst in Charge), Tim Bober, Ryan Braun, Marc Molino, Kirsten Noethen, Barbara Roesmann, and Farrah Stone made key contributions to this report.
Why GAO Did This Study

Enforcement supports SEC's mission by bringing civil and administrative actions against individuals and entities for fraud, financial and accounting irregularities and misstatements, and other misconduct. According to SEC, these enforcement actions serve as a deterrent against future wrongdoing. Since 2017, Enforcement has published an annual report that provides statistics on its enforcement activities and highlights its priorities for the coming year.

GAO was asked to examine SEC reporting of enforcement statistics. This report examines (1) the ways that enforcement statistics reporting changed over the last 10 years, and (2) policies and procedures for recording, reviewing, and reporting enforcement statistics. GAO reviewed SEC's internal policies, procedures, and manuals for recording, verifying, and reporting data. GAO also interviewed SEC officials and reviewed past SEC reports containing enforcement statistics.

What GAO Found

Since 2009, the Division of Enforcement (Enforcement) in the Securities and Exchange Commission (SEC) has made modifications to its reporting of enforcement statistics, including by releasing a stand-alone annual report beginning in fiscal year 2017. The Enforcement Annual Report included additional data on enforcement statistics not previously reported and narratives about enforcement priorities and cases. Enforcement staff told us the annual report was created to increase transparency and provide more information and deeper context than previous reporting had provided.

Enforcement has written procedures for recording and verifying enforcement-related data (including on investigations and enforcement actions) in its central database. However, Enforcement does not have written procedures for generating its public reports (currently, the annual report), including for compiling and verifying the enforcement statistics used in the report. To produce the report, Enforcement staff told GAO that staff and officials hold meetings in which they determine which areas and accomplishments to highlight (see figure). Enforcement was not able to provide documentation demonstrating that the process it currently uses to prepare and review the report was implemented as intended. Developing written procedures for generating Enforcement's public reports and documenting their implementation would provide greater assurance that reported information is reliable and accurate, which is important to maintaining the division's credibility and public confidence in its efforts.

What GAO Recommends

GAO recommends that SEC's Co-Directors of Enforcement develop written procedures for generating Enforcement's public reports, including procedures for compiling and verifying statistics used in the reports, and documenting their implementation. SEC agreed with the recommendation.
Background

Retirement Resources

Many older Americans are retired and rely on different parts of the U.S. retirement system for their financial security. The U.S. retirement system is often described as being composed of Social Security, employer-sponsored pensions and retirement savings plans, and individual savings. In addition, older Americans may work past traditional retirement ages or phase into retirement.

Social Security's Old-Age and Survivors Insurance program is the foundation of the U.S. retirement system and provides benefits to retired workers, their families, and survivors of deceased workers. In 2018, about 53 million retirees and their families received $844.9 billion in Social Security retirement benefits, according to the Social Security Administration. However, Social Security is facing financial difficulties that, if not addressed, will affect its long-term stability. If no changes are made, current projections indicate that by 2034, the retirement program Trust Fund will only be sufficient to pay 77 percent of scheduled benefits.

Employer-sponsored pensions include DB plans, which generally promise to offer a monthly payment to retirees for life. Employers also sponsor defined contribution (DC) plans, such as 401(k)s, in which individuals accumulate tax-advantaged retirement savings in an individual account based on employee and/or employer contributions, and the investment returns (gains and losses) earned on the account. Participants in both DB and DC plans receive certain tax preferences provided the plans comply with requirements outlined in the Internal Revenue Code (IRC). For fiscal year 2018, estimated tax expenditures related to retirement plans and savings amounted to about $188 billion. The Employee Retirement Income Security Act of 1974 (ERISA) outlines minimum standards and requirements that must be met by most private sector employer-sponsored retirement plans; it does not, however, require any employer to establish, or continue to maintain, a retirement plan. Assets rolled over from employer-sponsored DC plans when individuals change jobs or retire are the primary source of funding for individual retirement accounts (IRAs). Over the past 40 years, private sector employers have increasingly moved from offering DB plans to offering DC plans. While DC plans offer more portability, some financial risks—such as poor investment returns, decreases in interest rates, and increases in longevity—have shifted from the employer to the employee, with important implications for individuals' retirement planning and security.

Individual savings are any other non-retirement plan savings and investments. Home equity is an important asset for many households. Other sources of savings or wealth may include amounts saved from income or wages, contributions to accounts outside of a retirement plan, non-retirement financial wealth that is inherited or accumulated over time, and equity from other tangible assets such as vehicles.

Wealth: For analyses in this report, we defined wealth as net worth, i.e., assets minus debt. Assets could be financial (e.g., savings accounts, stocks, bonds, retirement accounts) or nonfinancial (e.g., the value of any houses or vehicles). Retirement accounts include defined contribution plans, such as a 401(k), or individual retirement accounts (IRAs). Net worth is a measure often used by researchers studying retirement security.
Present value of future income from Social Security and defined benefit pensions: Older Americans may also have other future retirement resources, not included in net worth, such as the present value of benefits expected from defined benefit (DB) pension plans and Social Security. These present value estimates could be included in a broader definition of economic resources or wealth, and we were able to produce estimates of these additional retirement resources to supplement our analysis of the distribution of income and wealth among older Americans over time. While all estimates produced using survey data are subject to some uncertainty, our present value estimates for these additional retirement resources are also subject to additional uncertainty that arises from using another data source—the Financial Accounts of the United States—to create a measure of aggregate defined benefit entitlements; having limited information about lifetime earnings in the Survey of Consumer Finances (SCF); and making assumptions about life expectancy, real discount rates, and retirement ages, which are unlikely to hold for all households. Data limitations prevented us from producing this broader measure of retirement resources for our analysis examining the distributions of income and wealth as a cohort of older Americans aged.

Income: For analyses in this report, we defined household income as the sum of income across all sources, including wages and salaries, Social Security benefits, traditional pension benefits from defined benefit plans, withdrawals from retirement accounts, and income from any other sources, such as interest on financial assets or benefits from social safety net programs such as the Supplemental Nutrition Assistance Program (SNAP).

See appendix I for more information on our definitions and the methods used to produce estimates of wealth, the present value of future income expected from Social Security and defined benefit plans, and income.

Older Americans may also have wages or salaries from working longer as they transition to retirement. According to data from the Bureau of Labor Statistics, more older Americans are working. From 1989—the earliest starting year for our analyses—to 2018, the labor force participation rate for Americans aged 55 or older increased from 30 percent to 40 percent. In addition, some older Americans may receive income from financial assets, such as interest or dividends, and from other benefit programs, such as Social Security Disability Insurance.

Increases in the Number of Older Americans

The number of older Americans is increasing faster than the population as a whole. In 1990, about 52 million, or around 1 in 5, people in the United States were aged 55 or older. By 2030, that number is expected to be about 112 million, or around 1 in 3. The aging of the baby boomers—that is, people born between 1946 and 1964—as well as increasing longevity and lower fertility have contributed to this trend. The oldest baby boomers turned 55 in 2001 and the youngest are turning 55 this year. In addition, average life expectancy for those ages 65 or older has increased significantly over the past century and is projected to continue to increase. For example, a man turning 65 in 2030 is expected to live, on average, to age 85.0, an additional 5.3 years compared to a man who turned 65 in 1980, who was expected to live, on average, to age 79.7.
A woman turning 65 in 2030 is expected to live, on average, to age 87.3, an additional 3.5 years compared to a woman who turned 65 in 1980, who was expected to live, on average, to age 83.8. Since life expectancies are averages—some individuals will live well beyond their life expectancy—longer life expectancies, combined with the possibility of living well beyond life expectancy, mean that people must now prepare for the potential for more years in retirement with greater risk of outliving their savings.

Disparities in Income and Wealth Increased Among Older Households Even As More Households Had Retirement Accounts

Disparities Increased from 1989 to 2016, with Households in the Top 20 Percent Generally Having Disproportionately Higher Income and Wealth in 2016

Disparities in income and wealth among older households have become greater over the past 3 decades, according to our analysis of 1989 to 2016 data from the SCF. For our analysis, we divided older households in the data into five groups, or quintiles, based on income or wealth. Each year of data in our analysis used a different set of households. Therefore, each quintile includes different sets of households over time. In other words, the households in the top 20 percent in 1989 are not the same households as those in the top 20 percent in 2016. While the households included in the SCF are different for each year of data we used in our analysis, we were able to examine how the distribution of income and wealth across older households changed over time.

We found mostly higher income and wealth across all quintiles over time, disproportionately so for the top quintile. For example, we estimated that average income of households in the top 20 percent in 1989 was about $242,000. In 2016, estimated average income of households in the top 20 percent was about $398,000, which is about 64 percent higher (see fig. 1). In comparison, estimated average income of households in the bottom quintile—the bottom 20 percent—was about $9,000 in 1989. In 2016, estimated average income of households in the bottom 20 percent was about $14,000, which is about 55 percent higher. We found similar results when we analyzed changes in median income.

Our findings were similar when we analyzed changes in wealth (defined as net worth). Estimated average wealth of households in the top 20 percent was about $2.1 million in 1989. In 2016, estimated average wealth of households in the top 20 percent was about $4.6 million, which is more than twice as high. (See fig. 2.) In comparison, average wealth of households in the bottom 20 percent was similar over time from 1989 to 2013. In fact, in both 2010 and 2013, estimated average wealth of households that were in the bottom 20 percent in either of those years was negative, meaning that those households, on average, had more debt than assets. (See text box for discussion of how recessions during the time period of our analysis could affect retirement security.)

Within the top quintile, a disproportionate share of income and wealth is held by the top 1 percent compared to the next 19 percent. (See figs. 3 and 4 for average income and wealth of households in the top 1 percent.) For example, we found households in the top 1 percent in 1989 had estimated average wealth that was about $13 million more than estimated average wealth for households in the next 19 percent (about 10 times as much estimated average wealth).
By 2016, households in the top 1 percent had about $34 million more in estimated average wealth compared to households in the next 19 percent (about 13 times as much estimated average wealth).

Social Security is the foundation of retirement security in the United States, and along with income from traditional DB pensions, can be particularly important for older households with lower wealth. As discussed in the text box above, some older Americans will expect future income from Social Security, DB pensions, or both. We analyzed the present value of these sources for two subsets of older Americans: (1) those who expect future income from Social Security but not DB pensions, and (2) those who expect future income from both Social Security and DB pensions.

On average, households with lower wealth, and that expect future income from Social Security but not DB pensions, may receive a significant income stream from future Social Security benefits, according to our analysis of SCF data (see fig. 5). The bottom 20 percent have little in wealth, on average, but the estimated present value of future Social Security benefits provides them relatively significant financial security in retirement. On the other hand, for the top two quintiles, wealth was the most important retirement resource, as households in the top quintile have wealth that, on average, far exceeds the estimated present value of benefits provided by any future Social Security or pension benefits. We found similar results for households with lower wealth that expect future income from Social Security and DB pensions. While the lower quintiles may have little in wealth, on average, they may expect to receive a significant income stream from future Social Security and DB pension benefits (see fig. 6). Wealth was the most important financial retirement resource for the top two quintiles, on average.

While disparities remain, the present value of future income expected from Social Security and DB pensions mitigates these disparities to some extent for those households that expected such income, as illustrated by the examples below.

Estimates for all older households in 2016 that expect future income from Social Security but not DB pensions: Households in the top quintile had, on average, about $6.1 million in assets, about 272 times as much as the bottom quintile, which had estimated assets of, on average, about $22,000. When looking at a broader definition of retirement resources (assets plus the present value of future income from Social Security), we estimated that the top quintile had, on average, $6.6 million in these resources, about 27 times as much as the bottom quintile, which had, on average, about $241,000.

Estimates for all older households in 2016 that expect future income from Social Security and DB pensions: Households in the top quintile had, on average, about $3.2 million in assets, about 61 times as much in assets as the bottom quintile, which had estimated assets of, on average, about $52,000. When looking at a broader definition of retirement resources (assets plus the present value of future income from Social Security and DB pensions), we estimated that the top quintile had, on average, about $4.3 million in these resources, about 8 times as much as the bottom quintile, which had, on average, about $535,000. (A simplified sketch of this kind of present value calculation follows.)
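GAO's present value methodology is described in the report's appendix I rather than here; the sketch below illustrates only the general technique of valuing an expected future benefit stream, assuming a fixed annual benefit, a constant real discount rate, and a simple declining survival schedule. All amounts and parameters are illustrative assumptions, not GAO's estimates.

```python
# Minimal sketch (illustrative only): present value of an expected future
# annual benefit, discounted at a real rate and weighted by the probability
# of surviving to each year. The benefit amount, discount rate, and survival
# model below are assumptions for illustration, not GAO's methodology.

def present_value(annual_benefit: float, real_rate: float,
                  survival_probs: list) -> float:
    """Sum of discounted, survival-weighted benefit payments.

    survival_probs[t] is the probability of being alive t+1 years from now.
    """
    return sum(p * annual_benefit / (1 + real_rate) ** (t + 1)
               for t, p in enumerate(survival_probs))

# Example: an $18,000 annual Social Security benefit over a 20-year horizon,
# a 2 percent real discount rate, and survival probability declining 3
# percentage points per year from 100 percent.
survival = [max(0.0, 1.0 - 0.03 * t) for t in range(20)]
pv_ss = present_value(18_000, 0.02, survival)

# The "broader definition of retirement resources" discussed above adds this
# present value to the household's assets.
assets = 22_000
print(f"PV of benefits: ${pv_ss:,.0f}; broader resources: ${assets + pv_ss:,.0f}")
```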
Recent research has theorized that benefits expected from Social Security go “a long way” to explaining why having little in DC accounts and future income expected from pensions does not necessarily translate into dramatic changes to living standards as people retire. In particular, the progressivity of Social Security, meaning Social Security benefits replace a higher percentage of pre-retirement earnings for lower-earning households, could be helpful for these households, especially in the absence of other resources, such as retirement accounts.

Income and Wealth Disparities by Demographic Characteristics

Income and wealth were consistently lower over time for older households headed by someone who was a racial minority, single, or had not attended college, according to our analysis of 1989 through 2016 SCF data. (See fig. 7 for an example using the middle quintile.) We found these disparities existed across all quintiles and all years (see fig. 8 for another example, this time using the top quintile). Generally, the largest disparities from 1989 to 2016 were between (1) households in which the head had not attended college and households in which they had, and (2) coupled households and single women. These results are consistent with our prior work, which found that women age 65 and older had less retirement income, on average, and lived in poverty at higher rates than men in that age group. Disparities were also sizeable for households headed by someone who was white and non-Hispanic compared to those headed by a minority.

There are multiple reasons why households headed by someone with at least some college education may have more wealth in retirement. Most notably, those with more education may have access to higher-paying jobs and be able to save more. Our review of the literature identified several other theories to explain this association. These include (1) education increases awareness about the need to save, (2) highly-educated individuals may have more financial education and achieve higher rates of return on savings, (3) those with more education may be willing to work longer, and (4) highly-educated individuals may have wealthier parents and thus may have received larger bequests. Our prior work has explored how recent trends in marital patterns and saving for retirement, among other factors, can negatively affect retirement security for minorities, women, or those who are single.

Percentage of Older Households with Retirement Accounts Has Increased Since 1989, Although Non-Retirement Assets Remain Important

The percentage of households with retirement accounts was higher across all wealth quintiles in 2016 compared to 1989, and it was disproportionately higher for the top quintile, according to our analysis of SCF data. In 1989, the percentage of households with retirement accounts—amounts in DC plans and IRAs—ranged from 4 percent of the bottom quintile to 65 percent of the top quintile (see fig. 9). By 2016, 11 percent of households in the bottom quintile had retirement accounts compared to 86 percent of households in the top quintile. These increases reflect the transition to more employers offering DC plans, among other factors. Further, the percentage of households in the bottom quintile with retirement accounts had not returned to its pre-recession rate. As discussed earlier, households with less wealth may be more reliant on income from Social Security and DB plans. Further, we found the amount in retirement accounts was often low, particularly for the lower quintiles.
In 2016, 89 percent of the households in the bottom quintile had no retirement accounts, and another 10 percent had account balances of less than $50,000 (see fig. 10). In comparison, over half the households in the middle quintile had retirement accounts, and almost all of these households had less than $200,000 in their accounts.

Older Americans may rely on resources other than those discussed above for financial security in retirement (see fig. 11), and these “non-retirement assets” remained important over the time span of our analysis, regardless of their value relative to retirement account balances or the present value of future income from Social Security or DB pensions.

Home equity. We estimated that over 80 percent of households in each of the top four quintiles of the wealth distribution owned a home in each year of our analysis. However, the home ownership rate for households in the bottom quintile in each year of our analysis was consistently much lower than for the other quintiles, ranging between 18 and 32 percent. Further, the home ownership rate for households in the bottom 20 percent in 2016 (19 percent) was significantly lower than the home ownership rate for households in the bottom 20 percent in 2007 (28 percent), the starting year for the most recent recession. In 2016, the estimated average amount of home equity was about $2,000 for households in the bottom quintile and $50,000 for the second-from-the-bottom quintile, compared to about $118,000 for the middle quintile, about $208,000 for the fourth (or second-from-the-top) quintile, and about $559,000 for the top quintile. According to researchers, most households appear to treat a house as a source of reserve wealth that can be tapped in the event of a substantial expense, further pointing to the importance of home ownership for many older Americans.

Vehicles. A majority of households in each quintile of the wealth distribution owned a vehicle across all years in our analysis, although the bottom quintile had ownership rates that were disproportionately lower. Despite this, we estimated that vehicles provided higher value, on average, relative to other non-retirement assets for households in the bottom quintile from 2010 onward. For example, in 2016, the estimated average value of vehicles among households in the bottom quintile was about $7,000, compared to estimated average values of less than $2,000 in home equity and about $3,000 in all other non-retirement assets.

All other non-retirement assets. For the top quintile of households, the average value of these “other assets”—which included stocks, bonds, and other savings outside of retirement accounts, among other things—was more than average home equity or the average value of vehicles over the period of our analysis. Estimated average wealth in this other assets category was about $3.3 million in 2016 for the top quintile.

Individual income sources and debt were also important factors in older households' financial security. Researchers have examined the importance of income sources for households and found Social Security is more important for households with lower incomes, while older households with the most income tend to have a diverse range of income sources, such as earnings from financial assets and income from DB plans. We found that debt could have a substantial effect on households' financial security, particularly for the bottom 20 percent.
For example, in 2010 and 2013, average net worth for this group was negative because debt was greater than assets.

A Substantial Number of Older Americans Are Living Into Their Seventies or Early Eighties, Which May Have Implications for Retirement Security

A substantial number of older Americans born from 1931 through 1941 lived into at least their 70s or early 80s, according to our analysis of data on a cohort of people born in these years. (See the text box and app. I for more on how we analyzed Health and Retirement Study (HRS) data on this cohort.) However, this same cohort faced disparities in longevity. Further, our analysis, as well as that of other researchers, found income and wealth each have strong associations with longevity, as do certain demographic characteristics, such as gender and race. However, even among those with multiple factors associated with a shorter life, such as having lower mid-career earnings and not having attended college, a significant proportion from our cohort were alive in 2014, when they were in their 70s or early 80s. Taken all together, individuals may live a long time, even individuals with factors associated with lower longevity, such as low income or education. Those who live a long time and have little or nothing in DC account balances or pension benefits may have to rely primarily on Social Security or safety net programs.

Analyzing Income, Wealth, and Longevity

We examined the association of income and wealth with longevity in a nationally representative sample of Americans born from 1931 through 1941. Throughout this analysis, our references to “older Americans” and “households” apply to that specific subset of older Americans born from 1931 through 1941 and their households. The Health and Retirement Study (HRS) began in 1992 and first surveyed these individuals when they were 51 to 61 years old. The same individuals have been re-interviewed every 2 years since, provided they continued to participate in the survey, and the most recent complete data are from 2014, when those who were still alive were 73 to 83 years old. We were able to measure deaths over a period of 22 years (1992 through 2014). Every 2 years, the HRS attempted to measure whether the original respondents were still alive, but these longevity data were incomplete because some of the original respondents declined to participate in later waves of the survey. Once these respondents left the survey, their actual longevity could not be followed. Therefore, we used survival analysis to estimate the proportion of individuals in the 1992 sample alive in 2014. Survival analysis accounts for survey respondents with complete or incomplete longevity data and allowed us to estimate the chance of death by any given time in the observation period. Most importantly, our analysis assumed actual longevity from 1992 to 2014 of the individuals in our analysis did not have a systematic relationship with whether the original HRS respondents continued to participate in the study, except that leaving the study implied a later death. We believe this assumption to be reasonable for the purpose of our analysis for two reasons. First, a small percentage (8 percent) of the original respondents dropped out of the survey, so the impact of any longevity differences among the population who dropped out would likely have been small.
Second, while some baseline characteristics of respondents do appear correlated with non-response over time, the population that dropped out of the study does not appear to vary significantly from those completing each wave, except for race and ethnicity. We conducted this analysis at the individual level for HRS respondents in 1992 and any spouses or partners also born from 1931 through 1941. Additional details and caveats to this analysis are available in appendix I. We broke the sample into quintiles based on their income or wealth. To determine an individual's place in the income distribution, we measured mid-career household earnings using administrative records from the Social Security Administration that are linked to the HRS data. Specifically, we defined mid-career household earnings based on average annual earnings reported to the Social Security Administration for years when the survey respondent we identified as the household head was ages 41 to 50, as well as the earnings of their spouse or partner during those years if the respondent was part of a couple in 1992. This measure of earnings provides a relatively stable indicator of the household's labor market experience, compared to using a single year of earnings, which could be unusually high or low. For wealth, we used the household's initial net worth in 1992, including any balances in defined contribution accounts or individual retirement accounts, but excluding second homes, which HRS did not consistently capture in all years. In both instances, the sample was broken into quintiles. For additional details on our methodology, see appendix I.

Overall, an estimated 63 percent of the individuals in our sample were alive in 2014 (ages 73 to 83), and greater levels of income and wealth were associated with greater longevity in our analysis of HRS data. For income, an estimated 52 percent of individuals from households in the bottom quintile of the mid-career earnings distribution were alive in 2014, compared to an estimated 74 percent of individuals from households in the top quintile. (See fig. 12.) The percentages by wealth quintile were similar. Other researchers have similarly found that greater levels of income and wealth are associated with greater longevity. For example, a researcher at the Social Security Administration found that men with higher earnings have seen greater gains in longevity than those with lower earnings. Understanding the association among income, wealth, and longevity is complicated because of relationships among the characteristics, as well as their relationships with demographic characteristics (see text box). Besides income and wealth, several demographic characteristics were also associated with longevity in our analysis of HRS data, and these relationships have also been noted in other researchers' studies.

Women tended to live longer than men: Women had greater longevity through 2014, with an estimated 69 percent living to at least ages 73 to 83 compared to an estimated 58 percent of men.

Non-Hispanic whites and Hispanics tended to live longer than blacks: For Hispanics, an estimated 68 percent lived to at least 2014, as did an estimated 65 percent of non-Hispanic whites, compared to an estimated 52 percent of non-Hispanic blacks.
More educated individuals tended to live longer than those with less education: An estimated 75 percent of college graduates lived to at least 2014, compared to an estimated 65 percent of those who graduated from high school and an estimated 50 percent of those with less than a high school diploma or GED.

Individuals who self-reported being in good health tended to live longer than those who reported being less healthy: Among those who self-reported being in excellent health in 1992, an estimated 78 percent lived to at least 2014, compared to an estimated 31 percent of those who reported being in poor health.

Income, Wealth, and Demographics Are Interrelated

The relationships of income, wealth, and demographics with longevity are complex because of interactions among these characteristics themselves, which make it difficult to determine the direction or extent of causality. For example, there are many potential interactions among educational status, income, and wealth. Higher levels of education could provide access to better job opportunities, increasing income. Education could contribute to greater financial literacy and better financial decision making, increasing wealth. Having access to wealth could make it easier to attain additional education. While income, wealth, and education all are associated with longevity, it is difficult to interpret their individual associations with longevity because of their possible interactions with each other.

We estimated that individuals whose households were in the top two quintiles (top 40 percent) of the mid-career earnings distribution were more likely than their counterparts in the bottom 60 percent to be alive in 2014 (ages 73 to 83) in an analysis controlling for race and ethnicity, gender, age, education level, and initial self-reported health status on entry into HRS in 1992. In a similar analysis, we found that individuals from households in the top quintile (top 20 percent) of wealth in 1992 were more likely to be alive than their counterparts in the bottom four quintiles. Our findings are consistent with the work of other researchers who also controlled for such factors. However, such observational studies are only able to demonstrate that a statistical association exists between two characteristics. For example, one study that found a strong association between income and life expectancy specifically notes that unmeasured factors likely affect the association. Similarly, we cannot determine from our analysis the extent to which income or wealth causes differences in longevity.

Even among individuals with characteristics associated with decreased longevity, a substantial proportion of older Americans lived at least into their 70s or early 80s, according to our analysis of 1992 to 2014 HRS data. For example, we constructed three scenarios to illustrate how longevity varies for those with different mid-career earnings and education. Among those in the "bottom" scenario, those individuals who had no college education and were from households in the bottom 20 percent of the earnings distribution, an estimated 50 percent were still alive in 2014 (see fig. 13). We estimated that the corresponding percentages for our "middle" scenario and "top" scenario were 65 percent and 80 percent, respectively, of individuals still alive in 2014.
Thus, even among those with education and earnings associated with lower longevity, a significant proportion, 50 percent, were still alive in 2014, and these individuals will need to provide for themselves through their remaining years. We also analyzed a subset of our bottom scenario that included those who had no college education, were from households in the bottom 20 percent of the earnings distribution, and whose self-reported health status was fair or poor. While the percentage of the individuals who survived was lower, an estimated 39 percent were alive in 2014, which is a substantial proportion. Most individuals have the potential for an unexpectedly long life, including individuals with demographic characteristics, income, or wealth associated with lower longevity. In addition, individuals may face major expenses as they age. For example, several experts we spoke with noted that health care costs can pose a particular challenge at older ages. Taken together, individuals may live a long time and face financial challenges in their later years, including those with less income and wealth. For example, of the individuals in the bottom group of our scenarios illustrating the effects of earnings and education on longevity, an estimated 50 percent were still alive in 2014. Should these individuals not have DC accounts or have little in them, or should they have little to no DB pension benefits, they may have to rely primarily on Social Security (which itself faces financing difficulties) or safety net programs.

While Income Disparities Declined As a Cohort of Older Americans Aged and Worked Less, Disparities in Wealth Persisted

Using HRS data and following the same households over time, we examined how income and wealth distributions changed and found that, in general, disparities in income decreased while disparities in wealth persisted among a cohort of older Americans as they aged (see text box for more information on our analysis). Households in the top 20 percent of mid-career earnings saw larger drops in income than households in other mid-career earnings groups, decreasing income disparities overall. During the same time period, the amount of wealth held by most households remained steady and wealth disparities persisted. We also found important differences in the distribution of income and wealth among households by race and ethnicity and education level.

Analyzing Income and Wealth for Households Over Time

We analyzed Health and Retirement Study (HRS) data to estimate how income and wealth distributions changed as a particular cohort of older Americans aged over time. We analyzed income, wealth, and select financial resources for the same group of survey respondents (heads of households) or their spouses or partners who responded to the survey in 1992 and were still alive and responded in 2014, which is the most recent year for which the data are complete. We defined wealth as net worth. Data limitations prevented us from producing estimates of the present value of future income expected from Social Security or defined benefit pensions. The heads of households we analyzed were from the original HRS cohort and were born from 1931 through 1941. If neither the head of household nor the spouse or partner interviewed in 1992 was still alive in 2014, their household was not included in our sample.
As a nationally representative longitudinal survey, the HRS allows us to follow the same set of Americans from their 50s through the remainder of their lives; these household heads or their spouses or partners had reached their 70s or early 80s by 2014, allowing us to estimate how income and assets changed for the households as they progressed through retirement. We are reporting medians, as our analysis indicated that means were not consistently reliable. Appendix VI contains additional figures examining how assets and income changed for households headed by individuals in HRS' "War Babies" cohort, who were born from 1942 through 1947. For our analysis, we divided older households in the data into five equally sized quintiles, or earnings groups, based on the number of households and their mid-career household earnings. We defined mid-career household earnings based on earnings reported to the Social Security Administration for years when the survey respondents were ages 41 through 50, as well as the earnings of their spouses or partners during those years if the respondents were part of a couple in 1992. For more on our analysis, see appendix I.

As described in the text box above, our analysis included households in which either the head of the household or their spouse or partner was still alive in 2014, and table 1 shows the race and ethnicity and education level of the household head, as well as the composition of the household. As discussed in the previous section, certain demographic characteristics, such as being a minority or being less educated, are associated with a shorter life. However, not everyone with these demographic characteristics will have a shorter life. As the table below shows, there are households in which the head had at least one of these characteristics and lived into his or her 70s or early 80s.

Income Disparities Decreased Overall as Higher-Earning Households in Our Cohort Saw Drops in Income

We analyzed HRS data and found that household income declined as heads of households born from 1931 through 1941 and their spouses or partners aged, with decreased earnings from work contributing to the decline as people retired. Those households that had the highest mid-career earnings—those in the top earnings group—experienced the largest declines in income from 1992, when the heads of household were ages 51 to 61, to 2014, when the surviving heads of household or their spouses or partners were ages 73 to 83 (see fig. 14). For example, estimated median income for the top earnings group decreased by 53 percent, from about $121,000 in 1992 to about $57,000 in 2014. In comparison, for those with the lowest mid-career earnings—those in the bottom earnings group—estimated median income declined by 36 percent, from about $28,000 to about $18,000 over this same period. The decrease in income disparities may reflect the shift from work-related earnings to Social Security as the largest source of income for households in the top 20 percent, indicating the possible transition from working to retirement. More specifically, in 1992, 94 percent of households in the top mid-career earnings group had work-related earnings, which contributed the largest amount to their income. By 2014, only 25 percent of the top earnings group still had work-related earnings, and Social Security provided the highest median value of all income sources.
Among households in the bottom mid-career earnings group, 68 percent had work-related earnings in 1992, and 15 percent continued to have work-related earnings in 2014. Similarly, work-related earnings provided the greatest source of income for these households in 1992, and Social Security provided the highest median value of all income sources for these households in 2014. However, concerns about retirement insecurity for those with lower earnings may remain. Social Security is progressive, meaning it replaces a higher percentage of income for those with lower earnings, but the formula for calculating Social Security benefits provides a higher benefit amount to those with higher lifetime earnings. In addition, those households with higher mid-career earnings maintained relatively higher income in retirement, perhaps due to their having higher levels of other types of non-wage income after retiring. For example, in 2014, a significantly greater percentage of households in the top two earnings groups had income from employer-sponsored retirement accounts compared to those in the bottom earnings groups, although households may not be consistent in how they spend down these funds.

Wealth Remained Steady for Most Households in Our Cohort, and Disparities Persisted

We analyzed HRS data from 1992 to 2014—when heads of households were in roughly their 50s to when they were in their 70s or early 80s—and found that for most households, the level of wealth was relatively consistent as they aged, and disparities in wealth persisted over time. As shown in figure 15, wealth remained relatively steady for households in the bottom three mid-career earnings groups over the time period we examined, while households in the top two mid-career earnings groups experienced larger fluctuations in wealth. More specifically, households in the top two earnings groups saw their wealth increase overall from 1992 to 2014, although gains from 1992 to 2006 were followed by declines in wealth from 2006 to 2014. Looking at the overall time period of our analysis, wealth disparities persisted between households in the top earnings groups and households in the bottom earnings groups. For example, in 1992, households in the bottom 20 percent had estimated median wealth of about $93,000 while households in the top 20 percent had estimated median wealth of about $432,000, a difference of about $339,000 (or the top had about 4.6 times the median wealth of the bottom). In 2014, households in the bottom 20 percent had estimated median wealth of about $66,000 while households in the top 20 percent had estimated median wealth of about $539,000, a difference of about $473,000 (or the top had about 8.2 times the median wealth of the bottom). Other researchers have found that some households may not spend down their wealth as much during retirement due to factors including a generally higher propensity to save, a desire to leave bequests, and the desire to self-insure against medical costs. Households in the top 20 percent of mid-career earnings had greater participation in retirement accounts (see sidebar) and increased home equity relative to other households, which may have contributed to wealth disparities over the time period of our analysis.

Retirement accounts. Among households that had retirement accounts, the median value of retirement accounts increased for all of our income groups (see fig.
16); however, the continued wealth disparities between higher- and lower-earning households may be due to significant differences in the value of retirement accounts and in household participation. The value of retirement accounts for households in the top and bottom earnings groups increased substantially between 1992 and 2014 (a 93 percent and 138 percent increase, respectively). Some of the increase in retirement account balances over time may be due to contributions to DC plans and IRAs during years in which individuals worked, as well as waiting until age 70½, when many individuals are required to take minimum distributions from their IRAs. Despite this potential for gains in account balances across the distribution, disparities still exist. In 2014, among households that had retirement accounts, we estimated that households in the top 20 percent had about three times as much in their retirement accounts as households in the bottom 20 percent (about $176,000 compared to about $54,000). Higher-earning households may not spend down their retirement account balances as much in retirement, whereas lower-earning households may have spent down all or part of their account balances. In addition to having more in their retirement accounts, a greater percentage of households in the top earnings group had retirement accounts compared to households in the bottom earnings group. For example, in 2014, an estimated 69 percent of households in the top 20 percent had retirement accounts compared to an estimated 19 percent of households in the bottom 20 percent.

Home equity. From 1992 to 2014, home equity increased across all mid-career earnings groups for households with home equity; however, households in the top two earnings groups saw greater increases in the value of their home equity compared to households in the bottom two earnings groups (see fig. 17). Over this time period, a greater percentage of households in the top 20 percent had home equity compared to households in the bottom 20 percent. More specifically, from 1992 to 2014, the percentage of households in the bottom 20 percent with home equity ranged from an estimated 61 percent to 70 percent. For the top 20 percent, the percentage of households with home equity ranged from 88 to 94 percent. Despite the recession from 2007 to 2009, which may have caused home values to depreciate, median home equity for households in the top 20 percent that had home equity increased by an estimated 30 percent from 1992 to 2014. At the same time, median home equity for the bottom 20 percent of households with home equity increased by an estimated 14 percent, though this change was not statistically significant. One expert we interviewed also noted recent real estate appreciation as benefiting wealthier retirees.

Race and Ethnicity and Education Were Factors in Persistent Income and Wealth Disparities As Households in Our Cohort Aged

Significant differences in income and wealth associated with race and ethnicity, as well as education levels, continued as households aged, according to our analysis of heads of households and their spouses or partners as they aged from roughly their 50s to their 70s or early 80s using 1992 through 2014 HRS data.

Race and Ethnicity

Non-Hispanic, white households in the bottom 40 percent of mid-career earnings had higher estimated median incomes, and non-Hispanic, white households across the mid-career earnings distribution generally had greater wealth, than minority households.
In terms of income, the gap between non-minority and minority households in the bottom 40 percent persisted even as median income decreased overall for households as they aged. For example, we estimated that, in 1992, non-Hispanic, white households in the bottom 20 percent had about $20,000 more in income than minority households. The income disparity was smaller (about $9,700) in 2014, but still remained. In terms of wealth, non-Hispanic, white households had persistently higher wealth compared to minority households across all levels of the mid-career earnings distribution. For example, among the bottom 20 percent of households, in 1992, non-Hispanic, white households had about $138,000 more in estimated median wealth than minority households. While this difference decreased to about $119,000 in 2014, the wealth difference remained. Similarly, for the top 20 percent of households, in 1992, non-Hispanic, white households had about $170,000 more in estimated median wealth than minority households, and, in 2014, the wealth disparity increased to about $294,000.

Education

Households headed by someone with at least some college education generally had higher median incomes and more wealth than households headed by someone who did not attend college. Income disparities existed across the mid-career earnings distribution from 1992 to 2014. For example, we estimated that, in 1992, households in the top 20 percent with heads who attended college had about $44,000 more in income compared to households in the top 20 percent with heads who did not attend college. We estimated that, in 2014, households with heads in the top 20 percent who had attended college still had greater income, though the difference was smaller (about $25,000). Similarly, households in the bottom 20 percent with heads who had attended some college had more income than households with heads who had not. For example, in 1992, households with heads who had attended some college had about $31,000 more in income than households with heads who had not, and that difference decreased to $9,700 in 2014. Wealth disparities generally existed across the mid-career earnings distribution over time. For example, in 1992, households in the top 20 percent with heads who had attended some college had about $166,000 more in estimated median wealth compared to households in the top 20 percent with heads who did not attend college. In 2014, the difference in estimated median wealth between these same groups was about $386,000. Similarly, households in the bottom 20 percent with heads who had attended some college had greater median wealth than households in the bottom 20 percent with heads who had not attended college. For example, we estimated that, in 1992, households in the bottom 20 percent with heads who attended college had about $176,000 more in wealth than households with heads who had not. In 2014, the difference in median wealth between these groups was about $120,000. Our findings are consistent with those of other researchers, who found that educational attainment was an important determinant of wealth at age 65, and that it was strongly correlated with wealth even after controlling for lifetime earnings.

Agency Comments

We provided a draft of this report to the Department of Labor, the Department of the Treasury, the Internal Revenue Service, and the Social Security Administration for review and comment.
While none of the agencies provided official comments, the Department of Labor and Social Security Administration provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, the Secretary of the Treasury, the Commissioner of the Internal Revenue Service, and the Commissioner of the Social Security Administration. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Appendix I: Objectives, Scope, and Methodology

Overview

To determine how growing disparities in the distributions of income and wealth affect older Americans, we examined (1) the distributions of income and wealth among all older Americans over time; (2) the association between income, wealth, and longevity among older Americans; and (3) how the distributions of income and wealth have changed over time for a cohort of individuals as they aged. This appendix provides a detailed account of the data sources used to answer these questions and the analyses we conducted. The appendix is organized into three sections. Section I describes how we reviewed literature relevant to this report's objectives and provides information on the interviews we conducted. Section II describes the information sources and methods we used to analyze the distributions of income and wealth among all older Americans over time. Section III describes the information sources and methods we used to analyze how income and wealth among older Americans are associated with longevity, and how the distributions of income and wealth changed as a cohort of individuals aged. For the purposes of our analysis, we defined wealth to be a household's net worth—that is, total assets minus total debt. Net worth is a measure often used by researchers studying retirement security. Older Americans may have other future retirement resources, such as the present value of future income expected from defined benefit (DB) pension plans and Social Security.

Section I: Literature Review and Interviews

We supplemented our data analysis with a literature review and interviewed researchers to identify appropriate background information and context. We had two primary methods for identifying literature to include in our literature review: a snowball technique and a database search. To apply the snowball technique, we first identified possible relevant literature by examining the studies cited in our 2016 report examining the relationship between Social Security benefits and longevity. Then we reviewed the citations included in those studies. Finally, we reviewed relevant literature included in a weekly report called "Current Awareness in Aging Report," produced by the Center for Demography of Health and Aging at the University of Wisconsin-Madison, which includes a comprehensive list of recently issued materials relating to aging, including retirement security.
We compiled relevant citations across these sources and analyzed abstracts to identify working papers, journal articles, and reports that required further review. We identified reports for inclusion based on whether they provided insight into the following relationships:

As older Americans age, the relationship between wealth and expenses, and income and wealth.

For older Americans, how income and/or wealth inequality are (1) related to the topics below and (2) how, if at all, these relationships have changed over time or generations:

Rural vs. urban locations

Role of inequality (income, wealth, longevity) in reliance on federal income security programs among older Americans

To complement the snowball technique search, we also conducted a database search. We searched the Proquest database EconLit for scholarly journals and working papers for a 5-year span, from 2013 through 2018, that matched keywords related to our criteria for relevance. We took additional steps to enhance the robustness of our results. We solicited recommendations for literature from GAO stakeholders, agency officials, and contacts at the Congressional Research Service and Congressional Budget Office and added these recommendations to our list for consideration. During interviews with experts, we discussed contrary opinions and findings in the research and requested full citations as needed. We also attended retirement security events and reviewed news clippings for references to contrary opinions or findings in breaking research. Finally, an economist reviewed the methods and reliability of all studies. We included 26 out of 34 articles from the snowball technique search and expert recommendations and an additional 3 out of 160 articles from the database search (the database search identified some of the same articles as the snowball technique search). These 29 articles, which best matched our criteria for inclusion, were the articles we reviewed. We also identified nine researchers whose work was relevant to our objectives and interviewed them in order to identify researchers' explanations and theories about the relationships between inequality and longevity, health status, gender, education, and race and ethnicity. To select these researchers, we considered their areas of expertise; whether they worked for a federal agency, university, or other type of organization; and their ideological perspective, if known.

Section II: Analyzing Trends over Time in the Distribution of Income and Wealth among All Older Americans

Data Sources

This section describes the two main data sources we used to analyze trends in the distribution of income and wealth among all older Americans: the Survey of Consumer Finances (SCF) and the Financial Accounts of the United States (FA).

Survey of Consumer Finances

To examine the distributions of income and wealth among all older Americans over time, we used 1989 through 2016 data from the SCF. The SCF is a triennial survey of household assets and income from the Board of Governors of the Federal Reserve System (Federal Reserve) and asks households detailed questions about their income—including pension benefits—and assets—including amounts in retirement accounts. The survey also asks about debt and demographic information, among other topics. A different sample of households was used for each year in our analysis. These data allow for comparison of the experiences of same-age households at different points in time. The SCF is conducted using a dual-frame sample design.
One part of the design is a standard, multistage area-probability design, while the second part is a special over-sample of relatively wealthy households. This is done in order to accurately capture financial information about the population at large as well as characteristics specific to the relatively wealthy. The two parts of the sample are adjusted for sample nonresponse and combined using weights to make estimates from the survey data nationally representative of households overall. In addition, the SCF excludes people included in the Forbes magazine list of the 400 wealthiest people in the United States. Furthermore, the SCF omits observations that have net worth at least equal to the minimum level needed to qualify for the Forbes list. For example, the 2016 SCF surveyed 6,254 U.S. households and removed six households that had net worth equal to at least the minimum level needed to qualify for the 2016 Forbes list. Over time, the number of households interviewed has expanded (see table 2). We found the SCF to be reliable for the purposes of our report. While the SCF is a widely used federal data source, we conducted an assessment to ensure its reliability. Specifically, we reviewed related documentation and internal controls, spoke with agency officials, and conducted electronic testing. When we learned that particular estimates were not reliable for our purposes, or had sample sizes too small to produce reliable estimates, we did not use them. Nonetheless, the SCF and other surveys that are based on self-reported data are subject to nonsampling error, including the inability to get information about all sample cases; difficulties of definition; differences in the interpretation of questions; and errors made in collecting, recording, coding, and processing data. These nonsampling errors can influence the accuracy of information presented in the report, although the magnitude of their effect is not known. Estimates from the SCF are also subject to some sampling error since, for any given year, the sample is one of a large number of random samples that might have been drawn. Since each possible sample could have provided different estimates, we express our confidence in the precision of the sample results as 95 percent confidence intervals. These intervals would contain the actual population values for 95 percent of the samples that could have been drawn. In this report, we present 95 percent confidence intervals alongside the numerical estimates that were produced using SCF data. All financial figures using the SCF data are in 2016 dollars.

Financial Accounts of the United States

We supplemented the SCF data with data from the Financial Accounts of the United States (FA). The FA include data on transactions and levels of financial assets and liabilities, by sector and financial instrument; balance sheets, including changes in net worth, for households and nonprofit organizations, nonfinancial corporate businesses, and nonfinancial noncorporate businesses; Integrated Macroeconomic Accounts; and additional supplemental detail. These data provide an aggregate estimate of DB pension entitlements (or liabilities, as the FA refer to them), which can be apportioned across SCF respondents (see detailed explanation below).

Cross-Sectional Analysis

This section describes the analysis that we conducted using the SCF and FA to analyze trends in income and wealth over time for all older Americans.
Key Definitions and Assumptions

We chose to look at household-level resources because couples may pool their economic resources and the SCF asks some of its questions about resources for households. The Federal Reserve provides the underlying programming code for creating the variables presented in its publications. Where possible, we relied on variable definitions used for Federal Reserve publications using the SCF. For example, we used the race or ethnicity of the household head, defined as either 1) white, non-Hispanic or 2) non-white or Hispanic (which we renamed "minority" for ease of reporting). We also relied on the Federal Reserve's definitions for net worth, which we refer to as "wealth" in this report; retirement account balances (DC plans and IRAs); income from withdrawals from retirement accounts; and income from Social Security, pension, or disability benefits or annuities. In other cases, we developed our own variables, based on the raw variables described in the SCF codebooks. For example:

Older households: households in which the survey respondent or any spouse or partner was aged 55 or older.

Household income: total income, estimated by adding up all of the individual income components created by the Federal Reserve.

Other assets: any other assets that are not retirement accounts, the present value of future income from Social Security or DB pensions, or the value of the household's primary residence (if one is owned) or vehicles.

Other income: any other income coming from a source besides wages; withdrawals from retirement accounts; and Social Security, pension, or disability benefits or annuities.

Analysis Goals

The SCF is a cross-sectional survey, meaning it presents a nationally representative "snapshot" for each survey wave rather than following the same households over time. To create an income distribution, we rank-ordered older households by household income and then broke them into five even groups, or quintiles. The "top" refers to the top 20 percent of households in this ranking while the "bottom" refers to the bottom 20 percent of households. We repeated this exercise for each year of the data. While the households included in the SCF are different every survey year, we were able to examine how the distribution of income and wealth across older households changed over time. We used the same method to create wealth distributions, except we rank-ordered households by net worth, one measure of wealth, instead of income. To better understand increases in the top quintile, we also estimated the amount of income and wealth held among the top 10 percent, 5 percent, and 1 percent of households, when possible, for each survey year. We also created distributions of income and wealth for other subcategories of older households. As with the analysis for all older households, we broke the subcategory population into quintiles. We estimated distributions of income and wealth for the following subcategories for each survey year:

Households in which the head was white and non-Hispanic

Households in which the head was a minority

Households in which the head attended at least some college

Households in which the head did not attend college

For all older households, we also estimated the percentage of households in each survey year that had 1) wage income, 2) income from retirement account withdrawals, or 3) income from Social Security, pension, or disability benefits or annuities, as well as the amount of income provided by each source.
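To make the quintile construction concrete, the following is a minimal Python sketch of one way to split weighted survey households into five even groups. The data frame and column names (income and a survey weight) are hypothetical placeholders; this illustrates the general approach rather than the code we used.

```python
import numpy as np
import pandas as pd

def assign_quintiles(df, value_col, weight_col):
    """Rank households by value_col and split them into five groups,
    each holding roughly 20 percent of the weighted population."""
    out = df.sort_values(value_col).copy()
    # Cumulative share of the survey weight, running over (0, 1]
    cum_share = out[weight_col].cumsum() / out[weight_col].sum()
    # Map cumulative shares onto quintiles 1 (bottom) through 5 (top)
    out["quintile"] = np.minimum(np.ceil(cum_share * 5), 5).astype(int)
    return out

# Hypothetical example: a few households with income and survey weights
households = pd.DataFrame({
    "income": [18_000, 28_000, 57_000, 121_000, 240_000],
    "weight": [1.2, 0.9, 1.0, 1.1, 0.8],
})
print(assign_quintiles(households, "income", "weight"))
```

Running the same routine on net worth instead of income would yield the wealth quintiles, and subsetting the data frame by the head's race and ethnicity or education first would yield the subcategory distributions described above.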
Similarly, we estimated the percentage of older households that had a retirement account (DC or IRA), owned their home, or owned a vehicle, as well as the value of each of these assets. To better understand the importance of these asset types across the wealth distribution, we also estimated the percentage of households that had a retirement account (DC or IRA) with a balance of at least $100; owned a vehicle worth at least $100; or had home equity of at least $100. We also analyzed the percentage of households with retirement account balances by bands of $50,000. Additional sensitivity analysis included comparing a household's location in the income distribution to its location in the wealth distribution for each survey year. We found that the vast majority of households were in the same quintile of the income and wealth distributions or were only one quintile apart. Very few households were in the bottom quintile for income and top quintile for wealth or vice-versa. From 1989 through 2016, the percentage of households that fit these two scenarios was always under 1 percent.

Estimating the Present Value of Social Security and Defined Benefit Pension Benefits

The literature on retirement adequacy emphasizes the importance of including measures of the value of future DB and Social Security benefits in measures of the wealth distribution. However, the SCF does not provide estimates of the present value of expected future DB and Social Security benefits. As a result, we did a separate analysis to estimate the present value of future income from DB and Social Security benefits using the SCF and FA data from the Federal Reserve, as well as life expectancy data from the Social Security Administration (SSA). In general, our analysis was done for respondents and spouses/partners separately at the individual level, and estimates were combined to create household totals. We generally followed methods presented in a 2016 paper entitled "Is the U.S. Retirement System Contributing to Rising Wealth Inequality?" by Devlin-Foltz, Henriques, and Sabelhaus (see bibliography for the full citation), but made some changes in the assumptions given our specific focus on older Americans. In order to estimate the present value of income expected from DB plans at the household level, we started with the aggregate value of accrued DB benefits by survey year from the FA. Following Devlin-Foltz et al. (2016), we calculated aggregate DB pension entitlements as the portion of total pension entitlements not found in DC assets and annuities held in IRAs at life insurance companies. Then, we allocated aggregate DB entitlements across households in a series of steps, ultimately splitting the aggregate DB entitlements between SCF respondents who were already receiving benefits and those who were covered by DB plans but were not yet receiving benefits. In the first step of the allocation, we estimated the present value of promised DB benefits for current DB beneficiaries. The present value of promised DB benefits for those already receiving benefits was based on the reported values for DB benefits in the SCF, life tables from SSA, and an assumed 3 percent real discount rate. After solving for the present value of promised DB benefits for those currently receiving benefits, we subtracted the total amount of DB benefits promised to current DB beneficiaries from the aggregate DB assets to solve for the share to be distributed to future DB beneficiaries.
By doing this, we effectively assumed that current DB beneficiaries had first claim to DB pension assets. We allocated the remaining DB assets to future DB recipients by assigning each future DB beneficiary a share of the amount of the residual of aggregate DB entitlements (left over after current beneficiaries claimed their share) based on their earnings, the number of years they participated in a DB plan, their expected retirement age as stated in the SCF, and a 3 percent real discount rate. We also estimated the present value of expected future Social Security benefits for current and future Social Security beneficiaries, using information from the SCF on Social Security benefits for current Social Security beneficiaries and earnings information for future Social Security beneficiaries. With respect to current Social Security beneficiaries, we solved for the present value of Social Security benefits using annual Social Security benefits as reported in the SCF, life tables from SSA, and an assumed 3 percent real discount rate, consistent with our DB analysis. For future Social Security beneficiaries, we used current earnings or earnings from the longest job held as reported in the SCF as the basis for the Social Security benefit. Given that our analysis focused on older Americans, we assumed that future Social Security beneficiaries were close enough to retirement that the earnings information in the SCF provided a reasonable proxy for lifetime earnings. We created a monthly average of these earnings, which we used as a simplified version of the average indexed monthly earnings (AIME). We then applied the benefit formula's bend point thresholds to compute something similar to the primary insurance amount (PIA), assigning 90 percent of earnings up to the first bend point, 32 percent of earnings between the first and second bend points, and 15 percent of earnings between the second bend point and the monthly taxable maximum. We assumed everyone who was not yet receiving benefits but would in the future started collecting benefits at 62 or at their current age if older than 62. We applied benefit rules associated with each individual's birth year to the PIA as set by the Social Security Administration and made adjustments for spousal benefits. We estimated the present value of Social Security benefits for future beneficiaries using the estimated PIA, a retirement age of 62 or their current age if older than 62 and not yet receiving benefits, life tables from SSA, and a 3 percent real discount rate. While adding these present value estimates to wealth better captures the totality of resources available to older Americans, our estimates of the present value of income from future DB and Social Security benefits are subject to uncertainty and should be interpreted with caution. For example, our estimates of the present value of DB benefits for future beneficiaries are not based on SCF respondent-reported expected DB benefits. Instead, we used the aggregate DB entitlements in the FA data and allocated that amount across households with DB plans. We followed this method, in part, because it appears that workers do not have a good understanding of their pension plan parameters and confuse DB benefits with other types of payouts in the SCF data, according to Devlin-Foltz et al. (2016). Moreover, our estimates of the present value of Social Security benefits for future beneficiaries are not based on lifetime earnings since the SCF does not collect all of the inputs needed to project Social Security benefits for respondent-families.
However, it is possible to get a sense of the distributional impact of Social Security by focusing on those near retirement at certain points in time. A general limitation of our analysis of the present value of future income from DB pensions and Social Security is that our estimates rely on assumptions about life expectancy, real discount rates, and retirement ages, which are unlikely to hold for all households. As a result, we conducted some sensitivity analyses, particularly with respect to real discount rates and retirement ages. For both the DB and Social Security sensitivity analyses, we varied the real discount rate given the uncertainty about future interest rates. In general, higher discount rates result in lower estimated present values, so our estimates of the present value of future DB and Social Security benefits are sensitive to the assumptions about the discount rate. This is especially important in the DB analysis, as changing the assumed discount rate affects the allocation of aggregate DB assets between current and future DB beneficiaries. For example, using a 2 percent real discount rate, as opposed to a 3 percent real discount rate, yielded a higher allocation of aggregate DB assets for current beneficiaries compared to our baseline estimates. Using a 4 percent real discount rate, as opposed to 3 percent, generated a higher allocation of aggregate DB assets for future DB beneficiaries relative to our baseline estimates. For future beneficiaries, we had to make assumptions regarding the respondent's and spouse/partner's retirement ages. For the DB analysis, we used the SCF-reported expected retirement age, given that our focus is older Americans, and older people not yet claiming benefits are relatively close to retirement. As a sensitivity check, we also did the analysis assuming that all future DB beneficiaries retired at 62 and 65. Assuming different retirement ages can change the amount of the share of aggregate DB assets allocated to individual future DB beneficiaries in the SCF. For the Social Security analysis, we generally assumed that future Social Security beneficiaries retired at 62, in part because a sizeable proportion of people claim Social Security at 62, despite increases in the full retirement age. In addition, according to Devlin-Foltz et al. (2016), assuming a low retirement age decreases the present value of benefits directly if the reductions for early retirement are not actuarially fair, and indirectly if the individual were to keep working at a high enough income to increase their average indexed monthly earnings. Agency officials raised technical concerns about choosing age 62. It is possible that setting the retirement age at 62 may overstate the present value of future Social Security benefits, depending on various factors including interest rates and mortality. We considered using alternative retirement ages and do not believe that choosing a different retirement age for those not yet retired would substantively change our findings. Alternative methods to using present value estimates of future income expected from Social Security and DB pensions for analyzing distributional disparities in retirement security exist. For example, one option would be to evaluate how future monthly income from Social Security and DB pensions would be expected to affect retirement security, perhaps by assessing how the standard of living for workers would be expected to change.
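To illustrate the simplified benefit and present value calculations described above, here is a minimal Python sketch. The bend points, survival probabilities, and earnings amounts are hypothetical placeholders; in the actual analysis, the bend points would come from SSA's published values for the relevant year and the survival probabilities from SSA life tables.

```python
def simplified_pia(aime, bend1, bend2, taxable_max_monthly):
    """Apply the 90/32/15 percent replacement factors to a monthly
    earnings average, mirroring the benefit formula described above."""
    capped = min(aime, taxable_max_monthly)
    pia = 0.90 * min(capped, bend1)
    if capped > bend1:
        pia += 0.32 * (min(capped, bend2) - bend1)
    if capped > bend2:
        pia += 0.15 * (capped - bend2)
    return pia

def present_value(annual_benefit, survival_probs, real_rate=0.03):
    """Discount a real annual benefit stream, weighting each future
    year's payment by the probability of surviving to receive it.
    survival_probs[t] is the chance of being alive t years from now."""
    return sum(
        annual_benefit * s / (1.0 + real_rate) ** t
        for t, s in enumerate(survival_probs)
    )

# Hypothetical inputs: placeholder bend points and a short survival curve
pia = simplified_pia(aime=3_000, bend1=900, bend2=5_400,
                     taxable_max_monthly=10_000)
pv = present_value(annual_benefit=12 * pia,
                   survival_probs=[1.0, 0.98, 0.96, 0.93, 0.90])
print(round(pia), round(pv))
```

The same survival-weighted discounting, with the 3 percent real rate as the default, applies to the DB benefit streams described above; only the source of the annual benefit amount differs.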
Additionally, disparities in health in adulthood could contribute to subsequent disparities in income and wealth at older ages. However, for our analysis, it was useful to estimate the present value of Social Security and DB pensions so we could compare the value of these sources to retirement account balances. In addition, the SCF does not include sufficient data on health to consider its role in income and wealth disparities for this part of our analysis.

Section III: Analyzing Income and Wealth: How It Changes as Older Americans Age and Associations with Longevity

This section describes the analysis we conducted to determine how the income and wealth of a specific cohort of older Americans were associated with longevity, and how the distributions of income and wealth changed as this cohort aged. For these analyses, we used data from the Health and Retirement Study (HRS), described below.

Health and Retirement Study

We analyzed data collected through the HRS, a nationally representative survey of older Americans. The HRS is a longitudinal survey, meaning that it follows the same individuals and households over the course of the study, allowing us to determine how households' income and wealth changed over time. HRS is a project of the University of Michigan's Institute for Social Research that is funded through a cooperative agreement with the National Institute on Aging (U01AG009740). It collects information on individuals over age 50 and, among other things, contains detailed data on their education, marital status, work history, health, assets, and income.

Data Availability

When the HRS began in 1992, it consisted of a representative sample of Americans then aged 51-61, which is called the original or core HRS cohort. Since then, several additional cohorts of individuals have been added to the data to maintain representation of the older population, beginning in 1993 with the Asset and Health Dynamics Among the Oldest Old (AHEAD) cohort. Currently, a new cohort of participants aged 51-56 is added to the study every 6 years (see table 3). Respondents are surveyed every 2 years. We analyzed the original HRS cohort for our examination of the associations among longevity, income, wealth, and other factors, and for our analysis of how income and assets changed as the original HRS cohort aged. We also analyzed how income and assets changed for the War Babies cohort, which includes individuals born from 1942 through 1947. Figures from this analysis are presented in appendix VI. We used three forms of HRS data:

Public-Use HRS data: Most HRS datasets are available for download from the HRS website. For each wave, HRS makes an early release version of the data available prior to the final version. As of June 2019, final release files are available for each wave of the survey from 1992 through 2014, and the 2016 early release file is available.

RAND HRS data: Researchers at RAND have created a more user-friendly version of the public-use HRS data (see below for more details). As of June 2019, RAND files are available through the 2014 final release data.

Restricted-use HRS data: Some data resources in the HRS are restricted, meaning they are available only under special agreement because they contain sensitive and/or confidential information. For this report, we used restricted data containing earnings records from SSA.
We conducted our analysis of the restricted-use files via a virtual desktop environment data enclave made available by the University of Michigan's Center on the Demography of Aging (MiCDA).

Data Processing

RAND, a research organization, cleans and processes the HRS data to create a user-friendly longitudinal dataset that has consistent and intuitive naming conventions and model-based imputations for missing wealth and income data. In most cases, we used the RAND version of the HRS variables due to the greater ease of use and the additional data cleaning already performed. RAND income and wealth variables were given in nominal dollars. We adjusted these variables to real 2016 dollars using the Consumer Price Index for All Urban Consumers. To calculate mortality, we supplemented the RAND files with information from the early release 2016 public use file to the extent that it provided additional information on mortality through 2014. See the data reliability section below for further discussion of the mortality data.

Data Reliability

We found the HRS variables presented in this report to be sufficiently reliable. We conducted a data reliability assessment of selected variables by conducting electronic data tests, reviewing documentation on the dataset, and reviewing related internal controls. When we learned that particular variables were not sufficiently reliable, we did not use them in our analysis. We selected our analyses to ensure there was sufficient sample size to produce reliable estimates. We produced variance estimates using a statistical technique chosen to account for the sample design of the HRS and adjusted the sample weights to account for potential bias due to the linkage to SSA administrative data, as described below. We identified additional limitations due to the survey responses being self-reported. As such, they are subject to the respondent's possible errors in reporting specific financial amounts. We measured mortality from 1992 through 2014. Mortality data in the HRS, including an indicator for a respondent's death in a given survey year and month and year of death, come from matches with the National Death Index or follow-up interviews with surviving family members. There is complete date of death (specifically month and year of death) information for nearly everyone who died prior to 2012. However, for deaths since 2012, the HRS data linked to the National Death Index were not available, which likely led to more deaths without information on month and year of death. Since the 2012 and 2014 survey years, there has been time to gather death date information from follow-up interviews with families, and less than 10 percent of those who died between the 2012 and 2014 survey years had incomplete data on month and year of death. However, in the 2016 survey year early release public use file, we found that a higher proportion of those who died did not have death dates, likely due to the lack of linkage with the National Death Index and a lack of time to follow up with families since the 2016 survey year to find out when survey participants died. As a result, we determined that we had reliable data on mortality through 2014.

Weight Adjustments

HRS contains restricted data drawn from SSA administrative sources for participants who have provided explicit consent to link their responses to administrative data and subsequently were successfully linked with the administrative data.
It is possible that respondents who were linked may differ in systematic ways from respondents who were not linked, which would affect the generalizability of estimates derived solely from the subset of participants who were linked. The survey weights provided with HRS data account for the complexity of the survey design (e.g., oversamples of minorities and Floridians), nonresponse, and post-stratification adjustments for demographic distributions, but do not adjust for the administrative linkage. There is evidence that in at least some waves of the survey, there are modest but statistically significant differences in linkage rates on characteristics including race, income, and wealth. One technique to address this potential source of bias is to adjust the sample weights used in variance estimation for observed differences between those with and without linked administrative data. Kapteyn et al. suggest a technique for computing inverse probability weights to account for these differences. Following this technique, HRS has computed a set of weights that account for consent to SSA administrative linkage, but only for the 1992, 1998, and 2004 survey waves. However, this report needed adjusted household weights for all 12 waves and adjusted respondent weights for wave 1. We opted to address the potential non-linkage bias using a logistic model-based propensity score adjustment, rather than a weighting class adjustment, for several reasons. First, we had the benefit of many variables with which to model the propensity of non-linkage. Second, weighting class adjustments, which involve creating mutually exclusive classes based on the variables associated with non-linkage, were not feasible because of the large number of variables we included in the adjustment. The number of respondents per cell would be too small. Third, the propensity score adjustment allows us to consider many variables at the same time. Finally, the propensity score adjustment allows us to rank respondents, rather than assume that the characteristics used in a weighting class adjustment would perfectly predict non-linkage. We compared estimates and standard errors obtained using the original weights to the non-linkage adjusted weights. The adjusted weights changed estimates and their standard errors by generally small amounts, but did not affect observed trends in this report. For instance, the median absolute value of the change was less than 1 percent for estimates of median household income for individuals by mid-career earnings quintiles from 1992 to 2014. The median absolute value of the change was 5.7 percent for the standard errors of those estimates.

Variance Estimation

We used the balanced repeated replication method to estimate standard errors for the income and wealth statistics we reported using HRS because the income and wealth statistics were quantiles (i.e., medians). The standard Taylor series (Woodruff) variance estimation method assumes that quantiles can be expressed as a smooth function in the sample and population. However, quantile functions are not considered smooth. After ruling out the Taylor series method, we explored replication methods such as jackknife, bootstrap, and balanced repeated replication. Of those, balanced repeated replication is best suited to the HRS design of two primary sampling units per stratum. The Fay adjustment stabilizes the estimates across strata when using the standard balanced repeated replication method. This adjustment is particularly relevant for smaller samples.
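As an illustration of these two adjustments, the following Python sketch shows one way to compute propensity-adjusted weights with a logistic model and to apply the Fay-adjusted balanced repeated replication variance formula. The column names, covariates, Fay coefficient, and replicate estimates are hypothetical placeholders, not our actual specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_adjusted_weights(df, weight_col, linked_col, covariates):
    """Inverse probability adjustment for administrative linkage:
    model each respondent's chance of being linked, then divide the
    survey weight by that predicted chance so that linked respondents
    also stand in for similar respondents who were not linked."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[linked_col])
    p_linked = model.predict_proba(df[covariates])[:, 1]
    return df[weight_col] / p_linked

def fay_brr_variance(full_estimate, replicate_estimates, fay=0.5):
    """Fay-adjusted BRR variance: the average squared deviation of the
    replicate estimates from the full-sample estimate, rescaled by
    1 / (1 - fay)**2 to undo the dampening of the replicate weights."""
    reps = np.asarray(replicate_estimates, dtype=float)
    return np.sum((reps - full_estimate) ** 2) / (len(reps) * (1.0 - fay) ** 2)

# Hypothetical example: a median computed on the full sample and on
# replicate samples in which, within each stratum, one of the two
# PSUs has its weight scaled by (2 - fay) and the other by fay
full_median = 66_000
replicate_medians = [65_200, 67_100, 66_400, 64_900, 66_800, 65_700]
se = fay_brr_variance(full_median, replicate_medians, fay=0.5) ** 0.5
print(f"standard error: {se:,.0f}")
```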
The literature we reviewed also suggested that the jackknife produces a poor estimate of the variance of quantiles (Lohr 2009; Judkins 1990) and that the bootstrap requires more computations than balanced repeated replication.

Mid-Career Household Earnings Measure Construction

For our analyses, we wanted to classify HRS respondents into income groupings based on a relatively stable measure of income that uses multiple years of administrative data, both to reduce measurement error in self-reported survey data and to reduce the chance of basing the income grouping on a single year of unusually low or high income. Several limitations prevent us from classifying households based on their full lifetime income from all sources. HRS does not contain administrative data on income sources besides earnings and Social Security benefits. Moreover, for years before 1978, the administrative earnings records are only available for earnings covered by Social Security and below the taxable maximum. Finally, not all sources of earnings are covered by Social Security. While around 96 percent of employment is currently covered by Social Security, this has not always been the case. In particular, successive expansions of coverage in the 1950s and 1960s greatly increased the proportion of the workforce covered by Social Security, such that relying on SSA earnings records going back to 1951 would underestimate the earnings of large numbers of older HRS participants.

Thus, for our analysis, we constructed earnings groupings based on a measure of “mid-career” earnings: a household’s average annual reported earnings when the household head was age 41 to age 50. Earnings tend to peak (and remain relatively stable) for workers in their mid-40s through their early 50s. We begin measuring earnings at age 41 to avoid using data prior to expansions of Social Security coverage and to minimize our reliance on imputed earnings above the taxable maximum. In the early years of the study, HRS sought retrospective consent for administrative data linkages. As a result, some participants who only provided consent for the administrative linkage during their initial interview, and did not provide consent in subsequent interviews, did not have earnings records after age 50. Therefore, we set age 50 as the upper bound for our measure of mid-career earnings.

Analyzing the Association Among Income, Wealth, Longevity, and Other Variables

Analysis Goals

Our goal was to determine how income, wealth, and other demographic and health-related factors are associated with the longevity of older Americans over age 50 in the original HRS cohort. We measured the proportion of original HRS participants still alive at the end of the survey to examine how longevity varied across the income and wealth distributions, as well as across different demographic and health-related variables, including race, educational attainment, gender, and self-reported health status at the beginning of the survey.

Survival Analysis

In order to examine these relationships, we used data from the original HRS cohort to measure deaths over a maximum of 22 years (1992 through 2014). Every 2 years, the HRS attempted to measure whether the original respondents were still alive, but these longevity data were incomplete because some of the original respondents declined to participate in later waves of the survey. Once these respondents left the survey, their actual longevity could not be followed.
This incomplete measurement of longevity is generally known as “censored data” in statistics. Special methods of “survival analysis” are required to avoid drawing inaccurate conclusions about actual longevity from this type of data, in which the analyst can only measure longevity up to a certain time before death. Survival analysis accounts for survey respondents with complete or incomplete longevity data. Without making this distinction, ordinary statistical methods, such as linear regression models of the observed longevities, would not include the correct sample of respondents when estimating the chance that a respondent would die at any time within the observation period. In addition, ordinary methods would incorrectly treat the longevities observed in the observation period as actual longevities, when some of them are the shorter, censored longevities observed before the respondents dropped out of the study. Survival analysis methods correct for this problem, in order to reliably estimate the chance of death by any given time in the observation period.

Most importantly, our analysis assumed that actual longevity during the observation period did not have a systematic relationship with whether the original HRS respondents continued to participate in the study, beyond the fact that a respondent who left the study died at some later time (“noninformative censoring”). In other words, participants with censored and actual longevities did not systematically differ in ways that affected longevity or the variables associated with it. We believe this assumption to be reasonable for the purpose of our analysis for two reasons. First, a small percentage (8 percent) of the original respondents dropped out of the survey, so the impact of any longevity differences among the population who dropped out would likely have been small. Second, while some baseline characteristics of respondents do appear correlated with non-response over time, the population that dropped out of the study does not appear to vary significantly from those completing each wave, except for race and ethnicity.

In our survival analysis, the dependent variable was composed of two parts: the time in months to death and whether death was observed during the survey period. In general, we used continuous time survival models, including Kaplan-Meier and Cox proportional hazards regression models, to estimate survival functions, which estimate the probability of surviving (or dying) up to the end of the survey period, and hazard functions, which estimate the probability of death, per time unit, given that an individual has survived up to that point in time. We used the Kaplan-Meier method to estimate survival probabilities as a function of time and to obtain univariate statistics on survival for different groups. For example, we estimated the percentage of survivors during the survey period across income and wealth quintiles. We also estimated survivorship across the demographic and health-related variables. Moreover, using the Cox proportional hazards regression models, we analyzed the relationships between income and longevity and between wealth and longevity, controlling for related demographic and health-related variables, as well as age at the beginning of the survey. These regressions allow the relationships between various characteristics and death to be described as hazard ratios.
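As an illustration of this approach, the following minimal sketch uses the open-source lifelines Python library to fit Kaplan-Meier and Cox proportional hazards models. The DataFrame df and its column names are hypothetical stand-ins for our analysis file, and for simplicity the sketch ignores survey weights and the replicate-based variance estimation described above.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# df: one row per respondent, with months = time to death or censoring
# and died = 1 if death was observed during 1992-2014, 0 if censored.

# Kaplan-Meier survival curves by mid-career earnings quintile.
km = KaplanMeierFitter()
for quintile, group in df.groupby("earnings_quintile"):
    km.fit(group["months"], event_observed=group["died"], label=f"Q{quintile}")
    # Estimated probability of surviving to the end of follow-up.
    print(quintile, km.survival_function_.iloc[-1, 0])

# Cox proportional hazards model, controlling for demographic and
# health-related covariates and age at the start of the survey.
cox = CoxPHFitter()
cox.fit(df[["months", "died", "earnings_quintile", "education_years",
            "female", "fair_poor_health_1992", "age_1992"]],
        duration_col="months", event_col="died")
print(cox.summary[["exp(coef)", "p"]])  # exp(coef) column holds the hazard ratios
```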
For example, hazard ratios that are statistically significant and greater than 1.00 indicate that individuals with those characteristics are more likely to die during the survey period compared to a reference group. Hazard ratios that are statistically significant and less than 1.00 indicate that individuals with those characteristics are less likely to die in the study period compared to a reference group.

We estimated survivorship among individuals with the following characteristics in combination: bottom income (earnings) quintile and no college; middle of the income (earnings) distribution (third quintile) and high school diploma or some college (excluding GED); and top of the income (earnings) distribution and college diploma. We then ran a subset of these scenarios using different combinations of self-reported health status for each of the three main scenarios. For example, we estimated survivorship among individuals in the bottom income (earnings) quintile, who had not attended college, and who reported being in fair or poor health in 1992.

Our results have limitations and should be interpreted with caution. Results from the survival analysis present correlations, not causal estimates. Moreover, while our main analysis includes self-reported health status at the beginning of the study period, we also excluded this variable as a sensitivity check, given the interconnectedness of income, wealth, and health, and the conclusions were similar. Furthermore, due to limitations with the mortality data in later years of the HRS, we did not have specific months and years of death for 60 respondents who, according to death indicators in the HRS interview status variables, died during the observation period. As a result, we imputed their death dates based on the survey year in which the HRS interview status questions indicated they died. While death is continuous in the sense that it can happen to any person at any time, we only observe death within a given month for those with death dates in the data, and only within a year for those whose death information we gathered from the interview status variables. As a sensitivity check, we redid the analysis using survival information at the person-year level and discrete-time survival analysis techniques and found similar results.

Analyzing How Income and Wealth Changed as Older Americans Aged

This section describes how we used the HRS to determine how the distributions of income and wealth changed as older Americans in the original HRS cohort aged.

Key Definitions and Assumptions

We focused this analysis on the original HRS cohort (born 1931-1941). This cohort entered the study in 1992 at ages 51-61 and had reached their 70s or early 80s by 2014, allowing us to analyze how income and assets changed as these households progressed through retirement. We conducted our analysis and reported results at the household level because couples may pool financial resources or co-own assets. Also, RAND HRS variables on income and wealth are presented at the household level. When necessary, we combined respondent and spouse or partner level variables we used from the public-use file in order to obtain household-level variables. We restricted this analysis to survey respondents (“household heads”), or any spouses or partners, who were still alive in 2014 to ensure we followed the same group of people throughout our analysis. We grouped households into five earnings groups based on their mid-career earnings, as described above.
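To make the grouping concrete, the following is a minimal sketch of the mid-career earnings measure and the inflation adjustment described earlier. The column names and the CPI-U index mapping are hypothetical placeholders, and for simplicity the sketch cuts unweighted quintiles.

```python
import pandas as pd

def to_real_2016(nominal: pd.Series, year: pd.Series, cpi_u: dict) -> pd.Series:
    """Convert nominal dollars to 2016 dollars using CPI-U index levels,
    where cpi_u maps each calendar year to its index value."""
    return nominal * (cpi_u[2016] / year.map(cpi_u))

def mid_career_quintiles(earnings: pd.DataFrame, cpi_u: dict) -> pd.Series:
    """earnings: one row per household per calendar year, with columns
    household_id, head_age, year, and nominal_earnings."""
    window = earnings[earnings["head_age"].between(41, 50)].copy()
    window["real_earnings"] = to_real_2016(
        window["nominal_earnings"], window["year"], cpi_u)
    # Average real earnings over the years the head was age 41 to 50.
    mid_career = window.groupby("household_id")["real_earnings"].mean()
    # Group 1 = bottom fifth of mid-career earnings; group 5 = top fifth.
    return pd.qcut(mid_career, q=5, labels=[1, 2, 3, 4, 5])
```

A household’s earnings group is then held fixed across all survey waves, so the same households are followed within each group over time.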
Analysis Goals

Our primary goal was to examine how the distribution of income and wealth changed over time for households in the original HRS cohort, based on their mid-career earnings groups. We also examined how specific sources of income and wealth changed over time. In addition, we wanted to determine how these trends varied based on household demographic characteristics, including race and ethnicity and education level, without attempting to ascribe causality.

Our analysis included survey respondents (heads of households) or their spouses or partners who responded to the survey in 1992 and were still alive and responded in 2014, which is the most recent year for which the data are complete. The heads of households we analyzed were from the original HRS cohort and were born in 1931 to 1941. If neither the head of household nor the spouse or partner interviewed in 1992 was still alive in 2014, their household was not included in our sample.

To examine these changes, we estimated median levels of household wealth and income every 2 years for each earnings group, as well as median levels for specific sources of income and wealth. We estimated the percentage changes and absolute changes in median wealth and income for each earnings group from 1992 through 2014 in order to determine whether income or wealth levels increased or decreased over time. For specific sources of income and wealth, we estimated medians for all households in each earnings group as well as for only those households that reported having the specific source of income or wealth. For example, we determined the median home equity for all households in each earnings group as well as the median home equity for only those households with home equity in each earnings group. Finally, we calculated the percent of our sample having each type of wealth and income (e.g., home equity, Social Security benefits) for each year in the data. As a sensitivity check, we also analyzed how total assets and income changed for the HRS’s “War Babies” cohort (born 1942-1947).

For this analysis, we report 99 percent confidence intervals alongside the percentage or other numerical estimates. We chose this level of confidence to account for the use of imputation in the RAND HRS data, in addition to the sampling error that using survey data introduces. All financial figures using the HRS data are in 2016 dollars.

Appendix II: Financial and Demographic Characteristics across the Wealth Distribution

This appendix compares the top 1 percent of the wealth distribution of older households to several other groups in this distribution: (1) the next 19 percent, (2) the top 20 percent, (3) the bottom 80 percent, and (4) the bottom 20 percent. These comparisons provide context for the financial security of the top 1 percent relative to other households at the top of the wealth distribution, the remainder of the wealth distribution, and households at the bottom of the distribution, respectively. To draw these comparisons, we used 2016 data from the Survey of Consumer Finances (SCF), a triennial, cross-sectional survey produced by the Board of Governors of the Federal Reserve System in which a different sample of households is surveyed each survey year. These data allow for comparison of the experiences of same-age households at different points in time. We chose to look at household-level resources because couples may pool their economic resources, and the SCF asks some of its questions about resources for households.
We conducted our analysis for older households, which were defined as those in which the household head or any spouse or partner was age 55 or older. We defined wealth as net worth, or assets minus debt. Because the sample size for the top 1 percent is small, we presented dollar values rounded to thousands of 2016 dollars.

[Tables not reproduced: for each wealth group, the tables report estimated median values, in 2016 dollars, of financial resources (retirement accounts, home, vehicles, and all other assets) and of debt, with 95 percent confidence intervals, broken out by household type, by race and ethnicity of the household head, and by education level of the household head. Table notes state that debt includes mortgages, loans, lines of credit, and credit card balances after the last payment, and that "n/a" indicates insufficient data to produce a reliable estimate of median debt. The tables also report the percentage of household heads who were white, non-Hispanic in each group (reported values include 90, 91, 70, and 55 percent).]

Appendix III: Additional Data Tables

Appendix IV: Additional Survival Analysis Results

This appendix contains additional results from our survival analysis, as shown in the tables below.

Appendix V: 2014 Population in the Health and Retirement Study (HRS)

This appendix compares the demographic characteristics, as of 2014, of the HRS sample we used in our analysis.
Appendix VI: Estimated Income and Wealth for War Babies Cohort

This appendix contains estimates of income and wealth for households where the heads of households were born from 1942 through 1947. The Health and Retirement Study (HRS) refers to this cohort as the “War Babies” cohort.

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Michael Collins (Assistant Director), Jennifer Gregory (Analyst-in-Charge), Garrick Donnelly, Kathleen McQueeney, Chris Wickham, and Christopher Zbrozek made key contributions to this report. Also contributing to this report were Susan Aschoff, James Bennett, Deborah Bland, Melinda Bowman, Nisha Hazra, Kirsten Lauber, Jeffrey Miller, Oliver Richard, Amrita Sen, Joseph Silvestri, Jeff Tessin, Frank Todisco, Adam Wendel, and Sirin Yaemsiri.

Bibliography

Auerbach, Alan J., Laurence J. Kotlikoff, and Darryl R. Koehler. “U.S. Inequality, Fiscal Progressivity, and Work Disincentives: An Intragenerational Accounting.” NBER Working Paper, no. 22032 (2016).

Banerjee, Sudipto. “Asset Decumulation or Asset Preservation? What Guides Retirement Spending?” EBRI Issue Brief, no. 447 (2018).

Banerjee, Sudipto. “Income Composition, Income Trends and Income Shortfalls of Older Households.” EBRI Issue Brief, no. 383 (2013).

Bee, Adam and Joshua Mitchell. “Do Older Americans Have More Income Than We Think?” SESHD Working Paper, vol. 2017, no. 39 (2017).

Board of Governors of the Federal Reserve System. “Changes in U.S. Family Finances from 2013 to 2016: Evidence from the Survey of Consumer Finances.” Federal Reserve Bulletin, vol. 103, no. 3 (Washington, D.C.: September 2017).

Bosworth, Barry P. and Kan Zhang. “Evidence of Increasing Differential Mortality: A Comparison of the HRS and SIPP.” Center for Retirement Research Working Paper, vol. 2015, no. 13 (2015).

Bricker, Jesse, Alice M. Henriques, Jake A. Krimmel, and John E. Sabelhaus. “Measuring Income and Wealth at the Top Using Administrative and Survey Data.” Finance and Economics Discussion Series, vol. 2015, no. 030 (2015).

Brookings Economic Studies Program. Later Retirement, Inequality in Old Age, and the Growing Gap in Longevity between Rich and Poor. Washington, D.C.: Brookings Institution, 2016.

Burtless, Gary. “What Do Stock Market Fluctuations Mean for the Economy?” Brookings Opinions, February 23, 2018. Accessed March 14, 2019, https://www.brookings.edu/opinions/what-do-stock-market-fluctuations-mean-for-the-economy?/

Chen, Anqi, Alicia H. Munnell, and Geoffrey T. Sanzenbacher. “How Much Income Do Retirees Actually Have? Evaluating the Evidence from Five National Datasets.” Center for Retirement Research Working Paper, vol. 2018, no. 14 (2018).

Chetty, Raj, Michael Stepner, Sarah Abraham, Shelby Lin, Benjamin Scuderi, Nicholas Turner, Augustin Bergeron, and David Cutler. “The Association Between Income and Life Expectancy in the United States, 2001-2014.” JAMA, vol. 315, no. 16 (2016).

Choi, HwaJung and Robert F. Schoeni. “Health of Americans Who Must Work Longer to Reach Social Security Retirement Age.” Health Affairs, vol. 36, no. 10 (2017).

Congressional Budget Office. Measuring the Adequacy of Retirement Income: A Primer. 53191. Washington, D.C.: October 2017.

Congressional Budget Office. Trends in Family Wealth, 1989 to 2013. 51846. Washington, D.C.: August 2016.

Congressional Research Service. The U.S. Income Distribution: Trends and Issues. R44705. Washington, D.C.: December 8, 2016.
Devlin-Foltz, Sebastian, Alice Henriques, and John Sabelhaus. “Is the U.S. Retirement System Contributing to Rising Wealth Inequality?” The Russell Sage Foundation Journal of the Social Sciences, vol. 2, no. 6 (2016).

Federal Reserve Bank of St. Louis. “When the Stock Market Rises, Who Benefits?” On the Economy Blog, February 27, 2018. Accessed March 14, 2019, https://www.stlouisfed.org/on-the-economy/2018/february/when-stock-market-rises-who-benefits.

Ghilarducci, Teresa, Siavash Radpour, and Anthony Webb. “Employer Retirement Wealth Inequality: 1992 and 2010.” Paper presented at the annual meeting of the American Economic Association, Philadelphia, PA: January 6, 2018.

Goda, Gopi Shah, Shanthi Ramnath, John B. Shoven, and Sita Nataraj Slavov. “The Financial Feasibility of Delaying Social Security: Evidence from Administrative Tax Data.” Journal of Pension Economics and Finance, vol. 17, no. 4 (2018).

Gustman, Alan, Thomas Steinmeier, and Nahid Tabatabai. “Distributional Effects of Means Testing Social Security: Income Versus Wealth.” NBER Working Paper Series, no. 22424 (2016).

Johnson, Barry W. and Brian Raub. “How Much Longevity Can Money Buy? Estimating Mortality Rates for Wealthy Individuals.” Statistical Journal of the IAOS, vol. 34 (2018).

Johnson, Richard W. Delayed Retirement and the Growth in Income Inequality at Older Ages. Washington, D.C.: Urban Institute, 2018.

Kindig, David A. and Erika R. Cheng. “Even As Mortality Fell in Most US Counties, Female Mortality Nonetheless Rose in 42.8 Percent of Counties From 1992 to 2006.” Health Affairs, vol. 32, no. 3 (2013).

Olshansky, S. Jay, Toni Antonucci, Lisa Berkman, Robert H. Binstock, Axel Boersch-Supan, John T. Cacioppo, Bruce A. Carnes, Laura L. Carstensen, Linda P. Fried, Dana P. Goldman, James Jackson, Martin Kohli, John Rother, Yuhui Zheng, and John Rowe. “Differences in Life Expectancy Due to Race and Educational Differences Are Widening, and Many May Not Catch Up.” Health Affairs, vol. 31, no. 8 (2012).

Owyang, Michael T. and Hannah G. Shell. “Taking Stock: Income Inequality and the Stock Market.” Economic Synopses, no. 7 (2016).

Pijoan-Mas, Josep and Jose-Victor Rios-Rull. “Heterogeneity in Expected Longevities.” Demography, vol. 51, no. 6 (2014).

Poterba, James M. “Retirement Security in an Aging Population.” The American Economic Review, vol. 104, no. 5 (2014).

Poterba, James, Steven Venti, and David A. Wise. “Longitudinal Determinants of End-of-Life Wealth Inequality.” Journal of Public Economics, vol. 162 (2018).

Poterba, James M., Steven F. Venti, and David A. Wise. “Were They Prepared for Retirement? Financial Status at Advanced Ages in the HRS and AHEAD Cohorts.” NBER Working Papers, no. 17824 (2012).

Poterba, James, Steven Venti, and David Wise. “The Composition and Drawdown of Wealth in Retirement.” Journal of Economic Perspectives, vol. 25, no. 4 (2011).

Raskin, Sarah Bloom. “Downturns and Recoveries: What the Economies in Los Angeles and the United States Tell Us.” Remarks at the luncheon for Los Angeles business and community leaders, Los Angeles Branch of the Federal Reserve Bank of San Francisco, April 12, 2012.

Ruiz, John M., Patrick Steffen, and Timothy B. Smith. “Hispanic Mortality Paradox: A Systematic Review and Meta-Analysis of the Longitudinal Literature.” American Journal of Public Health, vol. 103, no. 3 (February 2012).

Saez, Emmanuel and Gabriel Zucman. “Wealth Inequality in the United States Since 1913: Evidence from Capitalized Income Tax Data.” The Quarterly Journal of Economics, vol. 131, no. 2 (2016).
Waldron, Hilary. “Mortality Differentials by Lifetime Earnings Decile: Implications for Evaluations of Proposed Social Security Law Changes.” Social Security Bulletin, vol. 73, no. 1 (2013).

Waldron, Hilary. “Trends in Mortality Differentials and Life Expectancy for Male Social Security-Covered Workers, by Socioeconomic Status.” Social Security Bulletin, vol. 67, no. 3 (2007).

Wang, Haidong, Austin E. Schumacher, Carly E. Levitz, Ali H. Mokdad, and Christopher J. L. Murray. “Left Behind: Widening Disparities for Males and Females in US County Life Expectancy, 1985-2010.” Population Health Metrics, vol. 11, no. 8 (2013).

Related GAO Products

Retirement Security: Most Households Approaching Retirement Have Low Savings, an Update. GAO-19-442R. Washington, D.C.: Mar. 26, 2019.

The Nation’s Retirement System: A Comprehensive Re-evaluation Needed to Better Promote Future Retirement Security. GAO-19-342T. Washington, D.C.: Feb. 6, 2019.

The Nation’s Retirement System: A Comprehensive Re-evaluation Is Needed to Better Promote Future Retirement Security. GAO-18-111SP. Washington, D.C.: Oct. 18, 2017.

Older Workers: Phased Retirement Programs, Although Uncommon, Provide Flexibility for Workers and Employers. GAO-17-536. Washington, D.C.: June 20, 2017.

Retirement Security: Low Defined Contribution Savings May Pose Challenges. GAO-16-408. Washington, D.C.: May 5, 2016.

Retirement Security: Shorter Life Expectancy Reduces Projected Lifetime Benefits for Lower Earners. GAO-16-354. Washington, D.C.: Mar. 25, 2016.

Retirement Security: Better Information on Income Replacement Rates Needed to Help Workers Plan for Retirement. GAO-16-242. Washington, D.C.: Mar. 1, 2016.

Retirement Security: Most Households Approaching Retirement Have Low Savings. GAO-15-419. Washington, D.C.: May 12, 2015.

Retirement Security: Trends in Marriage and Work Patterns May Increase Economic Vulnerability for Some Retirees. GAO-14-33. Washington, D.C.: Jan. 15, 2014.

Retirement Security: Women Still Face Challenges. GAO-12-699. Washington, D.C.: July 19, 2012.

Unemployed Older Workers: Many Experience Challenges Regaining Employment and Face Reduced Retirement Security. GAO-12-445. Washington, D.C.: Apr. 25, 2012.
Why GAO Did This Study

Income and wealth inequality in the United States have increased over the last several decades. At the same time, life expectancy has been rising, although not uniformly across the U.S. population. Taken together, these trends may have significant effects on Americans' financial security in retirement. GAO was asked to examine the distribution of income and wealth among older Americans, as well as its association with longevity, and to identify the implications these trends may have for retirement security.

This report examines (1) the distributions of income and wealth among all older Americans over time; (2) the association between income, wealth, and longevity among older Americans; and (3) how the distributions of income and wealth changed over time for a cohort of individuals as they aged. To conduct this work, GAO analyzed data from two nationally representative surveys: the SCF, using data from 1989 through 2016, and the HRS. GAO used 1992 through 2014 HRS data linked to earnings records from the Social Security Administration. While preliminary 2016 HRS data are available, GAO used 2014 data, which contain more complete information for GAO's analysis. GAO also reviewed studies and interviewed researchers to further analyze the relationships between income, wealth, longevity, and retirement security.

What GAO Found

Disparities in income and wealth among older households have become greater over the past 3 decades, according to GAO's analysis of Survey of Consumer Finances (SCF) data. GAO divided older households into five groups (quintiles) based on their income and wealth. Each year of data in the analysis, and thus each quintile, included different sets of households over time. Average income and wealth were generally higher over time (see fig. 1 for average income), disproportionately so for the top quintile (top 20 percent). For example, in 2016, households in the top quintile had estimated average income of $398,000, compared to about $53,000 for the middle quintile and about $14,000 for the bottom quintile. GAO also found that for quintiles with lower wealth, future income from Social Security and defined benefit pensions provides a relatively significant portion of resources in retirement for those who expect such income.

A substantial number of older Americans born from 1931 through 1941 lived at least into their 70s or early 80s, according to GAO's analysis of data from the Health and Retirement Study (HRS), a nationally representative survey that follows the same individuals over time. GAO divided individuals born from 1931 through 1941 into quintiles based on their mid-career household earnings using records from the Social Security Administration. GAO's analysis, as well as that of other researchers, shows that differences in income, wealth, and demographic characteristics were associated with disparities in longevity. However, even with these disparities, GAO found that a substantial number of people in the sample were alive in 2014, including those with characteristics associated with reduced average longevity, such as low earnings (see fig. 2) and low educational attainment. Taken together, these findings indicate that individuals may live a long time, even individuals with factors associated with lower longevity, such as low income or education. Those with fewer resources in retirement who live a long time may have to rely primarily on Social Security or safety net programs.
GAO's analysis of HRS data also found that disparities in household income decreased while disparities in wealth persisted as a cohort of older Americans aged from approximately their 50s into their 70s or early 80s. Income disparities decreased between higher- and lower-earning households because higher-earning households saw larger drops in income over time, indicating the possible transition from working to retirement. For example, GAO estimated that median income for the top mid-career earnings group decreased by 53 percent over this period, while estimated median income for the bottom earnings group decreased by 36 percent. Wealth remained relatively steady for households in the bottom three earnings groups over the time period GAO examined, while households in the top two earnings groups experienced larger fluctuations in wealth. GAO estimated that median retirement account balances and median home equity increased across earnings groups for households that had these assets. However, the continued wealth disparities may be due to significant differences in the median value of retirement accounts and home equity between higher- and lower-earning households. GAO also found that white households in the bottom two earnings groups had higher estimated median incomes, and white households in all earnings groups generally had greater estimated median wealth, than racial minority households in those earnings groups. In addition, within each earnings group, households headed by someone with at least some college education generally had higher median incomes and wealth than households headed by someone who did not attend college.
Background

Created as part of the Employee Retirement Income Security Act of 1974, as amended (ERISA), traditional IRAs provide tax advantages to help individuals, including small business owners, independent contractors, and other workers not covered by employer-sponsored retirement plans, save for retirement. Employees who have employer-sponsored retirement plans, such as a 401(k), can also roll over these assets into an IRA when they retire or change jobs. Since the enactment of ERISA, different types of IRAs with different features for individuals and small businesses have been created. The two IRA types with federal income tax benefits for individuals are traditional IRAs (which allow eligible individuals to make tax-deductible contributions and accumulate tax-deferred investment earnings) and Roth IRAs (which allow eligible individuals to make after-tax contributions and accumulate investment earnings tax free).

IRA owners are able to invest their IRA savings in a wide variety of asset types. IRA owners generally make tax-favored contributions to their accounts to purchase assets from investment options offered through banks or other IRS-qualified firms acting as custodians of the IRA assets. Most IRA custodians limit holdings in IRA accounts to firm-approved stocks, bonds, mutual funds, and CDs. Some custodians offer so-called “self-directed IRAs” that allow investments in a broader set of unconventional assets (such as real estate, certain precious metals, private equity, and virtual currency) than is permitted by most IRA custodians. As we previously reported, custodial agreements for these accounts often require IRA account owners to be responsible for directing their investments, and to oversee the selection, management, monitoring, and retention of all investments in the account. The account owners bear the consequences of any mistakes made in managing their accounts, such as being noncompliant with IRA rules.

Through our prior work, we identified the following four areas where complex rules are likely to apply to IRA owners investing in unconventional assets:

Barred investments. Investments in life insurance contracts and collectibles, such as artwork and antiques, are prohibited. Although precious metals are generally prohibited collectibles, certain types of coins and bullion are permitted provided that they meet specific purity and custody requirements.

Prohibited transactions. IRA owners are not permitted to engage in prohibited transactions that personally benefit the owner or other disqualified persons in a way other than as a vehicle to save for retirement. Examples of such prohibited transactions include IRA owners selling their own property to an IRA, or taking a salary from an IRA-funded business. IRA owners who believe that an otherwise prohibited transaction should be permitted may apply to the Department of Labor (DOL) to request an exemption for a specific transaction.

Unrelated business income. Earnings and profits made in tax-deferred savings vehicles like IRAs generally are reinvested in the account without generating current federal tax liability, but investments in certain unconventional assets can generate ongoing tax liability for IRA owners. Any IRA that earns $1,000 or more of gross income from an unrelated business must file Form 990-T, Exempt Organization Business Income Tax Return, with IRS and pay related taxes.

Fair market value (FMV).
When IRA owners invest in less conventional and nonpublicly traded assets, custodians may find it challenging to properly report the FMV of those assets. Starting with tax year 2015, IRS began requiring IRA custodians to report selected information on unconventional assets in their clients’ accounts. For some hard-to-value unconventional assets, IRA owners may need to supply custodians with independent appraisals or other evidence to substantiate an asset’s current FMV.

Failure to abide by the rules governing IRAs with unconventional assets can have significant consequences for IRA owners. For example, if an IRA owner engages in a prohibited transaction that has not been exempted by DOL, the IRA will lose its tax-favored status, and the account is treated as distributing all of its assets to the owner at the FMV on the first day of the year in which the prohibited transaction occurred. Noncompliance with IRA rules, if not detected, can also lead to millions of dollars in uncollected tax revenue for the government. Individuals who invest in certain unconventional assets using Roth IRAs can avoid taxation on investment gains. For example, founders of companies (or key initial investors) who use IRAs to invest in nonpublicly traded shares of their newly formed companies can realize many millions of dollars in tax-favored gains on their investment if the company is successful.

IRS is responsible for enforcing IRA tax laws, including rules that apply when IRA owners invest in unconventional assets. Within IRS, four business operating divisions have responsibilities for enforcing compliance with IRA rules. Table 1 provides an overview of each division’s IRA enforcement activities.

Third-party reporting by IRA custodians provides information that taxpayers can use in preparing their tax returns and that IRS can use to identify noncompliant taxpayers and help close the tax gap. In 2015, IRS began requiring custodians to report new information to help identify IRAs with hard-to-value unconventional assets. IRS Form 5498, IRA Contribution Information, has a new box 15a for custodians to report the portion of the IRA FMV attributable to nonmarket assets, as well as a box 15b with codes describing the type of nonmarket assets. Custodians are to report similar information on IRS Form 1099-R identifying distributions of IRA assets that do not have a readily available FMV.

Primary IRS Publication Could Better Help IRA Owners with Unconventional Assets Understand Complex Rules

The first article in the Taxpayer Bill of Rights is the right to be informed, which means that taxpayers have the right to know what they need to do to comply with tax laws. IRS’s Publication 1, Your Rights as a Taxpayer, further states that taxpayers are entitled to clear explanations of the laws and IRS procedures in all forms, instructions, publications, notices, and correspondence.

To help taxpayers and their advisors better understand tax rules, such as those governing IRAs with unconventional assets, IRS produces several types of resources. Taxpayers (or their advisors and paid tax preparers) with complicated returns or transactions may require detailed and technical resources, such as guidance published in a weekly IRS publication called the Internal Revenue Bulletin (IRB). Tax regulations, issued by the Department of the Treasury (Treasury), are published in the IRB together with technical IRS guidance such as revenue rulings and revenue procedures.
IRS has stated that only guidance published in the IRB contains IRS’s authoritative interpretation of the law. IRS also produces resources that are less technical and intended to be more easily understood by most taxpayers. IRS issues hundreds of publications on a variety of tax topics, and many are updated annually. IRS also produces a variety of information on its website (IRS.gov), such as online tools, instructions, and FAQs.

IRS’s Publication 590-A, Contributions to Individual Retirement Arrangements (IRAs), and Publication 590-B, Distributions from Individual Retirement Arrangements (IRAs), serve as a general IRA handbook for IRA owners and a logical starting point for all IRA owners with tax questions, including those with unconventional assets. At more than 120 pages combined, Publications 590-A and 590-B comprise one of IRS’s longest publications on retirement-related topics. Publications 590-A and 590-B provide some limited information on the four compliance topics that we identified through prior work as likely to affect IRA owners with unconventional assets. However, the two-part publication lacks additional information that IRA owners with unconventional assets need to comply.

Publications 590-A and 590-B recommend that taxpayers research IRS’s website (IRS.gov) for additional information. We found some additional information on IRS’s website about three of the four compliance topics. This information was typically in the form of FAQs in a section of IRS’s website about retirement plans (https://www.irs.gov/retirement-plans). Table 2 summarizes what information for IRA owners with unconventional assets can be found in Publications 590-A and 590-B; what other IRS sources provide relevant information; and what information was not readily available on the IRS website for the four compliance areas likely to affect IRA owners with unconventional assets. Appendix II describes in more detail the information available and the information lacking in Publications 590-A and 590-B and other IRS sources.

Given the complexity of the four compliance topics we identified, as well as the relatively small number of taxpayers affected and the already large publication size, it may not be feasible to provide complete information on these topics within Publications 590-A and 590-B. IRS publications (like 590-A and 590-B) are intended to explain the law in plain language for taxpayers and their advisors. They generally summarize and translate into layperson’s terms more complex and technical information from authoritative sources like the Internal Revenue Code and more authoritative guidance like tax regulations, revenue procedures, and revenue rulings. IRS analysis indicates that perhaps only about 2 percent of IRAs have invested in hard-to-value unconventional assets. However, even small numbers of taxpayers with particular circumstances have the right to know what they need to do to comply with tax laws.

IRA owners with unconventional assets who turn to Publications 590-A and 590-B are unlikely to fully understand how certain IRA investment decisions can increase their risks for noncompliance. Misunderstanding the rules governing IRAs could result in increased tax liability for taxpayers making unintentional errors and jeopardize their retirement savings. Given the serious consequences that could result for a taxpayer found to be noncompliant, IRS’s current publications do not clearly provide the information that IRA owners with unconventional assets need.
Adding information to Publications 590-A and 590-B would be one solution that IRS could explore, but we recognize that it may not be practical for IRS to add substantially more information to Publications 590-A and 590-B for a relatively small percentage of IRA owners. Alternatives to adding more pages to Publications 590-A and 590-B could include directing readers with questions about rules affecting unconventional IRA assets to other IRS resources, such as IRS web pages or tax regulations that contain more technical and specialized information. As shown in table 2 above, we found some additional information on IRS web pages that would be helpful to IRA owners with unconventional assets. Adding language in Publication 590-A or 590-B directing taxpayers to specific web page URL addresses for additional information could help taxpayers more easily locate this information. For more technical or specialized information, IRS could direct readers of Publications 590-A and 590-B to the relevant sections of the Internal Revenue Code and related tax regulations. This additional information could help IRA owners better understand and navigate the potential compliance challenges associated with certain types of unconventional assets.

Insufficient Data and Fragmented Expertise across IRS Organizational Units Complicate Enforcement of IRA Rules Involving Unconventional Assets

IRS Cross-Divisional Team Identified Risks of IRA Noncompliance Based on Different Asset Types

In October 2017, the Deputy Commissioner for Service and Enforcement commissioned a cross-divisional team composed of representatives from all four IRS operating divisions to identify, assess, and mitigate risks of IRA noncompliance. In its February 2018 interim presentation, the IRS cross-divisional team categorized potential noncompliance risks over an IRA life cycle into two mitigation strategies, which are summarized below.

1. Noncompliance risks for most contribution and distribution IRA rules can be mitigated systemically through automated enforcement. For example, IRS can detect excess IRA contribution deductions and unreported IRA distributions by matching information from taxpayer returns with information reported by custodians. For the large population of IRA owners investing in conventional assets held by custodians, IRS relies on automated enforcement.

2. Noncompliance risks associated with the small population of IRAs with hard-to-value unconventional assets or under direct control of the IRA owner are generally mitigated through case-by-case audits. For example, noncompliance with the complex rules governing prohibited transactions and unrelated business income is generally not reflected on individual tax returns. Some custodians rely on IRA owners to provide asset value information and may not have complete and accurate data to report to IRS. Undervaluing IRA assets hampers automated enforcement, for example, to detect excess contributions and taxable distributions.

Noncompliance involving IRAs with unconventional assets is generally detected through labor-intensive audits of individual taxpayers. IRS’s SB/SE division uses field audits to pursue complex individual tax return cases, including those that could involve IRAs with unconventional assets. In February 2018, an IRS cross-divisional team that studied the risks of IRA noncompliance reported that, from fiscal years 2012 to 2016, IRS audited about 26,000 tax returns with IRA issues.
IRS officials provided us examples of SB/SE job aids and training materials designed to help examiners recognize different types of noncompliance associated with IRAs invested in unconventional assets. For example, the job aids provide instructions on prohibited transactions, barred collectibles, and FMV issues involving IRAs. When interviewing taxpayers, examiners are instructed to ask a series of questions covering subjects such as:

what kind of advice the taxpayer received from promoters or custodians of self-directed IRAs;

whether the taxpayer had direct involvement in purchasing unconventional assets through a control feature known as “checkbook access”;

whether the taxpayer has a limited liability company (LLC) tied to the IRA; and

how the taxpayer determined the FMV of unconventional assets.

IRS officials told us that enforcing rules associated with IRAs investing in unconventional assets can be particularly challenging for investments involving LLCs or special partnership arrangements. An IRA owner may establish an LLC that is owned by the IRA. Once the LLC is set up, a business checking account is linked to the IRA funds and the account owner is named the manager of the LLC with control over the checkbook. This allows IRA owners to purchase assets directly from investment sponsors without having to wait for custodians to execute a purchase or sale. The LLC may be used to invest in businesses that could generate unrelated business income. According to IRS officials, prohibited transactions may also be more likely to occur when custodians allow “checkbook” access to IRAs, in part because the marketing of this IRA structure is appealing to individuals who want less oversight of their IRA transactions and are more likely to intentionally engage in self-dealing transactions.

IRS examination officials told us that the 3-year statute of limitations for assessing taxes owed remains an obstacle in pursuing noncompliance that may span the many years of an IRA investment. For example, abuses involving prohibited transactions frequently are not reflected on any filed tax return and may be difficult to detect within the general 3-year statute of limitations period. IRS agreed with our October 2014 recommendation for the Commissioner to work in consultation with Treasury on a legislative proposal to expand the statute of limitations on IRA noncompliance. IRS said Treasury is aware of IRS’s support for changing the limitation period for IRA noncompliance. Treasury reviews and presents the administration’s tax proposals and had not released a legislative proposal as of October 2019.

Data Collection Has Improved, but Opportunities Exist to Further Strengthen Identification of Potentially Abusive IRAs

With electronically compiled data for tax year 2016 filed in 2017, IRS was positioned for the first time to quantify the number of IRAs with specified types of hard-to-value assets. IRS officials said that even with the new custodian reporting, the broad IRA asset type data alone may be inadequate for improving audit selection criteria and identifying potentially abusive IRAs in a timely manner. In February 2018, using the newly available data, an IRS cross-divisional team identified that about 2 million IRAs included one or more types of hard-to-value assets for tax year 2016. However, custodians reported an FMV dollar amount for hard-to-value assets for only 1.6 million of those IRAs, as shown in table 3. The combined FMV was approximately $137 billion.
As shown in table 3, about 400,000 (about 20 percent) of the Form 5498s reporting that the IRA held investments in one or more of the specified unconventional categories were missing the 2016 FMV dollar amount for those assets. The cross-divisional team identified that undervaluation risk affects custodian reporting. IRS officials said that the team did not review the custodian reporting patterns as part of its initial analysis of the 2016 Form 5498 data.

Forthcoming tax regulations on IRAs may help to improve custodian reporting of FMVs on Form 5498. IRS officials told us that the new IRA regulations would address FMV for certain categories of hard-to-value unconventional assets. IRS officials also told us that it would be premature to publish new guidance for IRA owners and custodians on the FMV of unconventional assets until the new regulations are issued.

The tax year 2016 Form 5498 information indicated about 141,000 IRAs invested in LLCs, an asset type that IRS has determined presents greater noncompliance risk. Prior to the newly available asset type data, SB/SE conducted an interim Compliance Initiative Project (CIP) using external state government information to identify businesses, including LLCs and partnerships, owned by IRAs as a way to select IRA owners for audit. Completed in October 2019, the interim compliance research revealed that audits detecting prohibited transactions can result in substantial tax adjustments. In September 2018, SB/SE approved a new CIP using the asset type data from Form 5498s for tax year 2017 to select a sample of traditional and Roth IRAs that had an ownership in an LLC or real estate. The latest compliance research field work began in February 2019 and is to be completed in January 2021. IRS officials told us they plan to use this research in combination with the interim research results to improve criteria for selecting tax returns with IRAs at greater risk of noncompliance for audit.

To detect abusive transactions, IRS can require taxpayers to self-report certain transactions that have been used by other taxpayers to avoid taxes. Transactions become “reportable” (meaning a taxpayer must report them to IRS) when IRS designates them as “listed transactions” or “transactions of interest.”

Listed Transaction. A listed transaction is reportable when it is the same as or substantially similar to one of the types of transactions that IRS has determined to be an avoidance transaction. In 2004, IRS determined that Roth IRA “stuffing” is an abusive tax avoidance transaction that taxpayers must report to IRS as a listed transaction. “Stuffing” involves shifting value through transactions that disguise Roth IRA contributions exceeding annual IRA limits, such as selling receivables at less than FMV to a Roth IRA, or other transactions between a closely-held business in which the Roth IRA invests and another closely-held business of the Roth IRA owner.

Transaction of Interest. A transaction of interest is one that IRS and Treasury believe to have the potential for tax avoidance or evasion, but for which IRS and Treasury lack enough information to determine whether the transaction should be identified as a tax avoidance transaction. As of December 2019, IRS had not identified or classified any IRA asset types or investment transactions as reportable transactions of interest.

Taxpayers are required to disclose all types of reportable transactions on Form 8886, Reportable Transaction Disclosure Statement.
Similarly, advisers helping taxpayers conduct reportable transactions are required to file Form 8918, Material Advisor Disclosure Statement.

Results from the ongoing IRS compliance research may yield insights about existing and emerging abusive schemes involving IRAs. This information could be useful for evaluating the feasibility of requiring greater disclosure by IRA owners and their custodians and advisors. For example, IRS could consider requiring reporting of known abusive IRA arrangements and prohibited transactions as listed transactions. Also, IRS could explore disclosure of high-risk IRA asset types susceptible to gross valuation misstatements, such as LLCs, as transactions of interest. We recently found that IRS’s Research, Analysis and Statistics office had developed the capability to analyze the narrative fields of tax forms. Additional disclosure of potentially abusive IRA transactions, coupled with greater use of tax forms’ narrative fields, may help IRS to select IRA owner tax returns for more detailed review. The cases identified by such detailed review would help IRS better allocate limited audit resources.

Enforcing IRA Rules for Unconventional Assets Draws on Expertise and Roles of Multiple IRS Organizational Units

Responsibility for addressing IRA noncompliance detected through case-by-case audits is fragmented among multiple IRS organizational units. This fragmentation creates challenges for IRS examiners from different units that may need to share expertise and collaborate on enforcement of complex rules applicable to IRAs that invest in unconventional assets. In February 2018, the IRS cross-divisional team concluded that no one IRS operating division alone can effectively identify and penalize IRA noncompliance regarding unrelated business income and undervaluation of unconventional assets.

Unrelated business income. SB/SE and TE/GE officials told us that detecting unrelated business income unreported by an IRA can require the involvement of multiple IRS divisions. IRS responsibility and expertise in detecting noncompliance with the rules for unrelated business income reside in IRS’s TE/GE division. TE/GE is responsible for enforcing the unrelated business income taxation rules across tax-exempt organizations. Its Exempt Organizations group audits Form 990-T filed by tax-exempt charities, and its examiners are required to check whether tax-exempt charities have reported unrelated business income. Examiners in TE/GE’s Employee Plans group have been trained on how to determine if a tax-exempt employee retirement plan has engaged in activities that constitute unrelated trade or business.

SB/SE has primary responsibility for auditing individuals owning IRAs, and its examiners are to verify that all returns within the taxpayer’s sphere of influence are filed. IRS officials told us that when SB/SE examiners discover potential unrelated business income issues when reviewing an individual taxpayer’s IRAs, those examiners can seek assistance from TE/GE examiners via an internal Specialist Referral System used to refer cases to other divisions. Although IRS officials described to us how SB/SE examiners, at their own initiative, can seek out expertise on unrelated business income, the topic is not addressed in SB/SE examiner training materials and job aids on auditing IRAs with unconventional assets.
SB/SE officials provided us training slides used to teach examiners how to recognize excess contributions, prohibited transactions, barred collectibles, and valuation issues involving IRAs. While the slides instruct examiners to contact a Senior Program Analyst or Counsel for assistance with complicated issues or cases, there is no information educating SB/SE examiners about unrelated business income or informing examiners that specialized knowledge about this topic resides in the TE/GE division. Without resources, such as training materials or job aids, that provide such information, SB/SE examiners carrying out the ongoing compliance initiative project are not positioned to surface unrelated business income tax issues for referral to TE/GE. Given that IRS plans to use those research results to refine its audit selection criteria, IRS is missing an opportunity to learn more about IRA noncompliance with unrelated business income taxation.

Undervaluation of unconventional assets. In February 2018, the cross-divisional IRA team cited undervaluation of unconventional assets as another compliance risk that involves the expertise and enforcement responses of multiple IRS units. If SB/SE examiners determine in auditing an IRA owner that the IRA custodian had inaccurately reported IRA asset values, other IRS divisions can take action against the custodian. LB&I can penalize a large financial institution custodian, although the cross-divisional IRA team reported that the $50 penalty for filing an incorrect Form 5498 poses little deterrent effect. For the approximately 75 non-bank IRA trustees approved by IRS, TE/GE can revoke a non-bank’s trustee status for violating any fiduciary, accounting, or financial requirements.

The cross-divisional IRA team explored an approach for joint examination to more effectively identify and penalize noncompliance associated with prohibited transactions, unreported unrelated business income, and undervaluation of IRA assets. Based on knowledge from prior examinations, the team identified a small subset of non-bank trustees publicly marketing alternative investments that held IRAs with more than $5 million in reported FMV as of tax year 2016. As of February 2018, the team reported that it had been premature for the separate divisions to commit examination resources. As of October 2019, IRS officials said they plan to reconvene the cross-divisional IRA team after the ongoing SB/SE compliance initiative project is complete in 2021. IRS officials said the plan is for the team to use the compliance research results to refine audit selection. Also, the team could continue work on establishing a joint examination approach for IRA noncompliance associated with hard-to-value unconventional assets.

Conclusions

IRA owners who invest in unconventional assets, such as real estate, certain precious metals, virtual currency, or private equity, assume greater responsibility for navigating the complex rules that govern tax-favored retirement investments. To understand these rules, taxpayers are likely to consult IRS Publications 590-A and 590-B. While this two-part publication provides some information on compliance issues likely to affect IRA owners with unconventional assets, the information in the publication as well as on IRS web pages is limited. By assessing options for making such information clearer, IRS could better inform taxpayers and help them comply.
This is particularly important because misunderstanding the rules governing IRAs can result in increased tax liability for these taxpayers and jeopardize their retirement savings. Noncompliance associated with nonpublicly traded IRA assets has been difficult for IRS to detect and time-consuming to pursue. In contrast to automated enforcement for IRAs with conventional investments, noncompliance involving IRAs with unconventional assets is generally detected on a case-by-case basis through labor-intensive audits of individual taxpayers. In recent years, IRS has begun collecting information from IRA custodians that IRS can use to quantify the dollar amounts of specified types of hard-to-value assets held by IRAs. However, the broad IRA asset type data alone may not be sufficient for audit selection and identifying potentially abusive IRAs in a timely manner. When IRS lacks sufficient data to detect abusive transactions, IRS can require taxpayers to self-report certain transactions that have been used by other taxpayers to avoid taxes. Additional disclosure of certain IRA transactions coupled with mining the narrative fields of tax forms could help IRS efficiently identify potentially abusive IRA activity and better allocate limited audit resources. Fragmented responsibility among IRS operating divisions creates additional challenges for IRA enforcement. The division responsible for tax-exempt entities trains its examiners on how to determine if an employee retirement plan has unrelated business activities subject to taxation. Yet examiners in the division that audits complex individual tax returns, including those involving IRAs, do not receive similar training. Training for those examiners on unrelated business income tax issues, and on how examiners can refer those cases to other divisions for assistance, could help improve collaboration on IRA enforcement. Recommendations for Executive Action We are making the following three recommendations to IRS: The Commissioner of Internal Revenue should assess options for updating Publications 590-A and 590-B to either include more information or direct taxpayers to other resources for IRA owners with investments in unconventional assets. Such information could include: storage requirements for IRA investments in certain precious metals; valuation methods for hard-to-value IRA assets; the Department of Labor's process for granting exemptions to IRA prohibited transactions rules; and IRA investments with the potential to create unrelated business income tax liabilities. (Recommendation 1) The Commissioner of Internal Revenue, building on forthcoming compliance research using new IRA asset data, should evaluate the feasibility of requiring disclosure for high-risk IRA asset types associated with abusive schemes as transactions of interest. (Recommendation 2) The Commissioner of Internal Revenue should develop resources (such as training materials or job aids) for Small Business/Self-Employed examiners conducting IRA owner audits that explain how IRAs with unconventional assets can generate unrelated business income tax liability, and how examiners can refer cases to unrelated business income experts in IRS for assistance. (Recommendation 3) Agency Comments We provided a draft of this report to the Treasury and IRS for review and comment. In its comments, reproduced in appendix III, IRS generally agreed with our recommendations.
For recommendation 1, IRS said it will review its educational publications and web pages for appropriate updates within the scope of the tax code. For recommendation 2, IRS said that it will determine whether there are abusive schemes associated with certain IRA asset types, and if the data indicate such a correlation, it will evaluate the feasibility of requiring disclosure of such arrangements as transactions of interest. For recommendation 3, IRS said it will review and update resources for examiners conducting IRA owner audits, including guidance on how to address unrelated business income tax (UBIT). It will incorporate guidance for agents on how to refer such cases to UBIT experts when assistance is needed. IRS also said that it will renew its efforts to ensure collaboration with relevant subject matter experts. IRS, in consultation with Treasury, also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, the Secretary of Labor, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact James R. McTigue, Jr. at (202) 512-9110 or Charles A. Jeszeck at (202) 512-7215. You may also reach us by email at jeszeckc@gao.gov or mctiguej@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix IV. Appendix I: Status of GAO Recommendations Appendix II: IRS Information for IRA Owners Investing in Unconventional Assets This appendix describes what information for individual retirement account (IRA) owners with unconventional assets can be found in Publications 590-A and 590-B, what other Internal Revenue Service (IRS) sources provide relevant information, and what information was not readily available on the IRS website for the four compliance areas we identified through prior work as likely to affect IRA owners with unconventional assets. The information below does not include the Internal Revenue Code or detailed and technical resources published in the weekly Internal Revenue Bulletin, such as tax regulations, revenue rulings, and revenue procedures. Barred investments Publications 590-A and 590-B explain what types of IRA investments are barred, such as collectibles, but they do not have additional information that could be useful to IRA owners with allowable investments in coins and bullion. The publications define collectibles as including artworks, rugs, antiques, metals, gems, stamps, coins, alcoholic beverages, and certain other tangible personal property. The publications explain that if a traditional IRA invests in collectibles, the amount invested is considered distributed, and that the IRA owner could be subject to an additional tax on early distributions. Publications 590-A and 590-B further explain that an exception exists for IRA investments in certain types of coins and bullion. However, the two-part publication does not indicate that certain types of bullion must be stored by a bank or an IRS-approved non-bank trustee.
The two-part publication also does not mention that IRA investments in life insurance contracts are not permitted. Two IRS web pages listing frequently asked questions (FAQs) about retirement plans contain additional information about bullion storage requirements and IRA investments in life insurance contracts. Both web pages state that investing IRA funds in life insurance contracts and collectibles is prohibited, and they also note the exception for certain precious metals. One of the web pages further explains that allowable bullion must be stored with a bank or an IRS-approved non-bank trustee. Prohibited transactions Publications 590-A and 590-B define prohibited transactions in general terms, list examples, and explain the consequences of engaging in a prohibited transaction. The two-part publication also cautions that the risk of engaging in a prohibited transaction in connection with an IRA account may be increased when an IRA owner invests in nonpublicly traded assets or assets that an IRA owner directly controls. However, the publication does not provide any information about applying for an exemption to the prohibited transaction rules. We found some limited information about exemptions to the prohibited transaction rules on an IRS web page entitled "Retirement Plan Investments FAQs." The web page explains that the Department of Labor (DOL) has granted class exemptions for certain types of investments under specific conditions, and that a plan sponsor may apply to DOL to obtain an administrative exemption for a particular proposed transaction that would otherwise be prohibited. However, the web page does not provide any links to DOL information, such as a DOL publication that explains the prohibited transaction exemption process. Unrelated business income In February 2018, IRS updated Publications 590-A and 590-B to include information about IRAs with unrelated business income. Publications 590-A and 590-B now explain that an IRA is subject to tax on unrelated business income if the IRA carries on an unrelated trade or business. Publications 590-A and 590-B state that the IRA trustee is required to file a Form 990-T if an IRA has $1,000 or more of unrelated trade or business gross income. For more information, Publications 590-A and 590-B direct taxpayers to consult Publication 598, Tax on Unrelated Business Income of Exempt Organizations. Publication 598 lists IRAs as one of many exempt entities subject to taxes on unrelated business income and notes the requirement to file Form 990-T for gross income of $1,000 or more. Publication 598 describes dozens of activities by tax-exempt organizations that would be considered an unrelated business, but the publication does not include any examples specific to IRA investments that could also be considered unrelated business activities and subject to taxes. Our search did not find additional information on IRS.gov relating to IRAs and unrelated business income taxes. Fair market value (FMV) Publications 590-A and 590-B do not provide guidance about how to accurately determine the FMV of hard-to-value unconventional assets. IRS requires custodians to report (on Form 5498) an IRA's FMV at year's end as well as some additional information for IRAs with unconventional assets.
The instructions for completing Form 5498 explain that IRA custodians are responsible for ensuring that "all IRA assets (including those not traded on established markets or not having a readily determinable market value) are valued annually at their FMV." However, neither the form's instructions nor Publications 590-A and 590-B provide guidance or tips on how to determine the FMV of non-publicly traded or other hard-to-value assets. As we previously reported, some unconventional assets may require a third-party appraisal to determine their FMV. One IRS web page, titled "Valuation of Plan Assets at Fair Market Value," provides some additional FMV information, but it is intended more for valuing assets in employer-provided retirement benefits like traditional pensions and 401(k) plans. The web page states that an accurate assessment of the FMV of retirement plan assets is essential for complying with Internal Revenue Code requirements and avoiding prohibited transactions. The web page also states that for defined contribution plans like a 401(k) plan, investments must be valued at least once per year in accordance with a consistent and uniform method. For traditional pensions (defined benefit plans), tax regulations define FMV for purposes of valuing plan assets as "the price at which the property would change hands between a willing buyer and a willing seller, neither being under any compulsion to buy or sell and both having reasonable knowledge of relevant facts." The Department of the Treasury (Treasury) plans to issue regulations on IRAs. IRS officials told us that these new regulations would address FMV for certain categories of hard-to-value unconventional assets. IRS officials also told us that it would be premature to publish new guidance for IRA owners and custodians on the FMV of unconventional assets until the new regulations are issued. In their October 2019 update of planned guidance projects, Treasury's Office of Tax Policy and IRS listed planned IRA regulations. Appendix III: Comments from the Internal Revenue Service Appendix IV: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, MaryLynn Sergent and David Lehrer (Assistant Directors), Ted Burik, Susan Chin, Steven Flint, Emily Gruenwald, Mark Kehoe, Jungjin Park, and David Reed made key contributions to this report. James Bennett, Amy Bowser, Jacqueline Chapin, Edward J. Nannenhorn, Andrew J. Stephens, Walter Vance, and Adam Wendel also provided support.
Why GAO Did This Study Unconventional IRA investments—such as real estate, certain precious metals, private equity, and virtual currency—can introduce risks to account owners who assume greater responsibility for navigating the complex rules that govern tax-favored retirement savings. IRS enforces tax rules relating to IRAs and can assess additional taxes. GAO was asked to examine the challenges associated with enforcing rules governing IRAs invested in unconventional assets. This report examines (1) the extent to which IRS offers guidance to help taxpayers understand the rules governing unconventional IRA assets; and (2) the challenges IRS faces in enforcing those rules. GAO identified and analyzed IRS information to help taxpayers understand four compliance areas. GAO reviewed IRS analysis of nonmarket IRA assets reported by IRA custodians, and IRS audit procedures and training materials; and interviewed relevant IRS officials to identify enforcement challenges. What GAO Found The Internal Revenue Service's (IRS) Publications 590-A and 590-B serve as a general handbook for millions of taxpayers with individual retirement accounts (IRA). However, the two-part publication provides limited information for IRA owners with unconventional assets on the complex tax rules in four compliance areas: (1) barred investments, (2) prohibited transactions, (3) unrelated business income, and (4) fair market value. GAO found other limited information about these topics on IRS's website. With only about 2 percent of IRAs invested in unconventional assets, adding more pages to Publications 590-A and 590-B may not be practical. By assessing options for informing IRA owners investing in unconventional assets, such as directing them to web pages with specialized information and technical regulations, IRS could better help them comply. Noncompliance involving unconventional IRA assets is difficult to detect and time-consuming for IRS to pursue. Whereas IRS relies on automated enforcement for IRAs invested in conventional assets held by custodians and trustees, enforcement for IRAs invested in unconventional assets or under IRA owner control requires labor-intensive audits of individual taxpayers. Using newly compiled information, IRS identified about 2 million IRAs that held certain types of hard-to-value assets as of 2016; however, about 20 percent of the forms were missing fair market value amounts for these assets. IRS officials said this type of reporting alone may be inadequate for audit selection and identifying potentially abusive IRAs. When IRS lacks sufficient data to detect abusive transactions, IRS can require taxpayers to self-report certain transactions that have been used by other taxpayers to avoid taxes. Additional taxpayer or custodian disclosure of potentially abusive IRA transactions coupled with IRS analysis of reported details may help IRS to select IRA owner tax returns to audit. Fragmented responsibility among IRS divisions creates challenges for examiners who need to share expertise and collaborate on IRA enforcement. The division responsible for tax-exempt entities trains its examiners on how to determine if an employee retirement plan has engaged in business activities subject to taxation. However, examiners in the division that audits complex individual tax returns, including those involving IRAs, do not receive such training. Training for those examiners could help improve collaboration on IRA enforcement.
What GAO Recommends GAO is recommending that IRS (1) assess options for updating its IRA publications to provide more information for taxpayers with unconventional assets, (2) evaluate the feasibility of requiring disclosure for high-risk IRA asset types associated with abusive tax schemes, and (3) develop auditor resources (such as training materials or job aids) that explain how IRAs with unconventional assets can generate unrelated business income tax. IRS generally agreed with GAO's recommendations.
Background Key DOD Cyber Hygiene Initiatives DOD officials identified three key department-wide initiatives that include a number of cybersecurity practices aimed at improving cyber hygiene: the DOD Cybersecurity Culture and Compliance Initiative (DC3I), the DOD Cybersecurity Discipline Implementation Plan (CDIP), and the Cyber Awareness Challenge training. These efforts recognize the importance of command leadership, best practices for DOD network users, and technical countermeasures against cybersecurity threats. DC3I. In September 2015, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff signed the DC3I in an effort to transform DOD cybersecurity culture by enabling and reshaping leaders, cyber providers, personnel who perform cyberspace operations, and general users to improve individual human performance and accountability on DOD's network. The DC3I memorandum identifies 11 tasks assigned to various DOD components to respond to and implement across the department—such as the development of cybersecurity training briefs for DOD leadership, integration of cybersecurity into operational training and exercises, and the development of a resourcing plan to support scheduled inspections of units conducting cyberspace operations. From September 2015 to December 2016, U.S. Cyber Command was initially responsible for ensuring that relevant components implemented the DC3I. In December 2016, the Deputy Secretary of Defense assigned the DOD CIO as the official responsible for ensuring that components implemented the initiative because, in part, the DOD CIO has DOD-wide oversight authority. CDIP. The CDIP is one of seven actions identified in DOD's Cybersecurity Campaign to prompt commanders and senior leaders to enforce full cybersecurity compliance and accountability across the department. In October 2015, the Deputy Secretary of Defense signed the CDIP to reinforce basic cybersecurity technical requirements identified in policies, directives, and orders as a means to defend DOD information networks, secure DOD data, and mitigate risks to DOD missions. The CDIP memorandum identifies 17 tasks for all commanders and supervisors to implement across the department. These tasks include removing operating system software that no longer receives security updates from vendors, configuring servers consistent with DOD guidance on secure configurations, and addressing vulnerabilities for servers and network infrastructure in a timely manner. Cyber Awareness Challenge Training. This training is intended to help the DOD workforce (including service members, civilians, and contractors) maintain awareness of known and emerging cybersecurity threats, reinforce best practices to keep information and information systems secure, and ensure that network users stay abreast of changes in DOD cybersecurity policies. The Defense Information Systems Agency (DISA) develops the training content and periodically updates the training. In addition, the Cyber Workforce Advisory Group, which includes officials from the DOD CIO, DISA, and DOD components, solicits input about ways to improve the training and meets annually to approve updates to the Cyber Awareness Challenge. Increasing Cybersecurity Awareness and Accountability at Leadership Levels Federal law and a DOD initiative and strategy highlight the important role of leadership in improving cybersecurity culture and performance across the department.
For example, the Federal Information Security Modernization Act of 2014 (FISMA) requires agency heads—including the Secretary of Defense—to ensure that senior agency officials provide security for the information and information systems that support the operations and assets under their control. Additionally, the DC3I states that leaders will be held accountable by the chain of command for the cybersecurity performance of their organization and the individuals who comprise it, and for the role cybersecurity performance plays in accomplishing assigned missions. It also states that leaders will set an example and help individuals master appropriate cyber behavior, will take action against those who commit gross negligence or errors of commission, and may use all available means, both legal and administrative, as they deem appropriate. Further, the 2018 DOD Cyber Strategy states that reducing the department's network attack surface (i.e., the different points in a network where attackers can try to enter or extract information) requires an increase in cybersecurity awareness and accountability across the department. The strategy also states that the department would hold DOD personnel accountable for their cybersecurity practices and choices. The 2019 Cybersecurity Readiness Review, directed by the Secretary of the Navy, describes best practices for effective cybersecurity leadership. These best practices, according to the readiness review, require Navy leaders to be informed about the cybersecurity issues facing their organizations, to be engaged in ensuring those issues are addressed, and to hold their organizations accountable for cybersecurity performance. Key Cybersecurity Roles and Responsibilities A number of DOD officials and components have key roles and responsibilities for cybersecurity, including for the three key cyber hygiene initiatives. For example: Secretary and Deputy Secretary of Defense. FISMA makes the Secretary of Defense responsible for providing information security protections commensurate with the risk and magnitude of harm facing the department. In addition, Executive Order 13800, issued in May 2017, aligns with FISMA by holding agency heads accountable for implementing risk management measures commensurate with the risk and magnitude of the harm that would result from unauthorized access, use, disclosure, disruption, modification, or destruction of IT and data. DOD Chief Information Officer. FISMA requires DOD to develop, document, and implement a program to provide security for information and information systems (commonly referred to as a cybersecurity program) and directs the Secretary of Defense to delegate to the DOD CIO (and military department CIOs) authority to ensure compliance with the law. In addition, the DOD CIO is responsible for overseeing implementation of the three key cyber hygiene initiatives. DOD Component heads. DOD component heads are responsible for ensuring that IT under their purview complies with DOD Instruction 8500.01. In addition, component heads are responsible for ensuring that their network users complete annual security awareness training. DOD Component CIOs. DOD component CIOs are responsible for developing, implementing, maintaining, and enforcing a component cybersecurity program on behalf of their respective component heads. In doing so, component CIOs are responsible for ensuring that their components implement the CDIP tasks. Chairman of the Joint Chiefs of Staff.
The Chairman of the Joint Chiefs of Staff is responsible for advising the President and Secretary of Defense on operational policies, responsibilities, and programs. The Chairman also assists the Secretary of Defense in implementing operational responses to cyber threats and ensures cyberspace plans and operations are compatible with other military plans and operations. The staff members who support the Chairman of the Joint Chiefs of Staff are referred to as the Joint Staff, which is composed of members from all of the military services. U.S. Cyber Command. The Commander of U.S. Cyber Command has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners. In addition, the Commander is responsible for, among other things, issuing orders and directives to all DOD components for the execution of global operations aimed at securing and defending the department's networks. Defense Information Systems Agency. The Director of DISA is responsible for developing, implementing, and managing cybersecurity for the department's network and works with other components to secure DOD systems. For example, the Director is responsible for developing cybersecurity awareness training for all users on DOD's network. Joint Force Headquarters-Department of Defense Information Network (JFHQ-DODIN). The Commander of JFHQ-DODIN is responsible for, among other things, commanding, controlling, planning, directing, coordinating, integrating, and synchronizing DOD defensive cybersecurity operations. JFHQ-DODIN also performs two types of cyber readiness inspections to ensure DOD units comply with requirements related to network security and to evaluate the ability of units to accurately detect and mitigate vulnerabilities and anomalous activity on DOD's network. Cybersecurity Is a High-Risk Area The security of federal cyber assets has been on our High-Risk List since 1997. In September 2018, we issued an update to this high-risk area that identified actions needed to address cybersecurity challenges facing the nation—including improving implementation of government-wide cybersecurity initiatives aimed at securing federal systems and information. We also have identified ensuring the cybersecurity of the nation as one of nine high-risk areas that need especially focused executive and congressional attention. In August 2017, we reported on DOD's progress in implementing the department's cyber strategies. We found that DOD had implemented the cybersecurity elements of the DOD Cloud Computing Strategy and had made progress in implementing the 2015 DOD Cyber Strategy and DOD Cybersecurity Campaign, which comprised multiple initiatives including the CDIP. However, DOD's process for monitoring implementation of the DOD Cyber Strategy resulted in the closure of tasks before they were fully implemented. We also found that DOD lacked a timeframe and process for monitoring implementation of the DOD Cybersecurity Campaign objective to transition to commander-driven operational risk assessments for cybersecurity readiness. We recommended that DOD (1) modify criteria for closing tasks as implemented and reevaluate tasks previously determined to be completed to ensure they meet modified criteria and (2) establish a timeframe and monitor implementation of the DOD Cybersecurity Campaign objective to develop cybersecurity readiness assessments to help ensure accountability. DOD partially concurred with both recommendations.
As of January 2020, neither recommendation had been implemented. DOD Has Not Fully Implemented Key Cyber Hygiene Initiatives and Does Not Know the Extent of Protection DOD has not fully implemented its three cyber hygiene initiatives. Specifically, (1) the DOD CIO and DOD components have not implemented seven of the 11 DC3I tasks due in fiscal year 2016; (2) DOD has implemented six of 10 CDIP tasks that the DOD CIO oversees and does not know the extent to which seven other CDIP tasks are implemented; and (3) DOD did not know the extent to which users at selected components completed the Cyber Awareness Challenge training in 2018, and one component did not use the required training. In addition, the department does not know the extent to which cyber hygiene practices to protect its networks from key cyberattack techniques have been implemented. DOD Has Not Implemented Seven of the 11 DC3I Tasks Due in Fiscal Year 2016 DOD has not implemented seven of the 11 DC3I tasks, even though the department established fiscal year 2016 deadlines for each of them. In particular, DOD components have implemented four DC3I tasks and have not implemented the seven remaining tasks, as shown in figure 1. As shown above, DOD has implemented four DC3I tasks. For example, DOD CIO implemented a task that requires that office to assess the effect of cyber workforce shortfalls on DOD's mission and provide recommendations to address these shortfalls (task 10 in figure 1 above). Specifically, in April 2019, DOD CIO provided a plan to the Office of Personnel Management to address cyber workforce shortages by filling vacant positions, enhancing outreach and recruitment, and expanding on hiring authorities. However, DOD has not implemented the remaining seven DC3I tasks. For example: DOD has not fully implemented leadership cybersecurity training briefs (task 1). In April 2016, U.S. Cyber Command developed two training briefs to be used in leadership training. However, as of October 2019, DOD components had not received either training brief, according to DOD officials. In September 2016, U.S. Cyber Command provided the Deputy Secretary of Defense a DC3I status report and informed him that two products were developed to address this task and that they would be disseminated to DOD components. However, as of October 2019, neither U.S. Cyber Command nor the Office of the DOD CIO had disseminated these leadership training briefs across the department, according to DOD officials. In reviewing the training briefs, we found that, if they had been incorporated into DOD leadership training, leaders would have been better positioned to address cybersecurity risks. For example, they may have learned, among other things, how to understand, assess, and interpret cyber-reportable events and incidents and how they affect military operations. DOD has not developed cyber-provider training (task 2). In February 2019, the office of the DOD CIO completed a review of all military and civilian IT positions to identify the work roles of all cyber providers in the department. However, the office has not developed educational and training requirements for cyber providers. DOD CIO officials told us that, consistent with task 2, they are drafting a DOD Manual, Cyber Workforce Qualification and Management Program, which would document educational and training requirements for the work roles of each cyber provider. DOD CIO officials expect to complete the manual around April 2020.
DOD has not fully implemented criteria for assessing cybersecurity in operational training and exercises (task 5). In March 2016, the Joint Staff developed criteria for assessing military service and combatant command efforts to integrate cybersecurity into operational training and exercises. For example, the Joint Staff developed a checklist of cybersecurity elements that should be included in cyberspace-related training objectives and assessed during training events. In May 2016, the Vice Chairman of the Joint Chiefs of Staff required that the criteria be used to assess military service and combatant command efforts to integrate cybersecurity into operational training and exercises. In May 2019, Joint Chiefs of Staff officials told us the criteria were not incorporated into the Chairman's annual training guidance, citing personnel turnover, and that they do not have plans to incorporate the criteria. According to the DC3I, operational and tactical commanders and leaders need to interpret the effect that cyber insecurity may have on the mission and integrate cyber effects into mission planning. If the Joint Staff had updated the Chairman of the Joint Chiefs of Staff guidance for operational training, DOD commanders would have had criteria they could use to assess the effect that cyber insecurity may have on military missions. The lack of progress in implementing the tasks occurred, in part, because the DOD CIO did not take steps to ensure that the DC3I tasks were implemented. DOD CIO officials told us they were not aware of their responsibility to oversee implementation of the DC3I. Initially, U.S. Cyber Command was assigned as the entity responsible for overseeing implementation of the DC3I; however, in December 2016, the Deputy Secretary of Defense approved the transition of the DC3I mission lead to the department's CIO. According to this transition memorandum, the CIO was to leverage existing authorities and departmental efforts to lead and provide oversight of cybersecurity culture and compliance transformation. Additionally, DOD CIO officials told us that the office is focusing its resources on other CIO initiatives, such as implementing the cyber landscape initiative. However, the DC3I included a task (task 11 in figure 1 above) that required an assessment of the resources needed to ensure that DOD implemented the DC3I, and this task had not been completed at the time of our review. If DOD CIO does not take appropriate steps to ensure that the DC3I tasks are implemented, the department risks compromising the confidentiality, integrity, and availability of mission-critical information as a result of human error by users on the department's networks. DOD Has Implemented Six of 10 CDIP Tasks That the DOD CIO Oversees and Does Not Know the Extent That Seven Other CDIP Tasks Have Been Implemented DOD Has Implemented Six of 10 CDIP Tasks That the DOD CIO Oversees Since 2015, DOD has implemented six of 10 CDIP tasks that the DOD CIO is to oversee, but it has not achieved desired performance targets for the remaining four tasks, even though the department was required to implement all 10 by the end of fiscal year 2018. In the 2015 CDIP memorandum, the Deputy Secretary of Defense directed DOD components to implement all 17 CDIP tasks for all system users, IT hardware, and IT software to remove preventable vulnerabilities from DOD's network that could allow adversaries to compromise information and information systems.
According to a March 2019 memorandum, the Deputy Secretary of Defense challenged the department to achieve 90 percent implementation of the 10 CDIP tasks overseen by DOD CIO by the end of fiscal year 2018. In table 1, we list the 17 tasks and indicate the 10 tasks that the department's CIO oversees. The department has achieved its performance targets for six of the 10 CDIP tasks that the DOD CIO oversees. For example, in October 2018 DOD achieved its performance target for one task that requires the department to move all of DOD's web servers into a DOD "demilitarized zone," or DMZ, according to DOD's fiscal year 2018 Federal Information Security Modernization Act report to the director of the Office of Management and Budget. Placing these web servers in a DMZ directs web traffic intended for those servers—including malicious traffic—to systems within perimeter firewalls that screen the traffic before allowing access to organizations' networks. By implementing the task and moving 11,000 web servers into the DMZ, DOD has reduced the risk that malicious traffic can reach its web servers. However, the department has not achieved the department-wide goal for the four remaining CDIP tasks overseen by DOD CIO. For example, DOD did not achieve its performance target for a task that required components to ensure they were compliant with endpoint security guidance. DOD CIO officials told us that achieving the 90 percent performance target for the remaining four CDIP tasks is challenging because some DOD components use aging information technology systems, and these older systems may not be equipped to implement all CDIP tasks. We have previously reported that legacy systems have operated with known cybersecurity vulnerabilities that are either technically difficult or prohibitively expensive to address. In light of the security risks posed by DOD component legacy systems, we stated that it is imperative that agencies carefully plan for their successful modernization. DOD did not achieve the 90 percent goal for four of the 10 CDIP tasks by the end of fiscal year 2018 in part because DOD components did not develop plans with scheduled completion dates to implement these four tasks, according to DOD officials. DOD CIO officials told us that they had not required DOD components to develop plans with scheduled completion dates for the remaining four CDIP tasks. CIO officials believed that the DOD components would implement the CDIP memorandum since it was signed by the Deputy Secretary of Defense and it required them to report on their progress in implementing the CDIP tasks. While the Deputy Secretary of Defense did require DOD components to implement these four tasks and report on their progress, components have not achieved performance targets. If DOD components do not develop plans with scheduled completion dates to implement the remaining four CDIP tasks, the department may fail to remove preventable, well-known vulnerabilities from its network and may allow adversaries to compromise the confidentiality, integrity, or availability of sensitive information and information systems. DOD Does Not Know the Extent that Seven CDIP Tasks Have Been Implemented DOD does not know the extent to which components have implemented the seven CDIP tasks that the CIO does not oversee because the responsible components have not reported on their progress, according to DOD officials.
For example, DOD has not reported on the extent to which components have disabled hyperlinks to websites that users receive in email messages. Disabling hyperlinks in email messages can help to prevent phishing attacks. DISA officials told us that the agency implemented a security protocol that disables these hyperlinks in DISA's email server. Consequently, DOD components that use DISA's email service are compliant with this task's requirement; however, not all DOD components use DISA's email service, and the extent to which other email services comply with this task is unknown. The CDIP memorandum signed by the Deputy Secretary of Defense stated that the department's progress in implementing all CDIP tasks would be reported. However, the department has not reported on the progress it has made implementing the seven CDIP tasks that the CIO does not oversee, in part because the Deputy Secretary of Defense did not identify, in the CDIP memorandum, a component to oversee the implementation of these tasks and report on their progress. According to DOD CIO officials, some of these seven tasks are more tactical and may be more appropriately tracked at echelons below the office of the DOD CIO. For example, one of these seven tasks requires that commanders ensure the physical security of their network infrastructure devices. We agree that lower echelons may more effectively track the progress of some tasks; however, information about the progress that components make implementing these tasks is not reported to the CIO or any other DOD component, according to DOD officials. In addition, DOD CIO officials told us that JFHQ-DODIN collects some information from inspections it performs to verify the extent to which inspected units implement technical guidance documents, some of which relate to these seven CDIP tasks. However, according to DOD officials, JFHQ-DODIN does not report this information to the CIO or any other DOD component. In addition, JFHQ-DODIN inspects a sample of DOD units and therefore does not have information about the status of these tasks across the department. For those units that are inspected, no DOD component is aggregating data from these inspections to identify the extent to which these seven tasks are implemented. If the Deputy Secretary of Defense does not identify a DOD component to oversee the implementation of the seven CDIP tasks that DOD CIO does not oversee and report on progress implementing them, the department will have less assurance that cybersecurity vulnerabilities are being addressed in a timely manner and systems are being securely configured. DOD Has Not Fully Implemented Its Cyber Awareness Challenge Training Initiative Selected DOD Components Did Not Know the Extent to Which Their Users Implemented the 2018 Cyber Awareness Challenge Training The 16 selected components we included in our sample did not always collect information on the number of users who (1) completed the fiscal year 2018 Cyber Awareness Challenge training, (2) did not complete the training, and (3) had their network access revoked for not completing the cyber awareness training. Specifically: Unknown number of users that completed the cyber awareness training. Two of the 16 components did not collect information on the number of users that completed the fiscal year 2018 Cyber Awareness Challenge training. In particular, the Army and the Defense Finance and Accounting Service could not provide data on the extent to which users had taken the required training in fiscal year 2018.
Unknown number of users that did not complete the cyber awareness training. Six of the 16 components did not collect information on the number of users that did not complete the cyber awareness training. In particular, the Navy, Air Force, Marine Corps, U.S. European Command, and the Defense Media Activity did not collect information on the users who did not complete the training in fiscal year 2018. In addition, the Army's training compliance system did not have records for all Army users in 2018, which limited the Army's ability to determine if all of its users completed the fiscal year 2018 Cyber Awareness Challenge training. Unknown number of users whose network access had been revoked for not completing the required training. Eight of the 16 components that we contacted did not collect data on the number of users whose network access had been revoked for not completing the required training—a consequence implied by DOD policy. Selected DOD components did not know the extent to which their network users completed the 2018 Cyber Awareness Challenge training because the DOD component heads did not ensure that their respective components were accurately monitoring and reporting the necessary information. Navy officials told us that they believed it was not DOD or the military service's policy for the service headquarters to track whether their network users had completed the training. According to Navy officials, there is also no value for large organizations like the Navy, with over 600,000 users, to track and report these data at the headquarters level. However, DOD policy requires all network users to take the Cyber Awareness Challenge training annually. In addition, DOD policy states that all individuals with network access must complete this training to retain access. NIST also advises that agencies capture training compliance data at an agency level, so the data can be used to conduct agency-wide analysis and reporting. Multiple DOD policy and guidance documents—including DOD Manual 8570.01-M and Chairman of the Joint Chiefs of Staff Instruction 6510.01F—state that the DOD component heads are responsible for ensuring that users complete the Cyber Awareness Challenge training, and two of these documents require recording training compliance. For example, according to DOD Manual 8570.01-M, Information Assurance (IA) Workforce Improvement Program, components must document and maintain the status of awareness compliance for each user. Further, service policy and guidance place the responsibility on the DOD component heads or senior-level leaders at the headquarters level for ensuring that cybersecurity training is completed and documented. For example, Secretary of Navy Instruction 5239.3C, Department of Navy Cybersecurity Policy (May 2, 2016), states that the Chief of Naval Operations and the Commandant of the U.S. Marine Corps shall ensure all authorized users of Department of Navy information systems and networks receive initial cybersecurity awareness orientation as a condition of access and, thereafter, complete annual refresher training, monitor and report workforce cybersecurity training, and maintain supporting records. Similarly, Army Regulation 25-2, Army Cybersecurity (Apr. 4, 2019), states that the Deputy Chief of Staff, G3/5/7 is responsible for ensuring that cybersecurity training is integrated and conducted throughout the Army.
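The agency-level tracking that NIST advises amounts to rolling per-user completion records up to component-level figures. The following minimal sketch illustrates such a roll-up; the record format and component names are hypothetical assumptions for illustration and do not represent any actual DOD training-compliance system.

from collections import defaultdict

# Hypothetical per-user records: (component, completed_training, access_revoked).
# No actual DOD system or data format is represented here.
records = [
    ("Component A", True, False),
    ("Component A", False, True),
    ("Component B", True, False),
    ("Component B", False, False),  # noncompliant, but access not revoked
]

def summarize(records):
    """Roll per-user records up to component-level compliance figures."""
    totals = defaultdict(lambda: {"users": 0, "completed": 0, "revoked": 0})
    for component, completed, revoked in records:
        totals[component]["users"] += 1
        totals[component]["completed"] += completed
        totals[component]["revoked"] += revoked
    return totals

for component, t in sorted(summarize(records).items()):
    rate = 100 * t["completed"] / t["users"]
    print(f"{component}: {rate:.0f}% completed, {t['revoked']} access revocation(s)")

A roll-up of this kind would also yield the revocation counts discussed above, giving leaders a direct measure of whether the completion requirement is being enforced.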
If the DOD component heads do not ensure that their respective components accurately monitor and report information on the extent to which users have completed the Cyber Awareness Challenge training—as well as have access revoked for not completing the training—the components may be unable to ensure that DOD users are trained in the steps needed to address cybersecurity threats to the department. DARPA Has Not Required Its Users to Take DOD's Cyber Awareness Challenge Training One of the 16 selected components in our review—the Defense Advanced Research Projects Agency (DARPA)—did not require its users to take DOD's Cyber Awareness Challenge training, according to DARPA officials, even though it is required by policy. Instead, DARPA has required its users to take cybersecurity training that it developed. While DARPA developed its own training program, we found that this training program did not address all of the requirements identified in a DOD staff manual or the cybersecurity training topics identified by the Cyber Workforce Advisory Group. DARPA officials recognized that its cybersecurity training was not equivalent to DOD's Cyber Awareness Challenge training program, which, according to DOD CIO officials, addressed the training topics identified by the DOD Cyber Workforce Advisory Group. They explained that DARPA designs its courses to be concise to allow its personnel to focus on accomplishing the agency's mission and that users can obtain additional information from references cited in the course materials. In addition, these officials told us that they were unaware their users were required to take the Cyber Awareness Challenge training that DISA developed. The DOD CIO is responsible for overseeing the implementation of the Cyber Awareness Challenge training, according to DOD CIO officials. However, DOD CIO officials told us they were not aware that DARPA has not required its users to take the Cyber Awareness Challenge training that DISA developed, and they did not assess the extent to which components complied with the requirement to use the DISA-developed training. If the DOD CIO does not ensure that DARPA and any other DOD components take the Cyber Awareness Challenge training developed by DISA, users in these components may take actions that lead to or enable exploitations of DOD information systems. DOD Does Not Know the Extent that Cyber Hygiene Practices Have Been Implemented to Protect DOD Networks from Key Cyberattack Techniques DOD has identified the key techniques that adversaries use most frequently and that pose significant risk to the department's networks, and it has identified cyber hygiene practices to protect the department's networks from these techniques. Specifically, JFHQ-DODIN has identified the cyberattack techniques that the agency observes adversaries using most frequently to attack the department's networks. In addition, the National Security Agency, the Defense Information Systems Agency, and the DOD CIO identified 177 cyberattack techniques and prioritized the techniques according to the level of risk each posed to the department's networks. The agencies prioritized the techniques using various criteria, including the prevalence of each technique and whether the department could detect its use. Further, the department has established cyber hygiene practices to mitigate most of the frequently occurring techniques and those that the department identified as the highest priority, according to DISA and JFHQ-DODIN officials.
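Prioritization on criteria like these can be thought of as a weighted risk score that rises with a technique's prevalence and with the difficulty of detecting it. The sketch below illustrates one such scheme; the weights, technique names, and values are hypothetical assumptions for illustration and do not reflect the department's actual analysis of the 177 techniques.

# Illustrative weighted risk scoring of cyberattack techniques.
# Weights, technique names, and scores are hypothetical assumptions;
# this is not DOD's actual prioritization method.
W_PREVALENCE, W_DETECTION_GAP = 0.6, 0.4  # assumed weights

def risk_score(prevalence, detectability):
    """Score rises with how often a technique is observed (prevalence, 0-1)
    and with how hard it is to detect (1 - detectability)."""
    return W_PREVALENCE * prevalence + W_DETECTION_GAP * (1.0 - detectability)

techniques = {  # name: (prevalence, detectability), both on a 0-1 scale
    "spear phishing": (0.9, 0.7),
    "credential harvesting": (0.6, 0.9),
    "supply-chain compromise": (0.3, 0.05),
}

ranked = sorted(techniques.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (prevalence, detectability) in ranked:
    print(f"{name}: {risk_score(prevalence, detectability):.2f}")

Under such a scheme, a rarely seen but hard-to-detect technique (here, the hypothetical supply-chain compromise) can outrank a more common but easily detected one, which is consistent with weighing detectability alongside prevalence.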
However, the department does not know the extent to which these cyber hygiene practices have been implemented across the department to protect its networks from these key cyberattack techniques. Components have visibility into the extent to which they have implemented practices within their component, according to DOD officials. For example, DISA officials told us that they require their component to implement cyber hygiene practices to protect DOD networks from key cyberattack techniques and are able to determine the extent to which those practices are implemented within DISA. However, no component or office within the department has complete visibility into the department's efforts to implement these protective practices across the department, according to DOD officials. FISMA states that agency heads shall be responsible for, among other things, providing information security protections commensurate with the risk and magnitude of harm that could result from unauthorized access, use, disclosure, disruption, modification, or destruction of such information systems. Executive Order 13800 states that agency heads will be held accountable for managing cybersecurity risk to their enterprises. The order requires agency heads to use NIST's Framework for Improving Critical Infrastructure Cybersecurity (commonly referred to as the NIST Cybersecurity Framework) to manage their agency's cybersecurity risk. The Cybersecurity Framework calls for senior executives to monitor cybersecurity risk in the same context as financial risk and other organizational risks. In doing so, the Cybersecurity Framework calls for agencies to, among other things, assess cybersecurity risks (including threats), prioritize cybersecurity outcomes and requirements based on that risk, and establish processes to assess and monitor the implementation of the cybersecurity outcomes and requirements. The department does not know the extent to which practices to protect DOD networks from key cyberattack techniques have been implemented across the department, in part because no DOD component monitors the extent to which such practices are implemented, according to DOD officials. Officials from JFHQ-DODIN told us that they are able to detect when adversaries are using techniques to attack the department's networks. However, by the time an attack is detected, an adversary may already have inflicted harm on the department's networks and the information therein. If the Secretary of Defense does not direct a component to monitor the extent to which practices to protect its network are implemented, gaps in protection could go undetected. These gaps can jeopardize military operations, performance of critical functions, and protection of information within DOD systems and networks. Senior DOD Leaders Have Not Received Information on Two Cyber Hygiene Initiatives or Cyber Hygiene Practices DOD requirements and best practices recognize that senior DOD leaders need key information to make risk-based decisions. Specifically, the DC3I memorandum requires the Commander of U.S. Cyber Command, in coordination with the DOD CIO, to provide quarterly updates to the Deputy Secretary of Defense and the Vice Chairman of the Joint Chiefs of Staff on the progress in implementing the DC3I. Further, the CDIP memorandum requires the department to report progress implementing the CDIP tasks.
In addition, NIST Special Publication 800-50, Building an Information Technology Security Awareness and Training Program, states that the CIO should ensure that agency heads and senior managers are informed of the progress of the security awareness and training program's implementation. Senior DOD leaders receive two recurring reports on the department's cybersecurity posture that include information on one cyber hygiene initiative. Specifically, the Cyber Hygiene Scorecard (Scorecard) is a report measuring compliance with DOD cybersecurity policies, procedures, standards, and guidelines. The Scorecard provides information to the Secretary of Defense, the Deputy Secretary of Defense, and DOD component heads about the extent to which the 10 CDIP tasks overseen by the DOD CIO are implemented. In addition, the Cyber Landscape Report is a quarterly report that includes information highlighting cybersecurity risks to DOD networks, U.S. critical infrastructure, DOD weapon systems, the cloud, and DOD's cyber workforce. Based on our analysis, the Cyber Landscape Report also includes some information from the CDIP initiative. However, senior DOD leaders have not received information on the other two cyber hygiene initiatives or on cyber hygiene practices to protect DOD networks from key cyberattack techniques in these recurring reports. Specifically, neither the Scorecard nor the Cyber Landscape Report includes information on the extent to which the DC3I and the Cyber Awareness Challenge training have been implemented. In addition, neither of these recurring reports identifies the key cyberattack techniques the department faces, nor does either include information on the extent to which the department has implemented cyber hygiene practices to protect DOD networks from these techniques, according to DOD officials. Senior DOD leaders are not receiving complete information in part because the DOD CIO has not assessed the extent to which the missing information could improve senior leaders' ability to make risk-based decisions. According to DOD officials, DOD CIO has not revised the recurring reports or developed a new report in response to such an assessment. DOD CIO officials told us that they do not believe that senior DOD leaders need to be made aware of all cyber hygiene topics we describe here—and in some cases that information could be managed at lower echelons within the organization. While some cyber hygiene information could be managed by lower-echelon DOD leaders, the DC3I memorandum requires information about its progress to be reported to senior leaders. The NIST guidance calls for similar reporting. Additionally, a DOD official told us that the department uses the Cyber Hygiene Scorecard to respond to the department's requirement to annually report progress on implementing its information security program to the Office of Management and Budget under FISMA. Further, these officials told us that the Scorecard was not originally designed to include the information from our analysis, such as information about the DC3I. They told us that the Scorecard was designed to provide an oversight tool to monitor the progress components made implementing the CDIP tasks overseen by DOD CIO. However, while DOD uses the Scorecard to meet the FISMA annual reporting requirement, the Scorecard does not provide information about 53 of the 69 risk-management FISMA indicators called for by the Office of Management and Budget—that is, it covers only 16 of the 69.
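Measuring a reporting gap of this kind is a straightforward set comparison once the required and reported indicators are enumerated. A minimal sketch follows; the indicator identifiers are hypothetical placeholders, not OMB's actual FISMA indicator names.

# Illustrative coverage check of a scorecard against required indicators.
# Indicator identifiers are hypothetical placeholders, not OMB's actual
# FISMA risk-management indicators.
required = {f"indicator_{i:02d}" for i in range(1, 70)}   # 69 required indicators
reported = {f"indicator_{i:02d}" for i in range(1, 17)}   # 16 covered by the scorecard

missing = required - reported
coverage = 100 * len(reported & required) / len(required)
print(f"Coverage: {coverage:.0f}% ({len(required) - len(missing)} of {len(required)})")
print(f"Missing: {len(missing)} indicators")  # 53 in this illustration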
In addition, DOD CIO is not precluded from revising the Scorecard to include additional information. As one of two recurring reports sent to senior DOD leaders, the Cyber Hygiene Scorecard may be well positioned to provide additional information reflecting progress made implementing cyber hygiene initiatives and associated cybersecurity practices, including the DC3I and efforts to protect DOD networks from the key cyberattack techniques used by adversaries. Further, a DOD CIO official told us that the office did not include information about the DC3I in the Cyber Hygiene Scorecard because officials believed it would be challenging to measure the culture-related objectives in the DC3I. While the DC3I's culture-related objectives may be difficult to measure, the extent to which assigned DOD components have taken actions to implement the DC3I tasks is measurable. If the DOD CIO does not assess the extent to which the missing information could improve senior leaders' ability to make risk-based decisions—and does not follow up to revise the recurring reports or develop a new report—senior DOD leaders will not be well positioned to make effective and risk-based decisions and manage cybersecurity risks. Conclusions As DOD has become increasingly reliant on IT systems and networks to conduct military operations and perform critical functions, risks to these systems and networks have also increased because IT systems are often riddled with cybersecurity vulnerabilities—both known and unknown. These vulnerabilities and human error can facilitate security incidents and cyberattacks that disrupt critical operations; lead to inappropriate access to and disclosure, modification, or destruction of sensitive information; and threaten national security. DOD has taken actions to address cyber vulnerabilities in the department through establishing the DC3I, the CDIP, the Cyber Awareness Challenge training, and cyber hygiene practices to protect its networks from cyberattack techniques that adversaries may use. However, the department faces challenges implementing the DC3I and CDIP because the DOD CIO has not taken appropriate steps to ensure that the DC3I tasks are implemented, DOD components have not developed plans with scheduled completion dates to implement the remaining four CDIP tasks overseen by DOD CIO, and the Deputy Secretary of Defense has not identified a DOD component to oversee the implementation of the seven other CDIP tasks and report on progress implementing them. By improving oversight—implementing the DC3I tasks, having DOD components develop plans with scheduled completion dates for the remaining four CDIP tasks that the DOD CIO oversees, and identifying a DOD component to oversee implementation of the seven other CDIP tasks and report on progress implementing them—the department can be better positioned to safeguard DOD's network by removing preventable, well-known vulnerabilities. Addressing the gaps we identified in the extent to which components account for whether their users completed the 2018 Cyber Awareness Challenge training will help the department gain assurance that its workforce is prepared to identify and appropriately respond to cybersecurity risks. Additionally, by ensuring that DARPA, and any other similar DOD components, require their users to take the required DISA-developed training, DOD users may be more aware of threats and vulnerabilities to the department's networks and may be better equipped to prevent exploitations of DOD information systems.
The department does not know the extent that cyber hygiene practices have been implemented to protect DOD networks from key cyberattack techniques. By directing a component to monitor the extent to which practices to protect DOD's networks are implemented, DOD would be better positioned to ensure that its networks are secure and decrease potential risks to military operations, critical functions, and information assurance. Finally, the lack of information on two cyber hygiene initiatives and cyber hygiene practices in recurring reports provided to senior DOD leaders is concerning because of the need for those leaders to have a complete picture of the state of the department's cybersecurity posture. By directing DOD CIO to assess the extent that the missing information could improve senior leaders' ability to make risk-based decisions and revise the recurring reports or develop a new report, DOD leaders would be better positioned to make effective decisions and manage cybersecurity risks.

Recommendations for Executive Action

We are making seven recommendations to the Department of Defense.

The Secretary of Defense should ensure that the DOD CIO takes appropriate steps to ensure implementation of the DC3I tasks. (Recommendation 1)

The Secretary of Defense should ensure that DOD components develop plans with scheduled completion dates to implement the four remaining CDIP tasks overseen by DOD CIO. (Recommendation 2)

The Secretary of Defense should ensure that the Deputy Secretary of Defense identifies a DOD component to oversee the implementation of the seven CDIP tasks not overseen by DOD CIO and report on progress implementing them. (Recommendation 3)

The Secretary of Defense should ensure that DOD components accurately monitor and report information on the extent that users have completed the Cyber Awareness Challenge training as well as the number of users whose access to the network was revoked because they have not completed the training. (Recommendation 4)

The Secretary of Defense should ensure that the DOD CIO ensures all DOD components, including DARPA, require their users to take the Cyber Awareness Challenge training developed by DISA. (Recommendation 5)

The Secretary of Defense should direct a component to monitor the extent to which practices are implemented to protect the department's network from key cyberattack techniques. (Recommendation 6)

The Secretary of Defense should ensure that the DOD CIO assesses the extent to which senior leaders have more complete information to make risk-based decisions—and revises the recurring reports (or develops a new report) accordingly. Such information could include DOD's progress on implementing (a) cybersecurity practices identified in cyber hygiene initiatives and (b) cyber hygiene practices to protect DOD networks from key cyberattack techniques. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a draft of this report to the department for review and comment. In written comments, reprinted in appendix III, DOD concurred with one of our seven recommendations, partially concurred with four, and did not concur with the remaining two. DOD separately provided technical comments, which we incorporated as appropriate. The department concurred with our recommendation (Recommendation 5) that the DOD CIO ensure all components, including DARPA, require their users to take the Cyber Awareness Challenge training developed by DISA. The department partially concurred with four of our recommendations.
The department partially concurred with our recommendation that the DOD CIO take steps to ensure that DC3I tasks are implemented. The department concurred that tasks two and six in the DC3I should be implemented and stated that these two tasks are the only two still actively being pursued. The department stated that the remaining five tasks were either implemented or have been overcome by events. However, the department did not provide evidence—during the audit or in its comments on a draft of our report—that the other five tasks were implemented, nor did it demonstrate how these tasks were overcome by events. In addition, JFHQ-DODIN officials stated that the principles outlined in the DC3I are important for the department to achieve its cybersecurity goals. For example, several of these five tasks were focused on improving cybersecurity awareness and training at all levels within the department. Therefore, it is unclear why DOD believes that these cyber hygiene tasks have been overcome by events; DOD did not elaborate. Implementing all seven DC3I tasks that have not been implemented could better position the department to achieve the goals of the DC3I to (1) mitigate the risks of compromising the confidentiality, integrity, and availability of mission-critical information as a result of human error by users on the department's networks; and (2) transform DOD cybersecurity culture by enabling and reshaping leaders, cyber providers, personnel who perform cyberspace operations, and general users to improve individual human performance and accountability on DOD's network.

The department partially concurred with our recommendation that DOD components develop plans with scheduled completion dates to implement the four remaining CDIP tasks overseen by DOD CIO. DOD provided classified comments on this recommendation. Thus, we cannot respond in detail to its comments here. We plan to respond to DOD's comments in a classified version of this report, which we plan to issue later in 2020. Developing plans that would facilitate implementation of these four CDIP tasks would better position DOD to meet the Deputy Secretary of Defense's goal of removing preventable vulnerabilities from DOD's network that could allow adversaries to compromise information and information systems.

The department partially concurred with our recommendation that components accurately monitor and report information on the extent that users have completed the Cyber Awareness Challenge training and information on the number who have been denied access to the network for not completing the training. The department concurred that it should ensure components accurately report the number of users who have completed the training. However, it did not concur that components should report the number of users who have been denied access to the network because they have not completed the training. The department stated that a statistic showing this information would not be meaningful and would be burdensome to collect. We disagree that such a measure would not be meaningful because it would help leaders hold network users accountable and better position DOD components to comply with DOD policy. Recognizing that trained and aware users are the first and most vital line of defense, DOD components should document and maintain the status of awareness compliance for each user. Under its current approach, DOD is unable to confirm whether all of its network users have completed the cybersecurity training, as required.
For example, as stated above, 8 of the 16 (50 percent) DOD components we requested training information from told us they did not monitor whether users who did not complete the annual training were blocked from DOD networks and systems. If the Secretary of Defense does not ensure that DOD components accurately monitor and report information on the number of users whose access to the network was revoked because they have not completed the training, the components will jeopardize the department's ability to ensure that DOD users are trained in the steps needed to address cybersecurity threats to the department.

In responding to this recommendation, DOD also stated that the Navy indicated that it provided us data on the number of its users who completed the training and the total number of its users. The department stated that we could compute the number of Navy users who had not completed the training by computing the difference between the total number of users and the number of users who completed the training. We updated our assessment of the Navy in our report. We now indicate that the Navy was able to identify the number of users who had completed the training in fiscal year 2018. However, we disagree that the difference between the total number of users and the number of users who completed the training equates to the number of users who did not take the training. DOD CIO officials told us during our audit that computing the number of users using this method is not reliable because there are multiple explanations for the difference between the total number of users and the number of users who took the training. For example, officials told us that some military users leave the service before they complete the annually required training and are included in the service's total number of users but are not included in the number of users who took the training.

The department partially concurred with our recommendation that the CIO assess the extent to which senior leaders have information to make risk-based decisions and then revise the recurring reports accordingly. The department stated that it will revise the recurring reports by merging the Cyber Hygiene Scorecard and a scorecard related to the Cyber Landscape to assist senior leaders' decision-making. However, the department stated that it did not fully agree with the recommendation because, as written in the draft report, the department believed the recommendation was stating that DOD should have "complete" information. Based on DOD's comment, we clarified the recommendation to state that senior DOD leaders should have more complete information to make risk-based decisions. We believe this is critical because the cyber hygiene tasks and practices highlighted in the report were identified by the most senior leaders in the department—including the Secretary of Defense, Deputy Secretary of Defense, and Chairman of the Joint Chiefs of Staff—as being the tasks and practices essential to protecting DOD information, systems, and networks from the most common and pervasive cybersecurity risks faced by the department. The department also stated that risk is a function of multiple variables that are continually evolving. We agree that risk is a function of multiple variables—including threats and vulnerabilities—that are continually evolving.
As such, we think that information, such as the extent to which cyber hygiene practices have been implemented across the department to protect its networks from evolving key cyberattack techniques, will position senior leaders to make more effective, risk-based decisions and manage cybersecurity risks.

The department did not concur with two recommendations. In particular:

DOD did not concur with our recommendation that the Deputy Secretary of Defense identify a component to oversee the implementation of the seven CDIP tasks that the CIO does not oversee and report on progress implementing those tasks. The department stated that, since the CDIP's approval in 2015, the department has issued new or updated versions of a number of cyber-related strategies, including the DOD Cyber Strategy. The department also stated that the Deputy Secretary of Defense directed DOD to develop a classified top 10 list of cybersecurity critical-risk areas and an associated scorecard that provides the Deputy Secretary a quarterly assessment of the department's progress in reducing the risk for each of these areas. The department also stated that the cyber landscape is constantly evolving with changes in technology, threats, and vulnerabilities, and that this requires DOD to reassess its cybersecurity priorities. The department stated that implementing our recommendation would override these recent efforts and focus the department's efforts on monitoring areas with lower levels of risk.

We disagree that implementing our recommendation would override the department's recent efforts. In fact, implementing the seven tasks would align with one of the 2018 DOD Cyber Strategy's objectives to "secure DOD information and systems against malicious cyber activity." We agree with DOD that the department should reassess cybersecurity priorities in light of changes in technologies, threats, and vulnerabilities. However, DOD did not provide evidence during the audit or in responding to the draft report that the department had assessed the CDIP tasks required by the Deputy Secretary of Defense in 2015. Specifically, the department has not determined whether the tasks remain valid and aligned with the current cybersecurity threat environment, whether the vulnerabilities associated with these seven tasks were mitigated or addressed, or whether a senior-level DOD official provided written direction canceling the Deputy Secretary of Defense's CDIP taskings. More important, our analysis shows that the seven tasks on which DOD is not currently tracking progress are consistent with basic cybersecurity standards established by DOD guidance and NIST—standards that DOD is planning to apply to certain defense contractors in future contract awards to protect DOD information that is stored on or transits through their networks as a part of the Cybersecurity Maturity Model Certification framework. For example:

Task 14 requires commanders and supervisors to ensure physical security of their network infrastructure devices. This task aligns with general NIST guidance regarding physical access protections. NIST guidance states that organizations should manage and protect physical access to assets and facilities where information systems reside.

Task 15 requires commanders and supervisors to report all commercially provided internet connections to DOD's unclassified network. This task aligns with general NIST guidance regarding the use of external networks. NIST guidance states that organizations should catalogue all external information systems.
Task 16 requires commanders and supervisors to ensure alignment to a Computer Network Defense Service Provider. This task is consistent with DOD requirements on cybersecurity activities to protect the DOD Information Network. The requirements state that DOD IT must be aligned to DOD network operations and security centers, which provide any required cybersecurity services.

Task 17 requires commanders and supervisors with Computer Network Defense Service Provider responsibility to ensure the cyber incident response plan(s) are properly exercised and documented. This task aligns with general NIST guidance regarding incident response. NIST guidance states that organizations should provide incident response handling training and implement incident handling capabilities, as well as a process to ensure that response processes and procedures are executed and maintained so that the organization can respond to detected cybersecurity incidents.

If the Deputy Secretary of Defense does not implement this recommendation, the department will have less assurance that cybersecurity vulnerabilities are being addressed in a timely manner and systems are being securely configured.

The department did not concur with our recommendation that a component monitor the extent of implementation of practices to protect the department's network from key cyberattack techniques. The department determined that the information in its response to this recommendation included sensitive information. Therefore, we are redacting the department's response to this recommendation from DOD's written comments that we are reprinting in appendix III. However, we still believe the recommendation is valid. As stated in our report, no component or office within the department has complete visibility of the department's efforts to implement these protective practices across the department, according to DOD officials. Taking action to implement the intent of this recommendation would help address that gap.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; DOD's Chief Information Officer; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Chairman of the Joint Chiefs of Staff; the Commanding Generals of U.S. Strategic Command, U.S. European Command, U.S. Southern Command, and U.S. Cyber Command; and the Directors of DISA, the National Security Agency, DARPA, the Defense Commissary Agency, the Defense Contract Management Agency, the Defense Finance and Accounting Service, the Defense Media Activity, and the Defense Technology Security Administration. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact us: Joseph Kirschbaum at (202) 512-9971 or kirschbaumj@gao.gov, or Nick Marinos at (202) 512-9342 or marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

Appendix I: Scope and Methodology

For the purposes of this review, we adapted a definition of cyber hygiene developed by Carnegie Mellon University's Software Engineering Institute. The institute defines cyber hygiene as a set of practices for managing the most common and pervasive cybersecurity risks faced by organizations today.
We discussed the definition of cyber hygiene with Department of Defense (DOD) officials to identify DOD initiatives aimed at improving cyber hygiene. DOD officials identified the Cyber Discipline Implementation Plan (CDIP) as DOD's main cyber hygiene initiative aimed at implementing technical improvements to DOD networks. In addition, DOD officials identified the DOD Cybersecurity Culture and Compliance Initiative (DC3I) and DOD's Cyber Awareness Challenge training as two initiatives designed to establish best practices for DOD network users, including military personnel, civilians, and contractors.

To determine the extent to which DOD has implemented its three cyber hygiene initiatives and practices to protect its networks from cyberattack techniques that adversaries may use, we conducted analyses for each initiative. To determine the extent to which DOD implemented the DC3I, we reviewed the 11 tasks that require components to take actions that are specified in the DC3I memorandum that the Secretary of Defense and the Chairman of the Joint Chiefs of Staff issued in September 2015. We analyzed documentation we collected from U.S. Cyber Command, the office of the DOD Chief Information Officer (CIO), and the Joint Staff that demonstrates actions these components took in response to each of the 11 DC3I tasks, and we determined the extent to which each task was implemented.

To determine the extent to which DOD implemented the CDIP, we reviewed the 17 tasks that require components to take actions specified in a memorandum that the Deputy Secretary of Defense issued in October 2015. We interviewed officials from the office of the DOD CIO about the extent to which DOD components implemented the CDIP tasks, the reasons the components had not fully implemented all of the tasks, and the extent to which the DOD CIO knew whether DOD components had implemented the remaining seven CDIP tasks. We also reviewed documentation on the extent that DOD components implemented the tasks overseen by DOD CIO by analyzing data included in the Cyber Hygiene Scorecard. We also assessed the reliability of the data in the Scorecard by reviewing the methods the DOD CIO uses to ensure the data reported to the Scorecard are accurate and by interviewing cognizant officials. We determined the data are sufficiently reliable for our purposes.

To determine the extent that DOD implemented the Cyber Awareness Challenge training, we analyzed the extent that the DOD CIO and the DOD component CIOs ensured that personnel they oversee completed the fiscal year 2018 Cyber Awareness Challenge training. To carry out this analysis, we collected and analyzed information from the DOD CIO and a sample of 16 DOD components. We selected this sample of components by identifying important groupings of components and selecting from these groups to ensure that our sample represented a significant number of DOD personnel as well as a variety of types of components. These groups were the military services and the Joint Staff, combatant commands, agencies and field activities, and the Office of the Secretary of Defense.

Military services and Joint Staff. We selected the four military services because they are the components within DOD with the most personnel. We also included the Joint Staff because this component reflects the strategic perspective for the department as a whole.

Combatant commands. We randomly selected three combatant commands from the group of 11 combatant commands—including geographic (e.g., U.S.
Central Command) and functional (e.g., U.S. Transportation Command). We selected three of the 11 combatant commands to include the perspectives of multiple combatant commands in our sample. We selected these combatant commands: U.S. European Command, U.S. Southern Command, and U.S. Strategic Command.

Agencies and field activities. We assembled a list of non-service and non-combatant command components organized by the types of functions that each component performs. Specifically, we created functional groupings for the components that fall under each of the six Under Secretaries of Defense because these officials oversee components with similar functions. We also included a seventh functional group of miscellaneous components that are not overseen by any of the Under Secretaries of Defense. We then accounted for the size of the components on this list by identifying the larger agencies and the smaller field activities. From this list, we randomly selected one component from each of the seven groups. In doing so, we selected five of the 20 agencies and two of the eight field activities. We chose this ratio of agencies to field activities to reflect the ratio of agencies to field activities in DOD. That is, DOD agencies are about 71 percent of DOD's non-service and non-combatant command components and about 71 percent of our sample. (A simplified sketch of this selection approach follows this discussion.) We selected these five agencies: the Defense Advanced Research Projects Agency, the Defense Commissary Agency, the Defense Contract Management Agency, the Defense Finance and Accounting Service, and the National Security Agency. We selected these two field activities: the Defense Media Activity and the Defense Technology Security Administration.

The Office of the Secretary of Defense. We also randomly selected one of 16 components from the Office of the Secretary of Defense. This group included the offices that support the six Under Secretaries we discussed above, such as the Under Secretary of Defense for Policy, as well as other offices, including the Office of Cost Assessment and Program Evaluation and the Office of the DOD Chief Management Officer. We selected one component from this group to ensure we reflected the perspective of components at the DOD headquarters level. We selected the Office of the DOD Chief Information Officer.

To collect information from this sample of 16 components, we developed a standard set of questions that we provided to each component on topics related to both objectives. In particular, we asked DOD components to provide the number of network users that completed the fiscal year 2018 Cyber Awareness Challenge training, the number of network users that did not complete the training, and the number of network users who had their access to the network removed as a result of not taking the training. We also asked other questions, including a question about the information that senior leaders are provided regarding cyber hygiene practices. Each component provided written responses to our questions and in some cases provided documentation corroborating their responses. We conducted a content analysis of the components' responses and the documentation they provided. To complete this content analysis, two analysts independently assessed the components' responses, compared and discussed their separate analyses, and reached agreement on their conclusions.
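The selection logic described above can be summarized in a short sketch. This is an illustration only: the component names in the functional groups and the Office of the Secretary of Defense list are hypothetical placeholders (the report does not enumerate them), and the random draws stand in for the sampling we performed.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Fixed selections: the four military services plus the Joint Staff.
services_and_joint_staff = ["Army", "Navy", "Air Force", "Marine Corps", "Joint Staff"]

# Randomly select three of the 11 combatant commands (hypothetical labels).
combatant_commands = [f"Combatant Command {i}" for i in range(1, 12)]
selected_commands = random.sample(combatant_commands, 3)

# Agencies and field activities: one random pick from each of seven functional
# groups. As a simplification, five groups hold agencies and two hold field
# activities, which preserves the roughly 71 percent agency share (5 of 7)
# that mirrors the 20-of-28 ratio in DOD.
functional_groups = {f"Group {i}": [f"Agency {i}-{j}" for j in range(1, 4)] for i in range(1, 6)}
functional_groups["Group 6"] = ["Field Activity 6-1", "Field Activity 6-2"]
functional_groups["Group 7"] = ["Field Activity 7-1", "Field Activity 7-2"]
selected_components = [random.choice(members) for members in functional_groups.values()]

# One of the 16 Office of the Secretary of Defense components (hypothetical labels).
osd_offices = [f"OSD Office {i}" for i in range(1, 17)]
selected_osd = [random.choice(osd_offices)]

sample = services_and_joint_staff + selected_commands + selected_components + selected_osd
print(f"Sample of {len(sample)} components:")  # 5 + 3 + 7 + 1 = 16 components
for component in sample:
    print(" -", component)
```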
We compared the information we collected from these components to a provision in NIST Special Publication 800-50, Building an Information Technology Security Awareness and Training Program, which advises agencies to capture training compliance data at an agency level. Further, we interviewed officials from the Defense Information Systems Agency and Joint Force Headquarters DOD Information Network (JFHQ-DODIN) to determine the extent to which the department had implemented cyber hygiene practices to protect its networks from key cyberattack techniques that adversaries may use.

To determine the extent to which senior DOD leaders receive information on the department's efforts to address cyber hygiene initiatives and practices, we first defined senior DOD leaders as the Secretary of Defense, the Deputy Secretary of Defense, and DOD component heads. To identify the information that could be included in reports that senior DOD leaders receive about DOD efforts to mitigate cyberattack techniques, we identified techniques that are most likely to be used by adversaries against DOD's networks or that could cause severe adverse effects on DOD's operations. In particular, we identified 22 key cyberattack techniques from two sources:

JFHQ-DODIN provided a list of eight cyberattack techniques that the agency observed adversaries using most frequently in January 2019. JFHQ-DODIN officials also determined that these data are representative of the cyberattack techniques that they have recently observed.

We identified 14 cyberattack techniques by analyzing a review conducted in 2016 by the National Security Agency, the Defense Information Systems Agency, and the DOD CIO. In the review, the agencies identified 177 cyberattack techniques and ranked the techniques according to the level of risk the techniques posed to DOD's unclassified and Secret-level networks. The agencies used a number of different criteria to rank these techniques, including the prevalence of the technique, visibility of the technique, and whether other, closely associated alternative techniques exist. We selected the 14 cyberattack techniques that the agencies identified as the highest priority.

Next, we analyzed the contents of two recurring reports that senior leaders receive on the department's cybersecurity posture: the Cyber Hygiene Scorecard and the Cyber Landscape Report. In particular, we analyzed these reports to determine if they included information about DOD's implementation of key cyber hygiene initiatives that we describe in the first objective. We also analyzed the reports to determine if they included the lists of key cyberattack techniques and information about the extent that the department had implemented cyber hygiene practices to protect DOD networks from these cyberattack techniques.

We conducted this performance audit from January 2019 to April 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
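One measurement issue from our evaluation of agency comments—why subtracting training completions from a component's total user count does not reliably yield the number of noncompliant users—can be shown with a short sketch. All of the figures here are hypothetical; the point is only that mid-year personnel turnover inflates the naive difference.

```python
# All figures are hypothetical; the point is the direction of the bias.
total_users = 10_000          # users on a component's rolls during the year
completed_training = 9_200    # completions recorded in the training system
separated_midyear = 500       # users who left the service before completing

naive_noncompliant = total_users - completed_training
adjusted_noncompliant = naive_noncompliant - separated_midyear

print(f"Naive estimate (total minus completions): {naive_noncompliant}")
print(f"After excluding users who separated:      {adjusted_noncompliant}")
# The naive subtraction overstates noncompliance by the number of separated
# users, which is why per-user status tracking is more reliable than the
# difference between two aggregate counts.
```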
Appendix II: DOD Cybersecurity Culture and Compliance Initiative Tasks

The Department of Defense (DOD) Chief Information Officer (CIO) and other relevant DOD components implemented four of the 11 tasks required in the Cybersecurity Culture and Compliance Initiative (DC3I); the remaining seven tasks were not fully implemented as of October 2019. Table 2 provides additional information on actions taken to address and implement all 11 DC3I tasks.

Appendix III: Comments from the Department of Defense

Appendix IV: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the individuals named above, Tommy Baril (Assistant Director), Kaelin Kuhn (Assistant Director), James P. Klein (Analyst-in-Charge), Tracy Barnes, Amy Bush, Peter Casey, Amie Lesser, Carlo Mozo, Richard Powelson, Michael Silver, Andrew Stavisky, and Walter Vance made significant contributions to this report. Kiana Beshir, Chris Businsky, Shaun Byrnes, and Richard Sayoc also contributed to the report.
Why GAO Did This Study

DOD has become increasingly reliant on information technology (IT), and risks have increased as cybersecurity threats evolve. Cybersecurity experts estimate that 90 percent of cyberattacks could be defeated by implementing basic cyber hygiene and sharing best practices, according to DOD's Principal Cyber Advisor. Senate Report 115-262 includes a provision that GAO review DOD cyber hygiene. This report evaluates the extent to which (1) DOD has implemented key cyber hygiene initiatives and practices to protect DOD networks from key cyberattack techniques and (2) senior DOD leaders received information on the department's efforts to address these initiatives and cyber hygiene practices. GAO reviewed documentation of DOD actions taken to implement three cyber hygiene initiatives and reviewed recurring reports provided to senior DOD leaders.

What GAO Found

The Department of Defense (DOD) has not fully implemented three of its key initiatives and practices aimed at improving cyber hygiene. Carnegie Mellon University defines cyber hygiene as a set of practices for managing the most common and pervasive cybersecurity risks. In discussions with GAO, DOD officials identified three department-wide cyber hygiene initiatives: the 2015 DOD Cybersecurity Culture and Compliance Initiative, the 2015 DOD Cyber Discipline Implementation Plan, and DOD's Cyber Awareness Challenge training.

The Culture and Compliance Initiative set forth 11 overall tasks expected to be completed in fiscal year 2016, including cyber education and training, integration of cyber into operational exercises, and recommendations on needed changes to cyber capabilities and authorities. However, seven of these tasks have not been fully implemented.

The Cyber Discipline plan has 17 tasks focused on removing preventable vulnerabilities from DOD's networks that could otherwise enable adversaries to compromise information and systems. Of these 17, the DOD Chief Information Officer is responsible for overseeing implementation of 10 tasks. While the Deputy Secretary set a goal of achieving 90 percent implementation of the 10 CIO tasks by the end of fiscal year 2018, four of the tasks have not been implemented. Further, the completion of the other seven tasks was unknown because no DOD entity has been designated to report on the progress.

The Cyber Awareness training is intended to help the DOD workforce maintain awareness of known and emerging cyber threats and to reinforce best practices to keep information and systems secure. However, selected components in the department do not know the extent to which users of their systems have completed this required training. GAO's review of 16 selected components identified six without information on system users that had not completed the required training, and eight without information on users whose network access had been revoked for not completing training.

Beyond the initiatives above, DOD has (1) developed lists of the techniques that adversaries use most frequently and that pose significant risk to the department, and (2) identified practices to protect DOD networks and systems against these techniques. However, the department does not know the extent to which these practices have been implemented. The absence of this knowledge is due in part to the fact that no DOD component monitors implementation, according to DOD officials.
Overall, until DOD completes its cyber hygiene initiatives and ensures that cyber practices are implemented, the department will face an enhanced risk of successful attack. While two recurring reports have provided senior DOD leaders with updates on implementation of the Cyber Discipline plan, department leadership has not regularly received information on the other two initiatives or on the extent to which cyber hygiene practices are being implemented. Such information would better position leaders to be aware of the cyber risks facing DOD and make more effective decisions to manage such risks.

What GAO Recommends

GAO is making seven recommendations to DOD, including that cyber hygiene initiatives be fully implemented, that entities be designated to monitor component completion of tasks and cyber hygiene practices, and that senior DOD leaders receive information on cyber hygiene initiatives and practices. Of the seven recommendations, DOD concurred with one, partially concurred with four, and did not concur with two. GAO continues to believe that all recommendations are warranted.
Background

Federal policy for rental housing has traditionally focused on assisting low-income households through rental assistance and incentives for the development of housing with below-market rents. In fiscal year 2020, Congress appropriated about $43.9 billion for HUD's three largest federal rental assistance programs: public housing, Housing Choice Vouchers, and Project-Based Rental Assistance. These programs make rents affordable to eligible households, generally by paying the difference between the unit's rent and 30 percent of a household's adjusted income. Unlike certain other means-tested programs, federal rental assistance programs are not entitlements. The number of households that the programs can assist is limited by the amount of budget authority that HUD requests and Congress provides through the annual appropriations process. Historically, appropriations for rental assistance programs have not been sufficient to assist all households that HUD has identified as having worst case housing needs—that is, renter households that (1) have very low incomes; (2) do not receive housing assistance; and (3) use more than one-half of their income to pay for rent, live in severely inadequate conditions, or both. In 2017, HUD reported that 8.3 million households had worst case needs in 2015, an increase from 7.7 million in 2013. HUD reported that among very low-income renters in 2015, 25 percent received rental assistance, and an additional 43 percent had worst case needs.

To determine program eligibility and identify populations in need of assistance, many federal rental assistance programs have specific income eligibility requirements. HUD sets income limits that determine eligibility for its assistance programs based on median family income and market rent estimates. These income limits can vary across different types of localities.

Renting Became More Common after the 2007–2009 Financial Crisis but Varied by Demographic Group and Location

Renting Expanded after the Financial Crisis

The national rentership rate increased from 2001 through 2017 (see fig. 1). In 2004, the estimated rentership rate fell below 33 percent, the lowest in U.S. history, then climbed to 37 percent in 2013, a rate not seen since the 1960s. By 2017, almost 7 million more households rented their homes than in 2001, which brought the rentership rate to an estimated 36 percent. This increase of 7 million households reflects both overall growth in the population and the net shift from owning to renting. Many households experienced lasting financial effects of the financial crisis, such as impaired credit or loss of income, which hampered their ability to enter into or transition back into homeownership. Although the national foreclosure rate has declined significantly in recent years, past research has shown that most households struggle to return to homeownership after foreclosure. Further, median home prices have risen faster than median incomes nationally, which makes achieving homeownership more challenging. Specifically, the gap between rising home prices and wage growth has likely contributed to increases in rentership in many metro areas.

Renting Became More Prevalent among Most Age Groups, with Notable Increases among Middle-Aged Households

Nationally, the rentership rate increased from 2001 through 2017 across all age categories we analyzed, except for older households (65 years or older), as shown in figure 2.
The greatest increase was among early middle-aged households (35–49 years old), an estimated increase of nearly 8 percentage points. In addition, rentership for late middle-aged (50–64 years old) and younger (20–34 years old) households increased by 5 percentage points. Renters are, on average, older than they previously were. The late middle-aged group (50–64 years) experienced the largest estimated increase in the number of renter households—an increase of 4 million households—and accounted for more than half of the total increase in renter households from 2001 through 2017 (see fig. 3). Many of these households have not recovered from the financial crisis, and this group has lower incomes and higher rentership rates than previous generations, Harvard's Joint Center for Housing Studies has reported. We previously reported that the homeownership rate for the poorest older households was significantly lower after the financial crisis than before it.

Renting Became More Common among Black Households and Declined for Hispanic and Asian Households

Black households had higher estimated rentership rates than White, Hispanic, and Asian households, and rentership among Black households increased from 54 percent in 2001 to 58 percent in 2017 (see fig. 4). In contrast, rentership among White households was the lowest among the race/ethnicity groups and remained generally stable during our analysis period (ranging from 26 to 29 percent from 2001 through 2017). While rentership among Hispanic and Asian households increased slightly in the aftermath of the financial crisis, as of 2017, their rentership rates had returned to levels below those of 2001, although these rates were still higher than those of White households. As of 2017, high-growth and moderate-growth/high-density metro areas we analyzed tended to have more racially diverse renter populations than other areas, and renters in these metro areas were mostly from minority groups. For example, in Dallas, Texas, which is high-growth, an estimated 59 percent of renter households were minority households, and in Miami, Florida, which is moderate-growth/high-density, an estimated 75 percent of renter households were minority households.

Higher-Income Renter Households Increased Substantially after 2010

The most significant change in rentership from 2001 through 2017 by income group was for higher-income households (more than 120 percent of area median income), with the greatest change between 2010 and 2017. Nationally, higher-income households were the second smallest renter group in 2001, with an estimated 6.6 million households, or 17 percent of renter households. In 2017, higher-income households were the second largest renter group, with approximately 10.3 million households, or approximately 20 percent of renter households (see fig. 5). Consistent with national trends, in all locality types—that is, those with higher and lower population density or rates of growth—the estimated number and proportion of higher-income renter households increased from 2001 through 2017 (see fig. 6). The greatest increase occurred in high- and moderate-growth metro areas. This trend could reflect (1) a change in income, (2) relocation from moderate-growth/high-density metro areas, and (3) consolidation of households—such as having multiple roommates, extended families occupying one housing unit, or households doubling up with relatives or others to make ends meet. There were modest changes in the number and proportion of low-income households during the same period.
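The income groups used throughout this analysis are defined relative to area median income (AMI). The report states only that higher-income means more than 120 percent of AMI; the sketch below fills in the remaining cutoffs with HUD's standard income-limit categories (extremely low at 30 percent, very low at 50 percent, and low at 80 percent of AMI), so treat those thresholds as our assumption rather than the report's definition.

```python
# Sketch of income-group categorization relative to area median income (AMI).
# Only the >120 percent "higher-income" cutoff comes from the report; the
# other thresholds are HUD's standard income-limit categories (assumed here).
def income_group(household_income: float, area_median_income: float) -> str:
    """Return the income category for a household given its area's median income."""
    share_of_ami = household_income / area_median_income
    if share_of_ami <= 0.30:
        return "extremely low income"
    if share_of_ami <= 0.50:
        return "very low income"
    if share_of_ami <= 0.80:
        return "low income"
    if share_of_ami <= 1.20:
        return "moderate income"
    return "higher income"

# Example with hypothetical figures: a $35,000 household in an area with
# a $70,000 median income falls at 50 percent of AMI.
print(income_group(35_000, 70_000))  # -> "very low income"
```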
Rural areas and metro areas with shrinking populations had the highest proportion of renter households with low incomes as of 2017—for example, an estimated 63 percent of renters in negative-growth metro areas had low to extremely low incomes. Population growth and two other factors appear to have contributed to the growth in higher-income renter households. First, many homeowners who experienced foreclosure during the financial crisis became renters. Second, with rising housing costs, there has been a trend toward consolidated households. The share of households with three or more adults was higher in 2017 than in 2001. Some of these households may have chosen to combine as an alternative to eviction or homelessness, and they may have overcrowded or unstable living arrangements.

Rent as Share of Income Increased from 2001 through 2017, with Serious Consequences for the Poorest

The Percentage of Rent-Burdened Households Increased from 2001 through 2017

Most renter households paid a larger share of their income in rent in 2017 than in 2001. Federal housing policy generally considers rents at or below 30 percent of household income to be affordable, and households that pay more than 30 percent of income in rent are considered to be rent burdened. We found that by 2017, an estimated 48 percent of renter households were rent burdened, 6 percentage points higher than in 2001. Severe rent burden, where more than 50 percent of household income is paid in rent, also became more common. Of the households that were rent burdened in 2017, about half were severely rent burdened. These households represented 24 percent of all renter households—an increase of 4 percentage points from 2001 (see fig. 7).

The rising rent burden is part of a long-term trend in rental unaffordability, as supply has not kept pace with demand for rental units. With fewer affordable apartments available, rent burdens increased among lower-income households, who were forced to spend a greater proportion of income on rent. Government, academic, and industry research has identified several factors that contribute to this trend:

Local regulation and geography have long constrained where and how much rental housing can be built. Cities have adopted zoning and land use regulations that can prohibit or increase costs for new rental units. Metro areas, particularly those in coastal or mountainous regions, have limited available land for new housing.

Construction of new rental units has been limited since the 2007–2009 financial crisis, in part because developers struggled to rebuild workforce capacity after layoffs of skilled construction workers. As a result, since 2009, the construction industry has focused on building luxury apartments, which have higher profit margins, and has produced few units affordable to lower-income households. Conversion of lower-rent units to higher-rent units through renovation also reduced the number and share of rental units affordable to lower-income households.

Demographic changes, particularly the aging of the millennial and baby boomer generations, have increased demand for rental units. As previously discussed, we found that renters were, on average, older in 2017 than in 2001. In addition, Harvard's Joint Center for Housing Studies has reported that late middle-aged renters (50–64 years) have lower incomes and higher rentership rates than previous generations. Populations with higher rentership rates—including minority households—are forecasted to continue growing through 2030.
The spike in foreclosures during the financial crisis resulted in millions of households entering the rental market, increasing competition for available units. Tighter credit standards after the financial crisis have kept many of those who lost their home due to foreclosure from qualifying for a new mortgage.

In the United States, rent burden has been most common among minorities and older adults and in dense metropolitan areas (see fig. 8):

Rent burden was about 10 percentage points more common among Black and Hispanic households than White households in 2017. This disparity was due to sizable differences in median income. In 2017, estimated median income was $63,704 for White households, $49,793 for Hispanic households, and $40,232 for Black households.

Rent burden was more than 10 percentage points more common among older adult (65 and over) households than working-age (20–64) households in 2017. This disparity was also due to sizable differences in median income, as older adults were less likely to be in the workforce. In 2017, median income was $69,459 for households age 25–64 and $43,735 for households age 65 and over.

Rent burden was nearly 10 percentage points more common among renters in high-density metro areas than in nonmetro areas in 2017. According to the Urban Institute, the shortage of affordable rental housing was more acute in urban areas than rural areas in 2014.

See appendixes III and IV for more detailed information on rent burden by age, race/ethnicity, and locality type.

Lower-Income Households Commonly Experienced High Rent Burdens from 2001 through 2017

In 2017, moderate and severe rent burdens were common among low- to extremely low-income households and relatively rare among moderate- to higher-income households (see fig. 9). From 2001 through 2017, the estimated number of renters with moderate or severe rent burdens increased across all income levels, but the increase was more pronounced among lower-income groups (see fig. 10). Specifically, we found the following:

The estimated number of higher-income renters increased by more than 3.6 million households from 2001 through 2017, but relatively few of these households experienced rent burden. In contrast, the numbers of low-income, very low-income, and extremely low-income renters also increased over this period, and these groups saw significant increases in rent burden.

In more recent years, the estimated number of extremely low-income renter households with severe burden actually decreased—from 7.4 million in 2011 to 6.6 million in 2017. This decrease, however, does not necessarily indicate improved conditions for these households because it was not accompanied by a corresponding increase in either (1) the number of extremely low-income households that were less burdened or (2) the number of very low-income households (the next highest income group). An increase in either of these groups could indicate that the poorest, most burdened households experienced either an increase in income or a decrease in rent burden. However, because these other groups did not increase, it is possible that some of these extremely low-income, severely burdened households moved in with other households or experienced some form of homelessness.

Rent burdens affect households differently depending on their income. Households with lower incomes may pay the same percentage of income in rent as moderate- or higher-income households but have less income left over for other necessities.
Even relatively inexpensive units may not leave enough money for lower-income households to cover other necessities like food, clothing, transportation, or medical care. These households may also be sensitive to shocks, such as job loss and health emergencies, and may be at heightened risk of eviction and homelessness. Challenges that lower-income households face can vary across cities and regions due to differences in local market rents and incomes. For example, as figure 11 shows, in the San Francisco area in 2017 a very low-income family of four would experience a severe rent burden if it paid the fair market rent for a two-bedroom apartment ($3,018 per month). Such a family would struggle to pay the rent and afford other necessities even with two or three full-time minimum wage jobs. In contrast, a very low-income family of four in the St. Louis area in 2017 would experience a moderate or no rent burden if it paid the fair market rent for a two-bedroom apartment ($896). Such a family with at least two full-time minimum wage jobs would have relatively more money left over for other necessities. See appendixes III and IV for more detailed information on rent burden by household income.

For moderate-income households, the consequences of rent burden are less dire than for lower-income households, but they are still significant. For example, a family of four earning the median income in San Francisco that paid fair market rent for a two-bedroom apartment would be rent burdened. A housing unit that would be considered affordable to them would cost at least $135 per month below fair market rent (or approximately $2,882 or less). Money that a family could save on a unit below fair market rent could help reduce household debt, add to retirement savings, or pay for necessities like child care. Rent burden among moderate-income households tends to be more common in large cities with strong economies and significant geographic and regulatory constraints on new housing, such as San Francisco and New York.

The lowest-income households face challenges securing affordable rental units. There are not enough rental units that are affordable to the lowest-income households without rental assistance. Specifically, according to HUD, lower-income households face competition from moderate- or high-income households to rent affordable units. HUD's analysis showed that although there were enough affordable units nationwide to house 66 percent of extremely low-income renters in 2015, 43 percent of those units were occupied by renters with higher incomes.

We also found that for all income groups, rents rose faster than incomes and therefore became less affordable to varying degrees. Specifically, estimated median rent-to-income ratios, which indicate the median proportion of income devoted to rent, generally increased from 2001 through 2017, according to our analysis (see fig. 12). For the lowest-income households, even small declines in affordability have a big impact because these households face the highest rent burdens and have the fewest options in the housing market. See appendix III for more detailed information on rent-to-income ratios.

About 15 Percent of Rental Units Had Serious Deficiencies in 2017

Based on two indexes we created to analyze rental housing conditions using American Housing Survey data, we found that an estimated 15 percent of renter households—more than 5 million—lived in units with serious deficiencies in 2017.
Specifically, an estimated 12 percent of renter households (more than 4 million households) lived in units with substantial quality issues. These units typically had a combination of issues, such as cracked walls and the presence of rodents, or multiple heating problems and the presence of rodents. An additional 3 percent of renter households (more than 1 million households) lived in incomplete units—that is, units lacking essential components of a dwelling (such as heating equipment or hot and cold running water). Further, an estimated 28 percent of households—nearly 10 million—rented units with less substantial quality issues. Table 2 presents these findings and shows how our indexes described different types of rental housing conditions.

The proportion of rental units with the three types of deficiencies—substantial quality issues, less substantial quality issues, and absence of essential components of a dwelling—generally remained stable from 2001 through 2017 (see fig. 13). The proportion of rental units that had at least one of these deficiencies ranged from an estimated 39 to 47 percent from 2001 through 2017.

Serious Deficiencies More Often Affected Lower-Income and Rent-Burdened Households

We analyzed rental housing conditions by renter household and rental unit characteristics. Households with low incomes (those with low, very low, or extremely low incomes) or with rent burdens comprised half or more of renters living in units with substantial quality issues and in incomplete housing units (those lacking essential components of a dwelling). Although incomplete housing units represented a small percentage of rental units overall (about 3 percent), there were more than an estimated 1 million such units in 2017. Low-income renters have fewer affordable options and, as a result, may end up in units with deficiencies out of necessity. Households with low, very low, or extremely low incomes represented an estimated 62 percent of renters overall in 2017. These households occupied an estimated 67 percent of units that had substantial quality issues and nearly 80 percent of incomplete units. Similarly, rent-burdened households represented an estimated 50 percent of renters overall in 2017 and occupied an estimated 53 percent of units with substantial quality issues and 60 percent of incomplete units.

There were some notable differences in housing conditions by age and race/ethnicity. Older households (65 and older) were the most likely age group to live in rental units with no deficiencies in 2017. About half of renting households were White in 2017, and White households comprised the largest share of renters in each quality or completeness category we analyzed. The proportions of Hispanic and Asian households that rented incomplete units (estimated at 31 percent and 11 percent, respectively) were higher than the overall proportions of Hispanic and Asian renter households (estimated at 20 percent and 6 percent, respectively). In addition, the proportion of Black households that rented units with substantial quality issues (estimated at 24 percent) was slightly higher than the overall proportion of Black renter households (estimated at 21 percent).

Older and Single-Family Rental Units Were More Likely to Have Deficiencies

Rental housing conditions by unit age or type were generally consistent from 2001 through 2017. As expected, units built after 2000 had fewer deficiencies than those built before.
Older rental housing units—those built prior to 1980—were more likely to have substantial quality issues than those built after. An estimated 63 percent of units in high-growth metro areas had no quality issues as of 2017, compared to an estimated 55 to 57 percent of units in other types of localities. There was little other variation in housing conditions by locality type. We also found that detached single-family homes and mobile homes were somewhat more likely to have serious deficiencies than multifamily units. The proportion of units with these deficiencies remained relatively steady from 2001 through 2017. One reason for this is that single-family units lack on-site building managers and other benefits of shared maintenance that multifamily units may provide. Some researchers and industry participants have noted possible maintenance challenges for a growing number of investor-owners of single-family rentals that manage thousands of properties of varying size, age, and condition. From 2001 through 2017, the proportion of single-family units with serious deficiencies (rental units lacking essential components of a dwelling or units with substantial quality issues) ranged from around 13 to 20 percent (see fig. 14). During the same period, the proportion of single-family units with less substantial quality issues ranged from an estimated 28 to 34 percent.

We also analyzed household crowding trends based on American Community Survey data. We defined crowded households as those having more than two people per bedroom. From 2001 through 2017, the incidence of renter household crowding decreased, with the greatest percentage point declines for Hispanic households prior to the housing crisis. Generally, households that were younger, Hispanic or Asian, or had lower incomes were more likely to experience crowding. In addition, crowded households were more common in high-density and high-growth metro areas. Appendix VI includes information on household crowding by race/ethnicity, age, household income, and locality type.

Agency Comments

We provided a draft of this report to HUD for review and comment. HUD officials told us that they had no comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Housing and Urban Development, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4529 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to analyze trends in (1) the share of households that rent and their characteristics, (2) the affordability of rental housing, and (3) rental housing conditions.

Data Used in Our Analysis

We analyzed 2001–2017 data from the American Community Survey and American Housing Survey to describe renter household characteristics, rent affordability, rental housing conditions, and trends at the national level and across different types of localities.
The American Community Survey is an ongoing survey, administered by the Census Bureau, of around 3.5 million households across the United States; the data we used in our analysis were current as of 2017, the most recently available data at the time of our review. The survey collects data on the economic, social, housing, and demographic characteristics of communities at various geographic levels, including metropolitan areas, states, and counties. The American Housing Survey is a biennial survey sponsored by the Department of Housing and Urban Development (HUD) and administered by the Census Bureau that collects a range of housing information, including the size and composition of the U.S. housing inventory, physical condition of housing units, characteristics of occupants, and other information. Findings from each survey are subject to sampling errors. To assess the reliability of the data, we reviewed technical information for each survey. In addition, we interviewed HUD and Census Bureau officials to identify differences across survey years and understand geographic limitations of publicly available data. We determined that the surveys were sufficiently reliable for purposes of reporting at the national level on renter household characteristics. However, we determined that additional, nonpublic data were needed from each survey to analyze renter household characteristics, rent burden, and rental housing conditions for smaller geographic units. To address this limitation, staff from HUD's Office of Policy Development and Research provided us with aggregated Census Bureau data. To assess the reliability of these data, we analyzed the underlying programming code and related documentation from agency officials and reviewed the data for missing values, outliers, and errors. We determined that the data were sufficiently reliable for purposes of analyzing renter household characteristics, rent burden, and rental housing conditions from 2001 through 2017 at the national level and for different types of localities.
Locality Types
For all objectives, to describe common trends and differences across localities—that is, localities with different population growth rates and densities—we developed metro area groupings. The groupings provide a general framework for describing metro areas that experienced varying degrees of population growth from 2000 through 2017 and for comparing trends in renter household characteristics, rent affordability, and rental housing conditions across different types of areas. To identify the locality types, we analyzed core-based statistical areas by population growth from 2000 through 2017 and population density as of 2017. We identified three growth categories (high, moderate, and negative) and further categorized the moderate growth group by density (high and moderate). We also identified a group of nonmetro areas consisting of all counties in each state that are outside the boundaries of any metro area. These areas included micropolitan areas, small towns, and low-density rural areas. The five locality types were high-growth metro areas, moderate-growth/high-density metro areas, moderate-growth/moderate-density metro areas, negative-growth metro areas, and nonmetro areas.
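As a minimal illustration of this grouping logic, consider the following Python sketch. The numeric growth and density cutoffs shown are hypothetical placeholders rather than the values underlying our categories, and the function and parameter names are illustrative.

def locality_type(pop_growth_2000_2017, density_2017, is_metro,
                  high_growth_cutoff=0.25, high_density_cutoff=1000.0):
    # Classify an area into one of the five locality types described
    # above. Growth is a proportion (0.25 = 25 percent growth); density
    # is persons per square mile. Both cutoffs are hypothetical.
    if not is_metro:
        return "nonmetro area"  # micropolitan areas, small towns, rural areas
    if pop_growth_2000_2017 >= high_growth_cutoff:
        return "high-growth metro area"
    if pop_growth_2000_2017 < 0:
        return "negative-growth metro area"
    # Moderate-growth metro areas are split further by 2017 density.
    if density_2017 >= high_density_cutoff:
        return "moderate-growth/high-density metro area"
    return "moderate-growth/moderate-density metro area"

print(locality_type(0.30, 500.0, True))   # high-growth metro area
print(locality_type(0.10, 1500.0, True))  # moderate-growth/high-density metro area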
Renter Household Characteristics
To describe trends in the share of households that rent and their characteristics, we analyzed American Community Survey data from 2001 through 2017 at the national level and across different types of localities, with a focus on renter household age, race/ethnicity, and income. We defined four head-of-household age categories: younger (20–34 years old), early middle age (35–49 years old), late middle age (50–64 years old), and older (65 years and older). We reported on five race/ethnicity categories, combining some Census categories for our analysis: White, Black, Hispanic (an ethnicity that applies to individuals of any racial background), Asian (includes Asian, Native Hawaiian, and Other Pacific Islander), and Other (includes American Indian, Alaska Native, two or more races, and some other race). We defined five income categories based on income ranges that HUD uses for determining rental assistance eligibility or reporting to Congress on worst case needs: extremely low income (up to 30 percent of HUD area median family income (HAMFI)); very low income (more than 30, up to 50 percent of HAMFI); low income (more than 50, up to 80 percent of HAMFI); moderate income (more than 80, up to 120 percent of HAMFI); and higher income (greater than 120 percent of HAMFI).
Rent Affordability
To describe trends in the affordability of rental housing, we analyzed American Community Survey data on gross rent as a percentage of household income from 2001 through 2017 at the national level and across different types of localities. Consistent with other housing research and HUD policies, we defined rent burden as spending more than 30 percent of household income on rent, moderate rent burden as spending more than 30 and up to 50 percent of household income on rent, and severe rent burden as spending more than 50 percent of household income on rent. Further, as described in appendix IV, we developed a supplementary analysis of rental housing affordability for rural areas by state.
Rental Housing Conditions
To describe trends in rental housing conditions, we analyzed data from the American Community Survey and American Housing Survey from 2001 through 2017 at the national level and across different types of localities. HUD designed the American Housing Survey to include indicators of housing quality. HUD analyzes and reports periodically on a housing adequacy measure as part of its worst case housing needs assessments for Congress. HUD's adequacy measure and related research informed our methodology for developing two indexes to analyze rental housing conditions. We developed the indexes to more specifically define the range of housing conditions. The two indexes include 13 quality-related variables and nine variables we identified as essential components of a dwelling from the American Housing Survey, described in table 3. Appendix II includes more detailed information about our methodology. Appendix V includes information on the similarities and differences between HUD's adequacy index and the indexes we developed for this report. With our indexes, we analyzed trends in rental housing conditions by renter household characteristics and rental unit characteristics. The renter household characteristics we analyzed included household income and affordability, race/ethnicity, and age. The rental unit characteristics we analyzed included location, age, and structure type. In addition, from American Community Survey data, we analyzed household crowding as another aspect of housing conditions. Further, we reviewed reports and studies on housing conditions and interviewed stakeholders, including federal agency officials, academic experts, and research organizations.
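The rent-burden and income-category definitions above translate directly into classification rules. The following is a minimal Python sketch of those rules; the function and variable names are illustrative, and inputs are annual dollar amounts.

def rent_burden_category(annual_gross_rent, annual_income):
    # More than 30 percent of income on rent = rent burdened;
    # more than 50 percent = severely rent burdened.
    share = annual_gross_rent / annual_income
    if share > 0.50:
        return "severe rent burden"
    if share > 0.30:
        return "moderate rent burden"
    return "not rent burdened"

def income_category(annual_income, hamfi):
    # Categories based on HUD area median family income (HAMFI).
    ratio = annual_income / hamfi
    if ratio <= 0.30:
        return "extremely low income"
    if ratio <= 0.50:
        return "very low income"
    if ratio <= 0.80:
        return "low income"
    if ratio <= 1.20:
        return "moderate income"
    return "higher income"

print(rent_burden_category(12_000, 30_000))  # 40 percent of income: moderate rent burden
print(income_category(25_000, 70_000))       # about 36 percent of HAMFI: very low income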
To further describe trends in renter household characteristics, rent affordability, and rental housing conditions during our review period, we reviewed reports and studies by federal agencies and research organizations and interviewed a variety of stakeholders selected for their knowledge of these issues, including federal agency officials from HUD, the Census Bureau, Congressional Research Service, the Department of Agriculture, the Federal Housing Finance Agency, and the Department of the Treasury; academic experts, including researchers from Harvard's Joint Center for Housing Studies and others; research organizations that included the Bipartisan Policy Center, various researchers associated with the Board of Governors of the Federal Reserve System, Brookings Institution, Center on Budget and Policy Priorities, Housing Assistance Council, National League of Cities, National Rural Housing Coalition, Urban Land Institute, and Urban Institute; and industry groups that included the National Association of Home Builders, National Association of Realtors, and the National Housing Conference. We conducted this performance audit from February 2018 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Statistical Analysis of Rental Housing Conditions
This appendix provides additional details on our analysis of the conditions of the national rental housing stock between 2001 and 2017.
Data
To assess rental housing conditions, we used data from the national American Housing Survey (AHS), which is administered by the Census Bureau and conducted in odd-numbered years. Specifically, we considered two concepts, unit completeness and unit quality, and relied on questions that were consistently asked over the 2001 to 2017 period to define nine completeness and 13 quality variables. These are described in table 4. The survey questions underlying uncomfortably cold periods and heating equipment breakdowns were only asked of respondents who occupied their unit in the winter prior to the survey year, so our main analysis of rental unit quality only considered cash-rent housing units occupied by households since the prior winter, while the analysis of unit completeness considered all cash-rent units. See table 5 for the distributions of the completeness and quality variables in 2017.
Methodology and Results
We used each set of variables to construct two indexes, one of unit completeness (an indicator) and one of unit quality (a continuous measure). We collapsed the continuous quality index into three categories (no quality issues, less substantial quality issues, and substantial quality issues) to facilitate a summary of rental housing quality trends.
Completeness
To determine rental unit completeness, we first summed the number of missing components among the nine completeness components for each rental unit in the surveys. We obtained an estimated Cronbach's alpha of 0.40 for this sum, which was low enough to suggest that a simple indicator would be a more appropriate summary measure.
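For reference, the following is a minimal Python sketch of the Cronbach's alpha computation applied to a matrix of 0/1 missing-component indicators; the sample data are randomly generated and purely illustrative.

import numpy as np

def cronbachs_alpha(items):
    # items: n_units x k matrix of item scores (here, 0/1 indicators of
    # whether each of the k = 9 essential components is missing).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_of_item_variances = items.var(axis=0, ddof=1).sum()
    variance_of_total = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_of_item_variances / variance_of_total)

rng = np.random.default_rng(0)
sample = rng.integers(0, 2, size=(1000, 9))  # 1,000 units, 9 components
print(round(cronbachs_alpha(sample), 2))     # near 0 for independent items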
We therefore determined that each of the nine completeness components was essential for us to consider a unit livable, and assigned a completeness score to each rental unit based on the absence of any of them. The resulting index therefore measured incompleteness, where a score of 1 indicated that a unit was missing one or more of the essential components. See table 6 for the distribution of the completeness index in the survey years between 2001 and 2017.
Quality
To measure unit quality, we estimated a single-factor model of the 13 quality variables, treating each observed variable as a discrete measure of an underlying latent continuous variable and using an approach that allows for inference that is robust to non-normal distributions of the latent continuous variables. Finally, we obtained quality score estimates by empirical Bayes, which selects the mode of the posterior distribution p(η̂ | yᵢ) evaluated at the estimates of the model's parameters. Note that because all quality variables increased in the presence or number of issues, the quality index correspondingly increased as quality worsened. We first estimated two variants of the factor model, one accounting for the sample weights assigned to units in each survey (our preferred specification), and one that did not account for these weights. The robust root mean square error of approximation of 0.020 from the latter suggested that our single-factor model provided an appropriate representation of the AHS data. Estimates of the polychoric correlation matrix and of the factor loadings are reported in tables 7 and 8, respectively. We assessed stability by estimating the model on each AHS year separately and broadly found that factor loading estimates varied little over time. Given the estimates from our preferred specification, which accounts for the survey design, we then relied on empirical Bayes estimation to assign a quality score to each unit for which responses to all 13 quality variables were observed. The full distributions of the resulting quality index in the survey years between 2001 and 2017 are reported in figure 15. We then selected thresholds in the distribution of the continuous quality index to distinguish between units without any quality issues, units with a quality score indicating the presence of less substantial issues, and units with a score denoting more substantial issues (those with either a combination of some of the most severe issues as determined by the model, or a large number of issues of varying severities). The first threshold, between units with no quality issues and units with at least one issue, occurred at a score of -0.2280. Units with no issues represented between 54 and 62 percent of the rental units to which we were able to assign quality scores. To further separate units experiencing any issues into two groups, we inspected the quality score distribution for local minima in its density to find a score around which small perturbations in threshold choice would have little effect on the share of units falling into each of the two groups. We examined all quality issue profiles experienced in units with scores in the region around two candidates where the density nearly reached 0, and selected a score of 0.5240 as the second threshold, immediately above which were units with one or more holes in the floor large enough to catch someone's foot. All units with a quality score of 0.5240 or higher were therefore considered to have substantial issues. Table 9 reports the share of cash-rent, previous-winter-occupied units for each quality level in the survey years between 2001 and 2017, and table 10 reports the most common quality issue profiles in 2017.
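The categorization rule implied by these thresholds can be sketched as follows. This is a minimal Python illustration; how scores exactly equal to the first threshold are handled is our assumption, since only the cutoff values are reported above.

NO_ISSUES_CUTOFF = -0.2280      # below this score: no quality issues
SUBSTANTIAL_CUTOFF = 0.5240     # at or above this score: substantial issues

def quality_category(score):
    # Collapse the continuous quality index (higher = poorer quality)
    # into the three categories used in this appendix.
    if score < NO_ISSUES_CUTOFF:
        return "no quality issues"
    if score < SUBSTANTIAL_CUTOFF:
        return "less substantial quality issues"
    return "substantial quality issues"

print(quality_category(-0.5))   # no quality issues
print(quality_category(0.1))    # less substantial quality issues
print(quality_category(0.6))    # substantial quality issues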
Limitations
Our analysis is subject to several limitations. In determining both unit completeness and quality, we were limited to the variables consistently available across all survey years. Therefore, we could not include features not observed in the AHS which could be deemed to be important components of either unit completeness (such as a unit's access to an internet service provider) or quality (such as the presence of major defects in the structure of the unit's building). In the quality factor model, we assumed that quality was uncorrelated with the error term from each measurement equation and that the error terms were uncorrelated with each other to obtain estimates of the model's parameters, and ultimately the quality scores. A violation of these assumptions would bias the estimates. For example, if rental units located in regions with harsh weather were of systematically worse quality than units in fairer weather regions, the estimated effect of poor quality on a variable like the number of outdoor leaks could be overstated, which would in turn overweight the importance of outdoor leaks in the estimation of the quality scores, resulting in overly poor quality score estimates for units experiencing outdoor leaks. Conversely, if units in harsh weather regions were of systematically better quality than those in fairer weather regions (e.g., as a measure of resilience), the estimated effect of poor quality on outdoor leaks would be understated, biasing down the importance of outdoor leaks in the estimation of quality scores. In general, any systematic linear relationship between latent quality (η) and the unobserved factors (εᵢ) affecting one of the 13 unobserved latent continuous variables, or between the unobserved factors themselves, would be a violation of the model's assumptions. Since the two quality variables recording uncomfortably cold periods and heating equipment breakdowns were only asked of respondents who occupied their unit in the winter prior to the survey year, we could not assign quality scores to the 10 to 25 percent of rental units across years that were occupied by recent movers. To assess potential biases on the quality distribution of the full cash-rent-occupied rental housing stock introduced by excluding this group, we therefore compared both groups along the remaining 11 dimensions. Of the 11 observable quality variables, three exhibited an incidence of issues that differed meaningfully across the two groups. These differences were persistent throughout survey years and consistent in their direction: units whose respondents moved in later than the winter prior to the interview were 5 to 10 percentage points less likely to experience outside leaks or inside leaks and to report evidence of rodents. The differences were meaningful in that they corresponded to more than a halving of the incidence of evidence of rodents, and up to a halving of the incidence of both types of leaks, in the recent-mover units relative to the units for which all quality variables were available. To evaluate the effect of these differences on the quality distribution of the full universe of cash-rent units, we estimated a modified quality factor model in which we dropped the uncomfortably cold periods and heating equipment breakdowns variables. This allowed us to obtain quality scores for both the units with the original scores and the recent-mover units.
The distributions of the modified quality indexes in the two groups differed most in the share of units without any of the 11 quality issues, and we estimated that across all survey years, 1.3 to 2.4 percentage points more units would likely have no measured quality issues in the full cash-rent universe than we found in the universe that excludes the recent movers. Furthermore, once the respective units without any of the 11 quality issues were excluded, the truncated distributions of the modified indexes were largely comparable. In the full universe of cash-rent units, we would therefore expect the shares of units with less substantial issues and with substantial issues to decrease in proportion to their respective shares in the partial universe, with the combined decrease corresponding to the magnitude of the increase in units with no issues each year. The alternative of including recent movers in our main model at the expense of the uncomfortably cold periods and heating equipment breakdowns variables would have yielded a share of units without any other quality issues that we estimated to be 3 to 4 percentage points higher than the share calculated using the original index in the partial universe. Because we believed that these variables should ultimately be included in the quality index, and because we considered the biases we estimated to be relatively small, we retained the original index.
Appendix III: Additional Information on Rentership and Affordability Trends
In this appendix we present our analysis of rentership and housing affordability by age, race/ethnicity, locality type, and income from 2001 through 2017. The data on renter households are from the American Community Survey's 1-year estimates.
Appendix IV: Estimated Rent Burden in Statewide Rural Areas
In this appendix we present state-level analysis of housing affordability for rural renter households. While rental affordability is a challenge in both rural and urban areas, differences in demographics, economies, housing stock, and federal rental assistance programs make rural rental affordability issues unique. We defined rural areas using the U.S. Department of Agriculture's 2010 rural-urban commuting area (RUCA) codes. The data on renter households living in these areas are from the American Community Survey's 5-year estimates for 2013 through 2017. While renter households lived in rural areas of all 50 states, generally the most populous states had the largest populations of renter households in rural areas (fig. 16). From 2013 through 2017, more than an estimated 2.2 million renter households lived in rural areas. The states with the largest estimated populations of renter households in rural areas were Texas (119,000), Missouri (96,000), Wisconsin (96,000), and Kentucky (93,000). The prevalence of rural renter households varied significantly by state. While only about 5 percent of renter households lived in rural areas from 2013 through 2017, some states had significantly larger proportions of renters in rural areas. States with higher estimated proportions of rural renter households generally had small populations and were in northern New England or along the Missouri, Mississippi, or Ohio Rivers (fig. 17). The states with the largest estimated proportions of renter households in rural areas were Vermont (39 percent) and Montana (32 percent).
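A minimal Python sketch of the rural flagging described above follows. The set of 2010 RUCA codes treated as rural here is a placeholder assumption, not the mapping used in our analysis, and the sample records are illustrative.

RURAL_RUCA_CODES = {7, 8, 9, 10}  # hypothetical choice of rural codes

def is_rural(ruca_code):
    # Flag an area as rural based on its 2010 RUCA code.
    return ruca_code in RURAL_RUCA_CODES

households = [
    {"state": "VT", "ruca": 10, "renter": True},
    {"state": "TX", "ruca": 1, "renter": True},
]
rural_renters = [h for h in households if h["renter"] and is_rural(h["ruca"])]
print(len(rural_renters))  # 1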
Renter households in rural areas generally had lower incomes than other renter households. From 2013 through 2017, while the median income for renter households overall was an estimated $36,653, nearly three in five rural renter households had incomes lower than $35,000. For context, a household with two full-time jobs earning the federal minimum wage in 2017 ($7.25 per hour) would earn approximately $30,160. In general, Southern states had the highest estimated proportion of rural renter households with incomes less than $35,000 (fig. 18). The states with the smallest proportion of rural renter households with incomes lower than $35,000 were New Jersey (25 percent), Rhode Island (32 percent), Alaska (35 percent), Hawaii (39 percent), and Connecticut (39 percent). Rent burden was common among renter households in rural areas, but prevalence varied by state. Rent burden was slightly less common among rural renter households from 2013 through 2017 (45 percent) than among renter households in general in 2017 (48 percent). In eight of 48 states, at least 50 percent of rural renter households were rent burdened (fig. 19). In general, rural rent burden was most common in the Northeast, South, and West Coast, and least common in the U.S. interior. Louisiana had the highest estimated rate of rent burden among rural renter households (55 percent) and Wyoming had the lowest (33 percent). Rent burdens were more common among rural households with incomes below $35,000. From 2013 through 2017, an estimated 70 percent of these households were rent burdened, and in no individual state were less than 50 percent of these households rent burdened (fig. 20). The five states with the highest proportions of lower-income rural renter households that were rent burdened were Alaska (81 percent), Massachusetts (83 percent), Hawaii (83 percent), California (83 percent), and Delaware (85 percent). As discussed previously in this report, lower-income households with rent burdens may struggle to pay for essential needs like food, transportation, health care, and clothing. Rent burdens were uncommon among rural households with incomes of $35,000 or greater. From 2013 through 2017, only an estimated 9 percent of these households were rent burdened, and in no state were more than 30 percent of these households rent burdened (fig. 21). In 40 of 48 states, less than an estimated 15 percent of rural renter households with incomes of $35,000 or greater were rent burdened. The four states with the highest proportions of rural renter households with incomes of $35,000 or greater that were rent burdened were Connecticut (28 percent), Hawaii (26 percent), California (24 percent), and Massachusetts (22 percent).
Appendix V: Comparison of GAO Housing Conditions Indexes and HUD Adequacy Index
This appendix describes how the indexes we developed to analyze rental housing conditions compare to an index the Department of Housing and Urban Development (HUD) uses to measure housing adequacy. Although our indexes use many of the same American Housing Survey variables as HUD's adequacy index, differences in our analytic methods allowed us to produce more detailed results on housing conditions. HUD measures housing adequacy as part of its ongoing efforts to analyze and report on worst case housing needs. The adequacy index is a measure based on 19 variables in the American Housing Survey. It categorizes housing units as severely inadequate, moderately inadequate, or adequate based on whether a surveyed housing unit meets certain conditions or criteria.
Severely inadequate housing units represented 2 to 3 percent of all rental units from 2001 through 2017. We developed two indexes: one based on a factor analysis of 13 quality-related variables, and one based on nine variables we identified as essential components of a dwelling. We determined that two indexes were needed to describe rental housing unit conditions based on American Housing Survey data, as relevant variables fell into two categories that required different statistical treatment and interpretation. Figure 22 provides a detailed comparison between the variables and scoring techniques of our indexes and HUD's adequacy index. We compared HUD's 2017 housing adequacy findings to the results of our indexes and identified some notable differences. Among rental units that HUD considered adequate in 2017, an estimated 8 percent had substantial quality issues as measured by our quality index—affecting 2.7 million households. In addition, another estimated 9.7 million units had less substantial quality issues. These units did not satisfy HUD's scoring criteria for inadequate or moderately inadequate units, but they had a combination of issues that exceeded our statistical thresholds for substantial and less substantial quality issues. Figure 23 provides a detailed comparison of how our results compare to HUD's.
Appendix VI: Additional Information on Rental Housing Conditions
This appendix provides additional information on rental housing conditions by household income, affordability (rent burden), race/ethnicity, age, rental unit age, and structure type, based on two indexes we developed to analyze American Housing Survey data. The appendix also includes information on household crowding, based on our analysis of American Community Survey data, by household income, rent burden, race/ethnicity, and age.
Appendix VII: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Heather Chartier (Analyst in Charge), Jeremy Anthony, Daniel Benson, Abigail Brown, Stephen Brown, Nina Daoud, Davis Judson, John McGrail, Yann Panassie, Dae Park, Lena Richenberg, Paul Schmidt, Jennifer Schwartz, Jena Sinkfield, Farrah Stone, and Jeff Tessin made key contributions to this report.
Why GAO Did This Study
Since the 2007–2009 financial crisis, growth in the share of renter households has reversed a decades-long trend toward homeownership. This change has underscored concerns about the availability, affordability, and condition of rental housing, especially for low-income households. The federal government subsidizes rents for around 4.4 million households per year, but more households qualify for assistance than receive it. GAO was asked to provide a comprehensive assessment of the housing market. This report examines trends in the housing market prior to the COVID-19 pandemic and does not account for the profound impact it will likely have on renter households. This report, one of several GAO plans to issue, focuses on rental housing from 2001 through 2017 and analyzes (1) the share of households that rent, (2) the affordability of rental housing, and (3) rental housing conditions. GAO analyzed American Community Survey and American Housing Survey data from 2001 through 2017 (the most recent data available at the time of this review) at the national level and for different types of localities. GAO also reviewed recent reports by the Department of Housing and Urban Development (HUD), research organizations, and academic researchers on rental housing and obtained views from a variety of stakeholders selected for their knowledge of these issues, including federal agency officials, academic experts, research organizations, and industry groups.
What GAO Found
In 2017, almost 7 million more households rented their homes than in 2001, which brought the share of households that rent from an estimated 34 percent to 36 percent. Renting became more common after the 2007–2009 financial crisis as foreclosures and changes in household characteristics reduced the proportion of homeowners. Renting was more prevalent across most age and race/ethnicity groups in 2017 than in 2001, with notable increases among higher-income households. Rental affordability declined from 2001 to 2017. In 2017, 48 percent of renter households were rent burdened—that is, they paid over 30 percent of income for rent—which is 6 percentage points higher than in 2001. Rent burden was most common and most severe among lower-income households (incomes at or below 80 percent of area median income), with almost three-quarters of extremely low-income households (incomes at or below 30 percent of area median income) paying over half of their income in rent (see figure). Affordability declined because of a range of factors, including more households competing for rental units and the supply of low-cost rental units not keeping up with demand. Note: Estimates in this figure have a margin of error of ±2 percentage points or fewer, at the 95 percent confidence level. An estimated 15 percent of rental units in 2017—more than 5 million—had substantial quality issues (such as cracked walls and the presence of rodents) or lacked essential components of a dwelling (such as heating equipment or hot and cold running water), according to GAO's analysis of American Housing Survey data. The share of units with deficiencies was relatively stable from 2001 to 2017. Serious deficiencies more often affected households with extremely low incomes or rent burdens. In addition, lower-income households rented approximately two-thirds of the units with substantial quality issues and nearly 80 percent of units lacking essential components.
Background
Overview of the Goldwater-Nichols Department of Defense Reorganization Act of 1986 and Relevant PME Statutes
The Goldwater-Nichols Department of Defense Reorganization Act of 1986, in part, was intended to improve joint officer management policies, otherwise enhance the effectiveness of military operations, and improve DOD's management and administration. With the Goldwater-Nichols Act, Congress also intended, consistent with the congressional declaration of policy in section 2 of the National Security Act of 1947 and among other things, to reorganize DOD and strengthen civilian authority in DOD. The Goldwater-Nichols Act, as amended, also: established various joint officer management policies, including requiring JPME for certain joint assignments and promotion categories; required officers to successfully complete an appropriate program at a JPME school, among other things, to be designated as joint qualified—a prerequisite for promotion to brigadier general or rear admiral lower half rank except under certain circumstances; and required the Secretary of Defense, with the advice and assistance of the Chairman of the Joint Chiefs of Staff, to periodically review and revise the curriculum of JPME schools to enhance the education and training of officers in joint matters. In addition, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 required the Secretary of Defense to implement a comprehensive framework for officer JPME.
Overview of the Intermediate- and Senior-level Officer PME Continuum, Programs, and Locations
The PME continuum consists of five military educational levels that correspond to the five phases of a military officer's career: (1) precommissioning, (2) primary, (3) intermediate, (4) senior, and (5) general/flag officer. As figure 1 indicates, intermediate- and senior-level PME and JPME programs—the focus of our review—are designed for officers at pay grades O-4 through O-6. As identified in figure 2 below, the military services' intermediate- and senior-level PME programs tailor curricula according to their respective services' needs. For example, the Army, Navy, and Marine Corps PME programs focus on land, maritime, and maneuver warfare, respectively. Further, the Chairman's instruction concerning officer PME and JPME (hereinafter referred to as the Officer Professional Military Education Policy, or "OPMEP") requires that JPME be integrated across a diverse array of academic topics, including history and political science, and, where appropriate, be offered in conjunction with PME. Collectively, PME and JPME prepare officers, throughout their careers, to increase their knowledge and develop the necessary skills to operate in joint environments, such as a combatant command. PME and JPME also are offered through distance learning and satellite education programs for non-resident students.
Office of the Secretary of Defense, Chairman of the Joint Chiefs of Staff, and Military Service PME and JPME Oversight Responsibilities
The OSD, Chairman, and military services are responsible for overseeing the services' PME and JPME programs. OSD: Within OSD, the Secretary of Defense has delegated responsibility for, among other things, military readiness, total force management, and military and civilian personnel training to the Under Secretary of Defense for Personnel and Readiness.
Under DOD Directive 5124.02, the Under Secretary is responsible for, among other things, developing education policies, plans, and programs for the education of all DOD personnel, including PME and JPME programs. Within OUSD(P&R), the Deputy Assistant Secretary of Defense for Force Education and Training (DASD(FE&T)) was established in 2015. The DASD(FE&T) is responsible for developing policies, plans, programs, budgets, and other activities necessary to develop, guide, measure, implement, assess, and oversee all aspects of education and training for military personnel following basic officer and enlisted training, which includes PME and JPME programs. The USD(Comptroller) is the principal staff assistant and advisor to the Secretary of Defense on budgetary and financial matters. The USD(Comptroller) focuses on budgetary formulation and execution, financial management and oversight, and accounting policy, among other things. The USD(Comptroller), among other things, directs the formulation and presentation of DOD budgets and establishes and supervises the execution of uniform DOD policies, principles, and procedures, including terminologies and classifications, as necessary for certain budgetary and financial matters. Chairman: With the advice and assistance of the Chairman, the Secretary of Defense periodically reviews and revises the JPME curriculum to enhance the education and training of officers in joint matters. The OPMEP outlines the Chairman's roles and responsibilities as they relate to PME and JPME. According to the OPMEP, the Chairman formulates policies for coordinating military education and advises and assists the Secretary of Defense through the designation and certification/accreditation of JPME. The Chairman accredits military service programs through periodic Process for the Accreditation of Joint Education (PAJE) reviews. Further, the Joint Staff Directorate for Joint Force Development is responsible for, among other things, reviewing the Chairman's PME policies, overseeing the Military Education Coordination Council, and coordinating PAJE reviews. Military services: The military services provide PME to develop officers with expertise and knowledge appropriate to their grade, branch, and occupational specialty. Each military service is responsible for funding, developing curriculum for, and administering its respective PME programs. In addition, for programs accredited to award JPME, each military service is responsible for meeting the Chairman's PAJE accreditation requirements and providing qualified military students and faculty to the other military services' PME programs in accordance with the OPMEP. Membership on PAJE teams, which accredit military services' PME programs, is tailored to provide the appropriate balance of expertise in JPME learning areas, objectives, criteria, and standards.
The Military Services' PME Programs Are Accredited, but Not All Programs Met the JPME Seminar Student Mix Accreditation Requirement
The Military Services' Intermediate- and Senior-level PME Programs Are Accredited to Award Master's Degrees
All of the military services' intermediate- and senior-level PME programs are accredited to award master's degrees. Each program undergoes a Department of Education-governed civilian accreditation process generally every 10 years, depending on the accreditor and the program. Civilian accreditation for the military services' PME programs occurs at the institution level and includes multiple programs.
For example, the civilian accreditation of Marine Corps University includes the Marine Corps' intermediate- and senior-level PME programs, as well as other programs such as its School for Advanced Warfighting. According to PME program and civilian accreditation officials, the civilian accreditation process starts with the institution conducting a detailed self-evaluation of its performance, and preparing and providing a self-evaluation report to the accreditation officials. This is followed by a site visit by the accreditation officials and a report describing the institution's compliance with applicable academic quality standards. The accreditation process concludes with the accreditor's decision on the institution's accreditation status. Table 1 shows when each of the military services' intermediate- and senior-level PME programs was last accredited (at the institutional level) for civilian accreditation. Accreditation bodies assess academic quality by applying and enforcing standards in the following areas required, generally, by the Department of Education: (1) success with respect to student achievement; (2) curricula; (3) faculty; (4) facilities, equipment, and supplies; (5) fiscal and administrative capacity; (6) student support services; (7) recruiting and admissions practices; (8) measures of program length and objectives of the degrees or credentials offered; (9) record of student complaints received by, or available to, the accreditation body; and (10) record of compliance with certain federal student loan program responsibilities. Within these areas, civilian accreditation bodies develop their own accreditation standards, which can vary (see table 2). The military services' intermediate- and senior-level PME programs are assessed against the applicable accreditation standards to enable the PME programs to award master's degrees. There is no Chairman or OSD requirement for the military services' PME programs to have civilian accreditation status, but officials reported several benefits related to civilian accreditation. Specifically, DOD and civilian accreditation officials stated that civilian accreditation provides additional assurance from a recognized external authority that the military services' PME programs are meeting educational standards required of DOD and non-DOD programs alike. In addition, we previously reported that the U.S. accreditation system's use of peer review offers the relevant expertise to assess academic quality and provides institutions with feedback for improvement as a key strength of the system. Furthermore, DOD officials said that the ability to award master's degrees from an accredited program helps the programs attract and retain high-quality faculty.
The Military Services' Intermediate- and Senior-level PME Programs Are Accredited to Award JPME Credit, but Not All Programs Met the Seminar Student Mix Requirement
All Military Service Intermediate- and Senior-level PME Programs Are Accredited to Award JPME Credit
All of the military services' PME programs have been accredited by the Chairman to award JPME credit. The OPMEP outlines the JPME program accreditation requirements and processes, which are to occur at least every 6 years. DOD's process for accrediting the military services' JPME programs is the Chairman's PAJE. The PAJE is based on accepted civilian accreditation standards and practices. According to the OPMEP, the PAJE serves three purposes: (1) oversight, (2) assessment, and (3) improvement.
Once JPME programs are initially accredited, accreditation is reaffirmed through subsequent PAJEs every 6 years. In advance of a PAJE accreditation, the military service PME program submits an OPMEP-required self-assessment, which the PAJE team reviews prior to conducting the on-site accreditation. The PAJE team prepares a report on its findings, which includes a full, conditional, or no accreditation determination. PME programs receiving a conditional accreditation or reaffirmation must demonstrate improvements in particular areas within a specific timeframe in order to maintain their accreditation. Any program that fails to achieve accreditation, reaffirmation, or conditional accreditation/reaffirmation is no longer a JPME provider. According to the OPMEP, accreditation or reaffirmation is awarded when programs are judged satisfactory overall and have no significant weaknesses. Table 3 shows the date of the most recent JPME accreditation for each of the military services' intermediate- and senior-level PME programs. Additionally, the military services' PME programs have (1) met or partially met all of the required joint learning areas, such as joint command and control; and (2) met or partially met all required common educational standards, such as periodically assessing their JPME programs. First, the OPMEP requires intermediate- and senior-level PME programs to fulfill the appropriate joint learning areas and objectives and common educational standards, and generally have a curriculum that includes the required JPME content prescribed in statute. The PAJE review of the joint learning areas and common educational standards includes a combination of objective and subjective assessment based on peer expertise. Specifically, the OPMEP requires intermediate-level PME programs to fulfill the following six joint learning areas: (1) National military capabilities strategy, (2) Joint doctrine and concepts, (3) Joint and multinational forces at the operational level of war, (4) Joint planning and execution processes, (5) Joint command and control, and (6) Joint operational leadership and the profession of arms. The OPMEP requires senior-level PME programs to fulfill the following five joint learning areas: (1) National strategies; (2) Joint warfare, theater strategy and campaigning for traditional and irregular warfare in a joint, interagency, intergovernmental and multinational environment; (3) National and joint planning systems and processes for the integration of joint, interagency, intergovernmental and multinational capabilities; (4) Command, control and coordination; and (5) Strategic leadership and the profession of arms. According to the most recent Joint Staff PAJE accreditation reports, all of the military services' intermediate- and senior-level PME programs met all of these mandatory joint learning areas, with the exception of the Marine Corps intermediate-level PME program, which received a "partially met" rating in the joint learning area for joint planning and execution processes. Second, the OPMEP also requires intermediate- and senior-level PME programs to meet seven common educational standards that the Chairman considers essential in awarding JPME credit. Table 4 describes these seven common educational standards. The most recent Chairman's accreditation review found that each of the military services' PME programs met or partially met all seven OPMEP-required common educational standards, as shown in table 5.
According to Joint Staff officials, to be assessed as "met," a program must meet all of the criteria for that common educational standard. If a program does not meet all of the criteria, it receives a "partially met" rating for the standard. When a PAJE team determines that a program "partially met" a standard, the team suggests corrective actions for the program to consider. Receiving a "partially met" on a particular standard does not exclude a program from being accredited, as accreditation is based on the program being judged satisfactory overall and having no significant weaknesses. From our analysis of the Chairman's most recent accreditation reports, we identified the following examples of common educational standards that the military services' intermediate- and senior-level PME programs met or partially met. Standard 2: Employ Predominantly Active and Highly Effective Instructional Methods – The PAJE team found that the College of Naval Warfare met this standard during its most recent review in May 2015. This standard states that instructional methods should be appropriate to the subject matter and desired levels of learning, and should employ active student learning whenever feasible. In addition, the standard requires that the goals of the educational offerings be rigorous and challenging, requiring that students engage in critical thinking and active interaction. Specifically, the PAJE team found that the College of Naval Warfare employed a preponderance of active instructional methods to achieve desired learning outcomes. The team found that the effective combination of Socratic discussion, case studies, practical exercises, written assignments, and lectures followed by seminar discussions engaged students in critical thinking and was appropriate to the desired levels of learning. The PAJE team also found that active student discourse occurred both inside and outside of seminars. Lastly, the team found that the effectiveness of the curriculum in refining critical thinking skills was reflected in both student and alumni surveys. Standard 3: Assess Student Achievement – The PAJE team found that the Marine Corps Command and Staff College met this standard during its most recent review in September 2014. This standard states that each college should aggressively assess its students' performance, clearly state educational goals and objectives, and measure students' performance against defined standards using direct and indirect assessment tools to identify whether desired educational outcomes are being achieved. Specifically, the PAJE team found that the Marine Corps Command and Staff College clearly identified program outcomes, student learning outcomes, and lesson educational objectives. The PAJE team also found that student assessments were directly linked to student learning outcomes, joint learning areas, and joint learning objectives. Additionally, the team found that results were carefully tracked and used for educational outcome achievement verification, curriculum improvement, and faculty development feedback. Lastly, the PAJE team found that the College used a variety of student assessments—including research papers, exams, staff papers, oral presentations, exercises, practicums, oral defenses, and seminar participation—to provide feedback and verify learning outcome achievement.
Standard 4: Assess Program Effectiveness – The PAJE team found that the Army's Command and General Staff College partially met this standard during its most recent review in February 2014. This standard states that colleges should survey students, graduates, and their supervisors to determine curricula and educational effectiveness of their academic programs. The standard also states that leadership should periodically assess the intended educational outcomes of programs for currency, relevancy, and completeness, and the results of these analyses should be used to refine or develop curricula that continue to meet evolving mission requirements in the context of an ever-changing world. Specifically, the PAJE team found that there was a robust evaluation and assessment process for the common core courses, but that neither the electives nor the Command and General Staff College-level outcomes were assessed. Additionally, the PAJE team found that there did not appear to be a process for evaluating the overall curriculum either directly or indirectly. The PAJE team suggested that the Army's Command and General Staff College develop a capstone evaluation to assess outcomes of its common core curriculum. Army's Command and General Staff College officials told us that in 2016 the college developed a capstone evaluation for its common core curriculum, consisting of an online examination and a faculty member oral examination. Standard 5: Conduct Quality Faculty Recruitment: Selection, Assignment, and Performance Assessment Program – The PAJE team found that the Air War College partially met this standard during its last review in October 2014. This standard states that faculty should have the academic credentials, teaching skills, and experience in joint and professional matters necessary to teach in the colleges. This standard also states that faculty roles and responsibilities should be clearly documented, and that colleges should hold faculty accountable to clearly defined and measurable performance criteria and standards. Specifically, the PAJE team found that the Air War College did not meet the OPMEP standard for its student-to-faculty ratio, but acknowledged that the college had a plan to meet this requirement by the spring of 2015. The Air War College met the student-to-faculty ratio in academic year 2015. The review also found that delays in hiring presented challenges in maintaining the requisite number of qualified faculty. The PAJE team suggested that the Air War College continue its efforts to reduce the time to complete civilian hiring actions. Air War College officials stated that, as part of a wider Air University effort to streamline the civilian hiring process, they were able to address this challenge by making the process more transparent, predictable, and shorter.
Most Military Services' Senior-level PME Programs Met the JPME Seminar Student Mix Accreditation Requirement, but Some Intermediate-level Programs Did Not
Most of the military services' senior-level PME programs met the OPMEP JPME seminar student mix accreditation requirement, which is part of the "develop joint awareness, perspective, and attitude" common educational standard (Standard 1) that pertains to joint acculturation. However, not all of the military services' intermediate-level PME programs met the seminar student mix accreditation requirement.
The OPMEP requires that each intermediate- and senior-level JPME seminar contain at least one student from each of the two non-host military departments: the Department of the Army, the Department of the Navy (which includes the Marine Corps), and the Department of the Air Force. DOD defines joint acculturation as the process of understanding and appreciating the separate military service cultures, resulting in joint attitudes and perspectives, common beliefs, and trust. All but one of the military services' senior-level PME programs met the seminar student mix accreditation requirement from academic years 2014 through 2018. During that timeframe there were approximately 300 senior-level seminars, and only one did not meet the requirement. Specifically, during academic year 2017, the Air Force's senior-level PME program lacked sufficient Navy representation for one seminar. However, not all of the military services' intermediate-level PME programs met the seminar student mix accreditation requirement. Specifically, the Air Force's and the Army's intermediate-level PME programs had less than the required Sea Service representation for 3 years between academic years 2014 and 2018. For academic years 2016 and 2018, the Air Force's intermediate-level PME program had less than the OPMEP-required Sea Service representation for about 24 percent of its seminars (totaling 288 students), as shown in table 6 below. During the 3-year timeframe, the Army's intermediate-level PME program had less than the required Sea Service representation for about 22 percent of its seminars (totaling 664 students). On the other hand, the Navy's and the Marine Corps' intermediate-level PME programs generally met their respective seminar student mix accreditation requirements for each of the last 5 academic years (2014–2018). According to Navy officials and documentation, the Navy was unable to provide the other military services' intermediate-level PME programs with the required numbers of officers during academic years 2016–2018 because of competing staffing priorities, such as its forward presence mission. However, we found that the Navy provided enough officers to its own intermediate-level PME program (College of Naval Command and Staff) during each of these academic years that it could have instead assigned the required number of officers to the Air Command and Staff College and the Army's Command and General Staff College to meet their respective Sea Service requirements. For example, the Navy sent 121 Navy officers to the College of Naval Command and Staff in academic year 2018 for 27 seminars, when the Air Command and Staff College and the Army's Command and General Staff College needed a cumulative total of 32 officers to meet their OPMEP seminar student mix requirements. As a result, most of the College of Naval Command and Staff's seminars would have been reduced by only one Navy officer. Officials from all of the military service PME programs told us that interacting with students from other military departments is critical for joint acculturation. Officials from the Joint Staff Directorate for Joint Force Development reinforced the importance of the seminar student mix requirement, stating that satisfying the OPMEP common educational standard of developing joint awareness, perspective, and attitude (Standard 1) is dependent on the time and intensity of student interaction with students from other military departments.
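The seminar student mix requirement described above reduces to a simple set check, sketched below in Python; the department labels and sample rosters are illustrative.

MILITARY_DEPARTMENTS = {"Army", "Navy", "Air Force"}  # Navy includes the Marine Corps

def meets_student_mix(host_department, student_departments):
    # Each JPME seminar must include at least one student from each of
    # the two non-host military departments.
    required = MILITARY_DEPARTMENTS - {host_department}
    return required.issubset(set(student_departments))

# An Air Force-hosted seminar with Army students but no Sea Service
# (Navy/Marine Corps) students fails the requirement.
print(meets_student_mix("Air Force", ["Air Force"] * 12 + ["Army"]))          # False
print(meets_student_mix("Air Force", ["Air Force"] * 11 + ["Army", "Navy"]))  # True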
Military service and Joint Staff officials stated it was difficult for Air Force and Army officers to gain a full appreciation of the Navy's contribution to joint military operations when there were no Sea Service students in the seminar. In the situations where a seminar did not have Sea Service representation, Joint Staff officials told us that a decision was made to award students JPME credit. Furthermore, officials told us that it was decided not to "punish" military service PME programs for not meeting the OPMEP's JPME seminar student mix requirement, as the programs cannot control the number of inbound students assigned by the other military services. Officials from the Air Force's and the Army's intermediate-level PME programs told us that when they are unable to meet the OPMEP seminar student mix requirement, they take steps to compensate for the lack of Sea Service student representation, such as using faculty to provide Sea Service perspectives. Similarly, a 2010 congressional report noted the value of in-residence officer PME programs because of the acculturation opportunities that they offer. Other than Joint Staff officials requesting that the Navy meet the OPMEP's JPME seminar student mix requirement, no other actions have been taken by the Chairman, OSD, or the Navy to resolve the issue concerning Navy participation in the Air Force's and Army's intermediate-level PME programs. Specifically, according to DASD(FE&T) officials, as of November 2019, OSD had not been involved in addressing the Navy's failure to meet the OPMEP's JPME seminar student mix requirement. Additionally, Joint Staff officials told us that the Chairman cannot direct a Secretary of a military department to comply with provisions of a Chairman's publication. However, Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to risks related to achieving defined objectives. Given that joint acculturation is a key component of intermediate-level PME programs, the lack of action to resolve or mitigate the issues at hand has the potential to negatively affect students' opportunities to increase their knowledge and develop the necessary skills to operate in joint environments. Without DOD taking steps to determine whether the appropriate number of Navy officers can be assigned to the intermediate-level PME programs of the Air Force and Army, the officers participating in these programs lack the perspectives of Sea Service participants, which diminishes the quality of the educational experience. Furthermore, neither the Chairman nor OUSD(P&R) has evaluated or approved the mitigation steps, either before or after the fact, when a PME program lacks the representation needed to meet the joint acculturation requirement. Although the OPMEP requires that each intermediate- and senior-level JPME seminar contain at least one student from each of the two non-host military departments, the OPMEP does not contain guidance on what PME programs should do when they do not meet this requirement. Developing guidance concerning actions, if any, the military services can take to mitigate JPME seminar student mix shortfalls and still meet the intent of the OPMEP's joint awareness common educational standard could better position DOD and the military services to ensure that DOD's JPME programs are meeting their objectives.
OSD Is Taking Steps to Exercise Oversight of the Military Services' PME Programs, but Its Ability to Assess the Effectiveness of These Programs Is Limited

OSD has had PME and JPME statutory oversight responsibilities for more than 30 years; however, while it has taken some steps to strengthen its oversight, it is not well-positioned to assess the effectiveness of the military services' PME programs. The Goldwater-Nichols Act, as amended, states that the Secretary of Defense shall, with the advice and assistance of the Chairman of the Joint Chiefs of Staff, periodically review and revise the curriculum of JPME schools, and require that the PME schools periodically review and revise their intermediate- and senior-level PME curricula to strengthen the focus on joint matters and on preparing officers for joint duty assignments. Moreover, DOD Directive 5124.02 requires the Under Secretary of Defense for Personnel and Readiness to develop policies, plans, and programs for educating DOD personnel.

According to several DOD officials with whom we spoke, prior to the establishment of DASD(FE&T), OUSD(P&R) unofficially relinquished its responsibilities for PME and JPME to the Chairman, whose office issued the first version of the OPMEP in 1996. As mentioned earlier, the OPMEP outlines the Chairman's process for meeting statutory responsibilities for overseeing officer JPME, which is a subset of PME. For example, the OPMEP states that JPME is a Chairman-approved body of objectives, outcomes, policies, procedures, and standards supporting the educational requirements for joint officer management. As recently as 2017, OUSD(P&R) reported to Congress that it had no formal process for exercising its authority to periodically review and revise the curricula of officer JPME. In the same report, OUSD(P&R) stated that DOD was reviewing JPME and the DOD Joint Officer Management Program. OUSD(P&R) also reported that, with the 2015 reorganization of its office to include a Deputy Assistant Secretary of Defense for Force Education and Training (DASD(FE&T)), OSD was now organized to exercise its statutory authorities with respect to PME and JPME and would do so in line with the Secretary of Defense's direction in the National Defense Strategy. According to the 2015 implementation plan detailing the reorganization, the Deputy Assistant Secretary's responsibilities include measuring, assessing, and overseeing all aspects of education and training, which include PME and JPME.

In 2019, DOD issued guidance stating that the Assistant Secretary of Defense for Readiness is the principal advisor to the Under Secretary of Defense for Personnel and Readiness on all matters related to the readiness of the Total Force, including by developing policies and plans, providing advice, and making recommendations for PME, to include alignment to the National Defense and National Military Strategies and talent management and utilization. OUSD(P&R) is drafting its first DOD instruction (the draft instruction) covering PME and JPME, which DASD(FE&T) officials told us they plan to issue in February 2020. According to DASD(FE&T) officials, once issued, the DOD instruction will be the prevailing policy document for PME and JPME at the OSD level. While we believe these steps will improve OSD's oversight of the military services' PME and JPME programs, we identified areas that could continue to impede DOD's ability to assess the effectiveness of these programs.
Specifically:

DOD lacks a mission statement and performance measures for its PME and JPME programs. DASD(FE&T) officials stated that prior to the draft policy OUSD(P&R) had not developed a mission statement and performance measures for PME, but told us that the draft instruction would include a mission statement and examples of performance measures. However, when we reviewed the draft instruction, we did not identify a mission statement for PME that clearly defines the key purposes of the program. According to leading training and development practices, a mission statement is important to an organization's success because it explains the organization's purpose and goals and is the basis for goal-directed performance measures. The draft instruction proposes the performance measures the military services should track and assess as part of their required annual program reviews, such as graduate assignments and retention rates. Performance measures are important because they assess an organization's progress toward achieving results that are aligned with its mission. However, without a department-wide mission statement for PME and JPME, OUSD(P&R) is not well-positioned to propose performance measures for the military services to track that would enable OUSD(P&R) to assess the effectiveness of these programs.

Further, our review of the draft instruction found no examples of cost-related performance measures. DASD(FE&T) officials confirmed that cost-related performance measures were not included in the draft instruction, but told us that they planned to coordinate with officials from the Joint Staff Directorate for Joint Force Development to refine the performance measures sometime in the future. DOD's Financial Management Regulation states that performance measurement is a means of evaluating efficiency, effectiveness, and results, and that a balanced performance measurement scorecard includes nonfinancial and financial measures focusing on quality, cycle time, and cost. Moreover, leading training and development practices state that performance measures should include both qualitative and quantitative measures to assess training results, and should include the identification and tracking of costs. These same leading practices state that organizations should compare the associated costs and monetized benefits of training programs to determine return on investment. DASD(FE&T) officials told us that having cost information on the military services' PME and JPME programs to determine return on investment would enable their office to compare and make well-informed decisions about these programs.

DOD lacks a requirement for the military services to report periodically on PME and JPME programs. OUSD(P&R) has not established a requirement for the military services to periodically report information to its office on their respective PME and JPME programs. For example, the Chairman's PAJE reports that document accreditation findings and include a full, conditional, or no accreditation determination are not provided to OUSD(P&R). According to the OPMEP, PAJE reports are to be forwarded to the Chief of the applicable military service, the Director of the Defense Intelligence Agency, or the President of the National Defense University for appropriate action. A Joint Staff official confirmed that PAJE reports are not provided to OUSD(P&R).
Our review of the draft instruction found no requirement for the Chairman to provide PAJE reports to OUSD(P&R), nor is there a requirement for the military services to report information on their PME and JPME programs—such as their annual program reviews—to OUSD(P&R). According to DASD(FE&T) officials, reporting requirements were omitted from the draft instruction because their office lacks the personnel to review and assess the information the military services would be required to collect and report. However, without a requirement for the military services to periodically report information on their PME and JPME programs, OUSD(P&R)'s ability to assess the effectiveness of these programs and perform meaningful oversight will continue to be limited.

Leading training and development practices state that organizations should collect appropriate performance data during implementation and establish accountability for the results of these efforts. Additionally, Standards for Internal Control in the Federal Government state that management relies on quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks. These same standards state that management should receive quality information about the entity's operational processes to help management achieve the entity's objectives. Because OUSD(P&R) does not require the military services to periodically report information on their respective PME programs, it does not have information that would help it assess the effectiveness of these programs. We believe that addressing these limitations will enhance the ability of OUSD(P&R) and its subordinate office (i.e., DASD(FE&T)) to oversee and assess the effectiveness of the military services' PME programs.

USD(Comptroller's) Ability to Monitor the Military Services' PME Program Budgets Is Limited

USD(Comptroller's) ability to monitor the military services' PME programs is limited because the military services' budget request data are incomplete and lack uniformity. DOD's Financial Management Regulation requires the military services to submit separate budget request data on PME programs in support of DOD's annual budget request, and these data are included in DOD's annual congressional budget justification exhibits. While the Financial Management Regulation requires the military services to submit separate annual budget request data exhibits for most of their intermediate- and senior-level PME programs, it does not require the Marine Corps to submit an exhibit for its senior-level PME program, the Marine Corps War College. Based on our review of the Marine Corps' fiscal years 2014 through 2020 budget request data exhibits, and according to USD(Comptroller) and Marine Corps officials, the Marine Corps did not submit a budget request data exhibit for the Marine Corps War College during this 7-year period. USD(Comptroller) and Marine Corps officials could not explain why the Marine Corps War College was omitted from the DOD Financial Management Regulation; DOD last updated the chapter requiring this submission in September 2008. In addition, the data the military services include in their annual budget requests vary.
DOD Directive 5118.03 outlines USD(Comptroller) responsibilities, requiring the Comptroller to, among other things: (1) direct the formulation and presentation of DOD budgets; and (2) establish and supervise the execution of uniform DOD policies, principles, and procedures—including terminologies and classifications, as necessary—for budget formulation, presentation, and execution, and certain other topics. Additionally, section 2162 of title 10, U.S. Code, requires the Secretary of Defense, with the advice and assistance of the Chairman of the Joint Chiefs of Staff, to promulgate a uniform cost accounting system for use by the Secretaries of the military departments in preparing budget requests for the operation of PME schools. However, the DOD Financial Management Regulation does not specify how the military services should account for the data required for the military services' budget request data submissions. Consequently, the budget request data reported by the military services vary. For example, in their fiscal year 2020 budget request data submissions, the Army and the Air Force combined distance education and in-residence education programs, the Navy reported these data in separate exhibits, and the Marine Corps omitted distance education costs for its intermediate-level PME program. Additionally, according to DOD officials, the extent to which the military services accounted for the costs to operate and maintain their PME colleges—such as security, facility maintenance, and information technology support—varies.

In 1987, the year following the passage of the Goldwater-Nichols Act, the House Armed Services Committee established a panel on PME led by Representative Ike Skelton (the Skelton Panel). The Skelton Panel undertook a comprehensive congressional review of PME and published its findings and recommendations in a 1989 report (the Skelton Report). Although the Skelton Panel did not take a comprehensive look at how well PME institutions were funded to accomplish their mission, the panel inquired into the cost per student at each school and reported receiving from OSD raw data submitted by each PME institution, which reflected considerable differences in the scope and cost methodology used by the PME institutions. The Skelton Report recommended that DOD establish a uniform cost accounting system for the PME schools, and that the annual report of the Secretary of Defense provide data on PME costs beginning in 1990.

A 2010 congressional report focused on PME developments since the Skelton Panel's review investigated, among other things, whether a uniform cost accounting system existed. The congressional report found that DOD did not have a uniform cost accounting method for PME schools and that it had not provided cost data to support useful comparisons among PME schools. The report included a recommendation for DOD to report its PME funding to Congress using a standardized accounting method for cost per student at each of the PME institutions, as recommended by the Skelton Panel in 1989. According to DASD(FE&T) and Joint Staff officials, the department has not collected or reported PME program cost information to Congress as the 1989 Skelton Report and the 2010 congressional report both recommended. Without complete and uniform budget request data, USD(Comptroller)'s ability to monitor the military services' PME programs, identify program trends within the Marine Corps and among the other military services' PME programs, and formulate meaningful inter-service comparisons is limited.
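To illustrate the kind of standardized cost-per-student comparison that a uniform cost accounting system could support, the following minimal sketch (written in Python, with entirely hypothetical school names and dollar figures, since neither the Skelton Report nor the 2010 congressional report prescribes a particular computation) sums the same cost elements for every school before dividing by enrollment:

from dataclasses import dataclass

@dataclass
class PmeProgramCosts:
    # All fields are hypothetical cost elements chosen for illustration only.
    name: str
    salaries: float            # faculty and staff pay
    facilities: float          # security and facility maintenance
    it_support: float          # information technology support
    distance_education: float  # reported separately or combined today
    students: int

    def total_cost(self) -> float:
        # A uniform method sums the SAME cost elements for every school.
        return self.salaries + self.facilities + self.it_support + self.distance_education

    def cost_per_student(self) -> float:
        return self.total_cost() / self.students

programs = [
    PmeProgramCosts("School A (hypothetical)", 42_000_000, 6_500_000, 2_000_000, 1_500_000, 550),
    PmeProgramCosts("School B (hypothetical)", 35_000_000, 5_000_000, 1_800_000, 0, 420),
]

for p in programs:
    print(f"{p.name}: ${p.cost_per_student():,.0f} per student")

The design point is simply that every school reports against the same cost categories; that shared definition is what makes cost-per-student figures comparable across institutions.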
Conclusions

DOD relies on PME to prepare its military personnel for the intellectual demands of complex contingencies and major conflicts that typically involve more than a single military service. While all of the military services' intermediate- and senior-level PME programs have met or partially met the accreditation requirements established by civilian accreditation bodies and the Chairman to award master's degrees and JPME credit, respectively, not all service programs have met the seminar student mix requirement. The Navy, for example, did not provide the requisite representation of officers in Army and Air Force intermediate-level seminars during academic years 2016 through 2018. Requiring DOD to determine whether the requisite number of Navy officers can be assigned to the other military departments' JPME programs and to develop policy to mitigate student mix shortfalls would address persistent student mix imbalances and align with the joint acculturation goal of JPME.

OUSD(P&R)'s draft DOD instruction, expected to be finalized in February 2020, will be the prevailing policy document for PME and could improve OSD's oversight of the military services' PME and JPME programs. However, OUSD(P&R)'s ability to assess the effectiveness of the military services' PME programs is limited by the absence of a department-wide mission statement that explains the purpose and goals of PME and aligns with the proposed performance measures in the draft instruction; the absence of a requirement for the military services to track program costs as a performance measure; and the absence of a requirement for the military services to report data on their PME and JPME programs—such as their annual reviews of PME programs. Addressing these limitations would better position OUSD(P&R) to oversee and assess the effectiveness of the military services' PME and JPME programs.

Finally, USD(Comptroller)'s ability to monitor the military services' PME programs is limited because the services' budget request data are incomplete and lack uniformity. Although the military services are required to submit separate budget request data exhibits for most PME institutions, the Financial Management Regulation does not require the Marine Corps to submit an annual budget request data exhibit for its senior-level PME program. Moreover, the data the military services include in their annual budget requests vary because the Financial Management Regulation does not specify how to account for costs. Requiring the Marine Corps to report budget request data on its senior-level PME program annually, and specifying how to account for costs in the exhibits, would enhance USD(Comptroller)'s ability to monitor the military services' PME programs and also enhance Congress's ability to identify trends among these programs.

Recommendations for Executive Action

We are making a total of seven recommendations to the Secretary of Defense. Specifically:

The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in coordination with the Chairman of the Joint Chiefs of Staff and the Secretary of the Navy, determine whether the Navy can assign the required number of officers to the other military departments' JPME programs, consistent with Chairman of the Joint Chiefs of Staff guidance.
(Recommendation 1)

The Secretary of Defense should ensure that the Chairman of the Joint Chiefs of Staff, in coordination with the Under Secretary of Defense for Personnel and Readiness and the military services, develop policy concerning actions, if any, the military services can take to mitigate JPME seminar student mix shortfalls and still meet the intent of the OPMEP's joint acculturation requirement. (Recommendation 2)

The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in coordination with the Chairman of the Joint Chiefs of Staff, develop and issue a department-wide mission statement for PME that will explain the program's purpose and goals, and serve as a basis for performance measures. (Recommendation 3)

The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in coordination with the Chairman of the Joint Chiefs of Staff, issue and implement performance measures—to include the tracking of costs—that align with the department-wide mission statement for PME. (Recommendation 4)

The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in coordination with the Chairman of the Joint Chiefs of Staff, require the military services to periodically report information to its office about the military services' PME and JPME programs—such as results of program reviews. (Recommendation 5)

The Secretary of Defense should ensure that the Under Secretary of Defense (Comptroller) updates the DOD Financial Management Regulation to require the Marine Corps to include a budget request data exhibit for the Marine Corps War College in support of DOD's annual budget request. (Recommendation 6)

The Secretary of Defense should ensure that the Under Secretary of Defense (Comptroller), in coordination with the military services and the Chairman of the Joint Chiefs of Staff, issue guidance to standardize the cost data that the military services should include in their annual PME budget request data submissions. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a draft of this product to DOD for comment. In its comments, reproduced in appendix II, DOD concurred with all of our recommendations and stated that it will implement them by issuing policy, among other actions.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Chairman of the Joint Chiefs of Staff; and the Secretaries of the Army, Navy, and Air Force. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Status of the Joint Special Operations University Pursuing Additional Accreditation

The Joint Special Operations University (JSOU) was established in September 2000 and is located at MacDill Air Force Base, Florida.
The mission of JSOU is to prepare special operations forces to shape the future strategic environment by providing specialized Joint Professional Military Education (JPME); developing applicable undergraduate- and postgraduate-level equivalent curriculum; and fostering special operations research, analysis, and outreach in support of Special Operations Command objectives. JSOU staff and faculty include active duty, active reserve, and temporary duty reserve military personnel; government civilians; civilian contractors; private consultants; and guest lecturers and speakers. JSOU's active duty military personnel are assigned to the university by Special Operations Command and the military services. JSOU's professional military education vision is to prepare warfighters to solve ambiguous, complex problems across the spectrum of conflict by providing dynamic and adaptive professional education opportunities.

In August 2015, the Accrediting Council for Continuing Education and Training accredited JSOU through December 2019. As of January 2020, officials stated that the university is undergoing reaccreditation and expects reaffirmation notification by the end of February 2020. While JSOU offers a number of courses, seminars, and programs, officials from JSOU and the Office of the Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict stated the university has no near-term plans to award master's degrees; therefore, no additional civilian accreditation is necessary. JSOU officials said that they are contemplating offering senior-level JPME in the future, but stated that such an endeavor would take at least 10 years to accomplish.

Consistent with its mission of preparing special operations forces to shape the future strategic environment, JSOU laid out the following seven goals in its 2019 academic guidance:

1. Continue to refine target audiences in all courses, assuring the right curricula are provided to the right student at the right time.

2. Implement a title 10, U.S. Code, civilian faculty hiring process that leverages the DOD professional military education community, fully supports the JSOU vision, and retains control to rapidly hire faculty with expertise in required disciplines.

3. Establish and complete a comprehensive building improvement plan that provides a quality learning environment conducive to educational excellence and student success.

4. Establish and complete a comprehensive education technology plan that brings all classrooms and auditoriums up to the planned capability inherent in a state-of-the-art learning institution.

5. Facilitate the Technology Review Committee to define and develop the JSOU advanced classroom concept, capable of supporting a wide variety of innovative teaching methodologies.

6. Develop and sustain academic programs in the emerging mission areas of artificial intelligence/machine learning, countering weapons of mass destruction, cyberspace, sensitive activities, and joint unconventional warfare that directly support special operations.

7. Develop highly effective academic instructors and distinguished experts in their individual fields of knowledge. Remain sensitive to individual needs and career development as JSOU embarks on new hiring processes and classroom innovations.
According to the JSOU Fact Book for 2018, the newly authorized title 10, U.S. Code, civilian faculty hiring authorities will allow JSOU faculty to attain new heights of excellence with expertise not normally found within the military or civil service communities. The Fact Book states that the title 10, U.S. Code, faculty hiring authority will have a major impact on shaping JSOU's curriculum and will directly add to special operations forces' readiness and capability.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Marc Schwartz (Assistant Director), Norris “Traye” Smith (Analyst in Charge), Rebecca Guerrero, Edward Malone, Stephanie Moriarty, Patricia Powell, Carter Stevens, and Lillian M. Yob made significant contributions to this report.

Related GAO Products

Higher Education: Expert Views of U.S. Accreditation. GAO-18-5. Washington, D.C.: December 22, 2017.

Higher Education: Education Should Strengthen Oversight of Schools and Accreditors. GAO-15-59. Washington, D.C.: December 22, 2014.

Joint Professional Military Education: Opportunities Exist for Greater Oversight and Coordination of Associated Research Institutions. GAO-14-216. Washington, D.C.: March 10, 2014.

Joint Military Education: Actions Needed to Implement DOD Recommendations for Enhancing Leadership Development. GAO-14-29. Washington, D.C.: October 23, 2013.
Why GAO Did This Study

DOD relies on PME and JPME to prepare its military personnel throughout their careers for the intellectual demands of complex contingencies and major conflicts that typically involve more than a single military service. However, according to DOD's summary of the 2018 National Defense Strategy, PME “has stagnated, focused more on the accomplishment of mandatory credit at the expense of lethality and ingenuity.” The Conference Report accompanying the John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for GAO to evaluate DOD PME and JPME institutions. This report examines the extent to which (1) the military services' PME programs have met civilian and JPME accreditation requirements, (2) OSD has assessed the effectiveness of the military services' PME programs, and (3) USD(Comptroller) has monitored the military services' PME program budget data. GAO analyzed applicable laws and policy, analyzed accreditation and budget information, and interviewed officials from the military services' intermediate- and senior-level resident PME programs.

What GAO Found

All of the military services' intermediate- and senior-level officer Professional Military Education (PME) programs have met civilian accreditation requirements and have met or partially met Joint PME (JPME) accreditation requirements. However, not all of the military services' PME programs met the JPME seminar student mix requirement of at least one student from each non-host military department. For example, the Army's intermediate-level PME program did not meet its Sea Service (i.e., Navy, Marine Corps, and, in certain instances, Coast Guard) requirement (see table). GAO's analysis found that the Navy could have assigned officers to Air Force and Army programs without harming participation in its own seminars. Without taking steps to improve Sea Service participation, students lose opportunities to interact with students from other military departments, which officials have identified as critical to joint acculturation.

The Office of the Secretary of Defense (OSD) has taken steps to improve its oversight of the military services' PME programs, but is limited in its ability to assess their effectiveness. Department of Defense (DOD) guidance states that performance measurement is a means of evaluating efficiency, effectiveness, and results, and that a balanced performance measurement scorecard includes nonfinancial and financial measures focusing on quality, cycle time, and costs. While OSD is in the process of developing some performance measures, it is not planning to require the military services to track program costs. Implementing its planned measures and establishing costs as a performance measure will better position OSD to assess the effectiveness of PME programs.

The Under Secretary of Defense (USD) (Comptroller's) ability to monitor the military services' PME programs is limited by incomplete and inconsistent reporting of service budget request data. DOD guidance does not require the Marine Corps to submit an annual budget request data exhibit for its senior-level PME program, and existing guidance for programs that are reported does not specify how to uniformly account for costs. Without complete and uniform budget request data, USD(Comptroller) is challenged in monitoring these programs.
What GAO Recommends

GAO is making seven recommendations, including that DOD take steps to determine its ability to assign Navy officers to PME programs of other services, implement performance measures (including the tracking of costs), and issue guidance for service reporting of PME budget request data. DOD concurred with all of GAO's recommendations.
Background

State and local governments rely on a range of revenue sources to support their activities, including federal grants, user charges, and taxes. The share of revenue generated from different types of state and local taxes and user charges—also referred to as own-source revenue—varies by state or local government. State and local governments face fiscal pressures when, taken as a whole, spending exceeds revenues. Fiscal pressures may reflect growth in selected expenditure categories without corresponding revenue growth or other spending reductions.

To alleviate fiscal pressures and comply with balanced budget requirements, state and local governments may seek to reduce spending, increase revenues, or both. For example, state and local governments may offset increased costs in one program by making cuts to other programs where they have more flexibility to adjust certain types of spending. Alternatively, if their ability to adjust spending is limited, they may seek additional revenue by increasing existing taxes or user charges or imposing new ones. For example, some programs may have spending that is defined or required in state law and must be funded annually, regardless of broader economic circumstances. Other spending may not be subject to legal or other requirements and is thus subject to decisions influenced by current fiscal pressures.

Changes in the makeup of state and local government services, spending, and revenues may reflect economic or demographic changes, a change in spending priorities, or changes in federal policy. Fiscal pressures can result from spending growth or revenue declines that are not the direct result of current state and local policy choices. These pressures may instead reflect automatic spending growth (for example, in response to population shifts or an increase in the number of people eligible for government programs) or declines in revenue due to changes in the economy (for example, a shift from goods to services without a corresponding shift in the tax base). Individual expenditure categories can also face fiscal pressures. For example, employee pension funds can experience investment returns below the rates of return assumed in budget forecasts, which can then result in underfunded liabilities.

State and Local Governments Experienced Overall Growth in Expenditures and Revenues during the Past 20 Years

State and Local Government Expenditures and Revenues Increased in Most Categories from 1998 to 2018

From 1998 to 2018, state and local government expenditures increased from about $1.7 trillion to about $2.8 trillion. Figure 1 shows that most state and local government expenditure categories experienced slight shifts during this period. While some categories declined as a share of total spending, inflation-adjusted spending increased in all expenditure categories. Health expenditures reflected the largest increase in inflation-adjusted spending, increasing from $288 billion in 1998 to $670 billion in 2018. As a share of total expenditures, health spending increased by 7 percentage points, from 17 percent in 1998 to 24 percent in 2018. Inflation-adjusted spending on education—the largest share of state and local expenditures—increased by more than $300 billion from 1998 to 2018. However, as a share of total spending, education expenditures decreased by 2 percentage points during the period, in large part because of the sizable growth in health expenditures.
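For reference, the "average annual growth" comparisons that follow can be read as compound annual growth rates; a minimal sketch of that computation (assuming this is the intended measure, which the report does not spell out here), applied to the expenditure totals above, is:

\[
g = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1 = \left(\frac{\$2.8\text{ trillion}}{\$1.7\text{ trillion}}\right)^{1/20} - 1 \approx 2.5 \text{ percent per year}
\]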
From 1998 to 2018, state and local government revenues increased from about $1.6 trillion to about $2.6 trillion (see figure 2). In every year between 1998 and 2018, state and local government taxes (i.e., personal income, sales, excise, property, corporate, and other taxes) comprised the largest category of receipts for the sector, providing about $1.8 trillion, or 69 percent of total revenues, in 2018. With the exception of interest receipts, all revenue categories increased in inflation-adjusted dollars from 1998 to 2018. Interest receipts decreased from $108 billion, or 7 percent of total revenues, in 1998 to $72 billion, or 3 percent of total revenues, in 2018.

Federal grants comprised the second largest category of state and local government revenues in both 1998 and 2018 (see figure 2). Federal grants increased from $288 billion, or 17 percent of total revenues, in 1998 to $569 billion, or 22 percent, in 2018—an increase of $281 billion, or 5 percentage points. Figure 3 provides a more detailed breakdown of federal grants to state and local governments from 1998 to 2018. Compared to other grant categories, health grants reflected the only increase in the share of federal grants to state and local governments, increasing from 53 percent in 1998 to 70 percent in 2018. Most of this growth occurred after 2010, following the enactment of the Patient Protection and Affordable Care Act (PPACA), which offered federal Medicaid funding for states choosing to expand their programs to low-income adults. As a share of total federal grants, income security grants reflected the largest decrease—from 26 percent in 1998 to 17 percent in 2018. However, income security grants increased in inflation-adjusted dollars, from $75 billion in 1998 to $96 billion in 2018. The decline in income security grants as a share of total federal grants reflects shifts resulting from faster growth in health grants during the 20-year period.

State and Local Government Expenditures and Revenues Grew Faster Than State Gross Domestic Product and Varied by Type

In most states, growth in both state and local government expenditures and revenues exceeded growth in state gross domestic product (GDP) from 1997 to 2017. As shown in table 1, growth in expenditures equaled or exceeded growth in state GDP in each of the 5-year periods from 1997 to 2017. Revenues grew faster than state GDP, on average, during the 20-year period, though they grew somewhat slower than state GDP from 2008 to 2012. Table 1 also shows that state and local government expenditures, revenues, and state GDP all experienced more robust growth during the first half of the 20-year period (1997 to 2007) than in the second half (2008 to 2017). On average, growth in state and local government expenditures outpaced growth in state and local government revenues by about 0.3 percentage points per year. As shown in figure 4, expenditures grew faster than revenues in 43 states from 1997 to 2017. We have previously reported on state and local government expenditure growth trending in excess of revenue growth and its implications for increasing state and local government fiscal pressures. For example, our most recent simulations suggest that the state and local government sector could continue to face a gap between expenditures and revenues during the next 50 years.
Because many state and local governments are required to balance their operating budgets, they will most likely need to make policy changes involving some combination of reduced spending and increased revenue.

Most Types of Expenditures Grew among States, with Public Welfare Spending Showing the Fastest Growth

Spending in most expenditure categories grew faster than or at the same rate as state GDP in a majority of states from 1997 to 2017 (see table 2). State and local government expenditures, as a whole, grew at an average annual rate of 2.8 percent from 1997 to 2017 and faster than state GDP in 43 states. Public welfare spending showed the fastest growth among all state and local government expenditure categories, growing at an average annual rate of 4.9 percent during the period.

Public welfare. Public welfare—which includes Medicaid and welfare programs, such as Temporary Assistance for Needy Families—grew faster than all other spending categories from 1997 to 2017. Public welfare spending grew faster than state GDP in all but two states, at an average annual rate of 4.9 percent during the period. The Centers for Medicare & Medicaid Services (CMS) Office of the Actuary projected that Medicaid spending would grow at an average rate of 5.7 percent per year from fiscal years 2017 to 2026, with projected Medicaid expenditures reaching more than $1 trillion by fiscal year 2026. Since Medicaid is a matching formula grant program, the projected growth rate reflects expected increases in Medicaid expenditures that will be shared by state governments. Furthermore, our long-term simulations of the state and local government sector's fiscal outlook have shown that health expenditures are expected to continue to increase faster than the economy during the next 50 years.

Hospitals and health. Expenditures on hospitals and health—which include state and local government spending on public health and hospitals, but not Medicaid—grew at an average rate of 2.6 percent per year from 1997 to 2017. Across all states, average annual growth in spending on hospitals and health ranged from -2.8 percent per year to 7.8 percent per year, reflecting the largest spread of any spending category. Further, growth in spending on hospitals and health was not distributed evenly across this range. In eight states, hospital and health expenditures grew at an average annual rate of less than 1 percent, while the average annual growth rate exceeded 3 percent in 20 states.

Education services. Spending on education services (i.e., schools, colleges, other educational institutions, and educational programs for adults, veterans, and other special classes) grew at an average rate of 2.6 percent per year and faster than state GDP in 36 states from 1997 to 2017. This average annual growth rate reflects faster growth of 4.1 percent per year, on average, from 1997 to 2007 and slower growth of 0.7 percent per year, on average, from 2008 to 2017. During the second half of the 20-year period, from 2008 to 2017, spending on education services grew more slowly than state GDP in 39 states.

Public safety. Spending on public safety, which includes state and local government services such as police, fire protection, and corrections, grew in all states at an average rate of 2.5 percent per year from 1997 to 2017. In 34 states, public safety spending grew faster than state GDP during the period. Further, public safety expenditures grew faster than 3 percent per year in 13 states and slower than 1 percent per year in three states during the same period.
Transportation. Spending on transportation grew across states at average annual rates between -1.4 percent and 7.2 percent from 1997 to 2017. In 35 states, transportation spending grew between 1 percent and 3 percent per year, on average, during this period. Transportation spending grew slower than 1 percent per year, on average, in seven states, while in nine states transportation spending grew faster than 3 percent per year, on average.

Environment and housing. Expenditures on environment and housing, which include functions related to natural resources and housing and community development programs, grew, on average, at a rate equal to state GDP from 1997 to 2017, with average annual growth across states ranging from a low of 0.3 percent to a high of 6.4 percent. Environment and housing spending growth exceeded state GDP growth in 24 states, while these expenditures grew more slowly than state GDP in 27 states. From 1997 to 2007, environment and housing spending grew at an average rate of 4.3 percent per year. From 2008 to 2017, this spending category grew at an average annual rate of 0.03 percent.

Government administration. Government administration includes functions related to managing the government's day-to-day work, such as financial administration, judicial and legal costs, and central staff services and personnel agencies. Spending in this category grew slightly slower than state GDP, at an average rate of 2.1 percent per year from 1997 to 2017. Government administration spending grew faster from 1997 to 2007 (at an average rate of 3.6 percent per year) than from 2008 to 2017 (at an average rate of 0.4 percent per year).

Other selected expenditures. Spending on interest on debt (i.e., all spending on borrowed money except utility debt) grew slower than state GDP in 48 states, with annual growth ranging from -5.1 percent to 2.5 percent across states; average annual growth in interest paid to finance debt equaled -0.1 percent, and from 2008 to 2017 spending on debt interest decreased at an average annual rate of 2.1 percent. Insurance benefits and repayment expenditures, which include retirement benefits, were the fastest growing category of selected expenditures. Salaries and wages for state and local government employees grew slower than state GDP in 46 states and slower than 1 percent per year in seven states.

Growth in the Sector's Revenues Driven by Federal Grants and User Charges

General revenues, as a whole, grew faster than state GDP in 35 states from 1997 to 2017, with the fastest growth in federal grants (3.5 percent per year) and user charges (3.1 percent per year). Table 3 shows state and local government revenue broken down into two larger categories: (1) federal grants, which include all federal fiscal aid to state and local governments; and (2) own-source revenue, which includes all general revenue state and local governments generate from their own sources, such as taxes and user charges. In the following section, we discuss trends in selected revenue categories identified in table 3. These selected revenue categories—federal grants, user charges, and property taxes—represent the three largest categories of revenue for the state and local government sector.

Federal grants. Federal grants were the fastest growing source of revenue for the sector from 1997 to 2017, growing in every state, and faster than state GDP in 45 states, at an average annual rate of 3.5 percent.
During the same period, state and local governments' own-source revenue (i.e., taxes and user charges) grew at an average rate of 2.2 percent per year, with growth across states ranging from -1.7 percent to 3.9 percent per year. However, state and local governments' own-source revenue grew faster than state GDP in about half of the states. At the same time, this revenue growth varied among grant categories and across states.

User charges. State and local government user charges comprised the second fastest growing revenue category for the sector from 1997 to 2017. User charges grew faster than state GDP in 40 states, at an average rate of 3.1 percent per year. In addition, user charges grew in every state, at average rates between 0.9 percent and 6.2 percent per year.

Total taxes. State and local government taxes, the largest category of own-source revenue, grew slower than state GDP from 1997 to 2017. Specifically, state and local government total tax revenues grew at a rate of about 2.1 percent per year, on average. As shown below, for the three major tax categories—property, sales, and individual income—growth varied overall and across states.

Property taxes. Property taxes were the fastest growing tax category for the sector—growing in nearly all states at an average rate of 2.6 percent per year from 1997 to 2017. Property taxes grew faster than state GDP in 36 states and faster than 3 percent per year in 17 states. Property taxes drove own-source revenue growth during this period. Compared to other tax revenue categories, property taxes have been a relatively stable revenue source for local governments. In addition, property taxes grew at an average rate of 1.4 percent per year from 2008 to 2012, while both sales and income taxes showed negative growth during that period.

Sales taxes. Sales taxes grew at an average rate of 2 percent per year from 1997 to 2017, with growth across states ranging from a low of -0.6 percent to a high of 4.1 percent. Revenue from sales taxes grew slower than state GDP in 28 states and slower than 1 percent per year in six states. Slower sales tax growth could reflect a shrinking sales tax base for state and local governments. Many states do not levy a tax on services—which represent more than two-thirds of all consumption. These states must therefore raise sales tax revenue from a smaller base.

Individual income taxes. From 1997 to 2017, growth in individual income taxes showed greater variation across states and over time than growth in either property or sales taxes. Similar to the growth in sales taxes, individual income taxes grew at an average rate of 2 percent per year, but reflected a wider range of growth from 1997 to 2017. Individual income taxes grew slower than state GDP in 26 states and slower than 1 percent per year in six states. From 2008 to 2017, growth in individual income taxes slowed to an average rate of 0.3 percent per year—a more than 3-percentage-point slower growth rate compared to the period from 1997 to 2007.

Table 4 shows that public welfare grants to state and local governments—which include Medicaid—grew faster than state GDP in 47 states. Public welfare grants grew faster than 3 percent per year in 45 states from 1997 to 2017. During this period, public welfare grants grew in all states at an average rate of 4.6 percent per year, ranging from 1.8 percent to 9.5 percent per year. Grant funding for education and highways grew faster than state GDP, at average annual rates of 2.6 percent and 2.4 percent, respectively.
Although a relatively small share of federal grants, natural resources grants had the largest average annual growth rate—4.9 percent—and grew faster than state GDP in 37 states from 1997 to 2017. Federal grants grew faster than own-source revenue overall and in a majority of states from 1997 to 2017. Figure 5 compares the rate of growth in own-source revenue to the rate of growth in federal grant revenue during the period and shows that, for the majority of states, revenue from federal grants grew faster than own-source revenue.

State Rainy Day Fund Balances Fluctuated During the Past 20 Years and Experienced Consistent Growth Since 2010

State rainy day fund balances fluctuated as a median percentage of general fund expenditures from 1998 to 2018 and have experienced consistent increases since 2010. Rainy day funds include state budget stabilization or reserve funds that state governments may use to supplement general fund spending during a revenue downturn or other unanticipated shortfall. Every state has some type of rainy day fund, though deposit and withdrawal rules vary considerably. Robust rainy day fund balances alone do not necessarily indicate strong fiscal positions, but they are one of the primary mechanisms available to states to offset a budget gap, along with spending reductions or tax increases. However, these funds will not necessarily relieve longer-term structural fiscal pressures.

Median state rainy day fund balances as a percentage of total general fund expenditures increased to their highest level in the last 20 years in 2018. Figure 6 shows that states' median rainy day fund balances increased from 1.6 percent of general fund expenditures in 2010 to 6.4 percent in 2018. Further, the median balance of state rainy day funds declined significantly after each of the last two recessions, while states gradually restored their balances each time. From 2016 to 2018, the majority of states maintained rainy day fund balances in excess of 5 percent of their general fund expenditures. The number of states with rainy day fund balances that exceeded 5 percent of their general fund expenditures doubled from 16 states in 1998 to 32 states in 2018 (see figure 7). Specifically, nearly half of the states maintained rainy day fund balances greater than 5 percent and less than 10 percent of their general fund expenditures in 2018. Six states had rainy day fund balances equal to 1 percent or less of their general fund expenditures, down from 11 states in 1998.

Experts Identified Federal Policies and Other Considerations That Affect State and Local Governments' Fiscal Conditions

Experts we interviewed identified a range of federal policies and other considerations that could affect the fiscal condition of state and local governments. While other issues also affect the state and local sector's fiscal condition, this section focuses on the issues that emerged most frequently during the interviews: the effects of federal policies on the sector's fiscal condition, and the fiscal pressures facing states and localities that could require a federal policy response to ensure effective delivery of federal programs implemented by these governments. Those issues include health care, federal budget uncertainty, physical infrastructure, tax policy, and natural disasters.

Health care. Most experts agreed that health care costs and, in particular, Medicaid have placed fiscal stress on state and local governments.
A number of experts expressed concerns about the long-term sustainability of Medicaid and states' ability to meet future demand, given current demographic trends and expectations for escalating enrollment. As we discussed earlier, Medicaid has been the fastest growing category of state spending and, based on our simulations, is expected to rise faster than GDP during the next 50 years. Some experts noted that growth in Medicaid affects states' fiscal conditions as it has become a larger portion of states' budgets. They pointed out that even though states have experienced a recent leveling off in Medicaid enrollment, states have also experienced a faster rate of growth in spending. Two experts attributed this growth largely to the aged and disabled enrollment groups, which account for a larger share of program spending.

A number of experts said that states that expanded their Medicaid programs have seen the largest increases in enrollment—driven by adults who are newly eligible for the program. CMS's Office of the Actuary projected that Medicaid enrollment could grow by as many as 13.3 million newly eligible adults by 2026, as additional states may expand their Medicaid programs to cover certain low-income adults under the Patient Protection and Affordable Care Act (PPACA). The Congressional Budget Office also reported that Medicaid spending increased 36 percent from fiscal years 2015 to 2019, largely because of state Medicaid expansions. As of January 2020, 36 states and the District of Columbia had expanded eligibility for their Medicaid programs under PPACA. Some experts noted that, while enrollment has grown for the expansion states, the federal government bears responsibility for a large portion of the costs. Specifically, the federal government reimbursed 100 percent of the costs of the expansion population beginning in 2014. The federal reimbursement then decreased to 94 percent in 2018 and to 90 percent in 2020. One expert told us that states had the benefit of anticipating the decrease in funding and the corresponding increase in the state share of the costs.

At the same time, a number of experts generally agreed that states are not financially positioned to meet the future demands of Medicaid during a recession or economic downturn, given projected increases in enrollment. In particular, experts pointed to the costs of recession-related Medicaid enrollment increases and the resulting fiscal pressures this would place on federal and state governments to fund Medicaid obligations. One expert shared concerns related to the uncertainty of federal funding should a recession occur. Two experts also pointed to the pressures local governments—and more specifically, county governments—face from implementation of certain federal health care policies. Specifically, these experts pointed to the health care costs that county governments must incur as a result of local jails housing pretrial inmates who have medical needs and require treatment. Federal law prohibits the use of federal health benefits by inmates who are pending trial. Thus, to the extent that an inmate cannot afford to pay the costs of health care services, counties must assume the health care expenses for providing the necessary treatment without reimbursement.
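As a rough illustration of how the declining federal match shifts costs onto states (the match rates are those cited above; the $1 billion of expansion spending is a hypothetical figure chosen for simplicity):

\[
\text{state share} = (1 - \text{federal match rate}) \times \text{expansion spending}
\]

On $1 billion of expansion spending, a state's share would be $0 under the 100 percent match in 2014, $60 million under the 94 percent match in 2018, and $100 million under the 90 percent match in 2020.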
Federal budget uncertainty. A number of experts told us that states continue to grapple with uncertainty stemming from unpredictability in the amount of federal assistance and the timing of federal appropriations—including continuing resolutions and federal government shutdowns—and the effects on states' ability to plan and implement programs. Some experts raised concerns related to the federal government's current fiscal condition and the potential effects on state and local governments. Specifically, experts noted that states are aware of the federal government's current fiscal condition—including federal debt and deficit levels—and the level of support the federal government may or may not choose to provide in the event of an economic downturn or recession, as it has during past recessions. In light of this uncertainty, some states have engaged in “stress tests” of their own budgets using various revenue and expenditure scenarios to determine if they are in sufficient fiscal health to weather a mild-to-severe recession. Moody's Analytics reported in 2019 that, based on the results of stress tests it performed on all 50 states, 28 states have the level of cash reserves necessary to manage a moderate recession without having to raise taxes or cut spending. Some experts further noted that state and local governments that have not been able to strengthen their cash reserves could undergo more severe fiscal stress in an economic downturn and require a greater level of assistance.

Some experts also raised concerns related to the effects of federal government shutdowns and continuing resolutions on state and local governments and their ability to plan for and implement federally funded programs. In all but 4 of the last 42 years, Congress has passed continuing resolutions to keep government services in operation until an agreement is reached on final appropriations bills. In some years, when new appropriations or a continuing resolution were not enacted on time, the lapse in appropriations—or funding gap—caused the government to partially shut down, which halted some activities at federal agencies until appropriations were passed. A number of experts told us that interruptions in appropriations and the subsequent delays in federal grants caused by shutdowns, for example, may require states to spend additional unbudgeted funds to ensure continuity of services in certain federally funded programs, such as food and nutrition and transportation. According to one expert, not all state or local governments are in a position to access those funds in a timely manner. Furthermore, one expert noted that continuing resolutions affect local governments by compressing the time available for federal grant applications. As a result, some applicants (e.g., cities or other localities) may not apply or may miss deadlines for certain federal grant programs.

We and others have reported on the effects of government shutdowns on some states. For example, we reported on the partial shutdown of the federal government in October 2013, which lasted for 16 days due to a lapse in appropriations. Our report showed that even if a state wanted to use its own funds to continue services for a federally funded program, it might not have had sufficient liquid assets to do so quickly. At that time, at least 12 states publicly reported that funding for certain grant programs was only confirmed through October, meaning the funding may not have been available if the shutdown had continued into November.
Some of these states expected to discontinue certain federally funded programs or services if the shutdown had extended into November, while others expressed uncertainty regarding how they would have proceeded if the shutdown had been longer.

Physical infrastructure. Physical infrastructure at the state and local government level includes a broad range of systems, including highways, mass transit, rail, water, and sewer systems. A number of experts pointed to concerns related to aging infrastructure and the fiscal pressures that infrastructure demands place on state and local governments. The cost of repairing and upgrading the nation's surface transportation infrastructure to meet current and future demands is estimated in the hundreds of billions of dollars. Further, our 2017 report noted that estimates from the Environmental Protection Agency put drinking water and wastewater infrastructure needs at approximately $655 billion nationwide during the next 20 years.

State and local governments own a large portion of the nation's physical infrastructure, while the federal government provides support to the sector in the form of grants, bonds, and loans. Funds made available from the Highway Trust Fund are distributed to states in the form of grants for eligible projects. The federal government also supports additional infrastructure spending through tax-exempt or tax-credit bonds, which provide a tax exclusion or tax credit to owners of municipal bonds issued by state and local governments. Further, through various loan programs, such as the Transportation Infrastructure Finance and Innovation Act program, the government supports project financing. State and local governments also generate revenues for transportation projects through their own sources, including user fees and taxes.

A number of experts shared concerns about the future of federal funding for state and local surface transportation needs. One expert acknowledged the benefits of highway grant programs provided through the Fixing America's Surface Transportation Act. However, this expert also noted that the act is set to expire in 2020 and that its future, along with other sources of federal funding through the Highway Trust Fund, remains uncertain. We have also reported that traditional federal funding sources for surface transportation, such as the Highway Trust Fund, are eroding and that the federal government lacks a long-term sustainable strategy for funding surface transportation. Moreover, experts noted that physical infrastructure needs represent only one among many competing priorities for state and local government spending. One expert expressed concern that the availability of state and local discretionary spending for infrastructure needs and maintenance will increasingly be affected by growing pressures from other mandatory spending categories, such as Medicaid. Many states have looked to modify or enhance other sources of revenue, such as the gas tax, to help meet highway transportation costs. According to the National Conference of State Legislatures (NCSL), since 2013, 31 states and the District of Columbia have enacted legislation that will or may increase their motor fuel taxes to support surface transportation costs. Even so, two experts raised concerns about the viability of the gas tax as a reliable revenue source, since gasoline consumption has declined.
Further, NCSL reported that many states have received federal funding to study and pilot user-based alternative mechanisms through the Surface Transportation System Funding Alternatives Program. We and others have also reported that some states have recognized the need for an alternative funding mechanism to meet future revenue demands. Some options that states have considered or implemented include tying gas tax rates to inflation or population, taxing fuel based on its price, and taxing miles traveled instead of gas purchased (also referred to as mileage-based user fees).

Further, experts pointed to the lack of a clearly articulated federal highway infrastructure policy and the implications for state and local governments. For example, one expert noted that states need the ability to plan multiyear programs for large-scale transportation projects and that an open dialogue about federal program implementation or renewal among all three levels of government could help state and local governments better plan for the future. This expert added that the uncertainty that state and local governments experience could be reduced if the federal government better informed them about legislative policy developments and was willing to engage them in conversation.

Tax policy. Experts discussed selected provisions of the law commonly known as the Tax Cuts and Jobs Act (TCJA) and other tax-related issues that could exacerbate or help ease fiscal pressures for state and local governments. Enacted in December 2017, TCJA included significant changes to corporate and individual tax law, with implications for state and local government tax collections. In particular, for individual taxpayers in tax years 2018 through 2025, tax rates were lowered for nearly all income levels, some deductions from taxable income were changed (personal exemptions were eliminated, while the standard deduction was increased), and certain credits, such as the child tax credit, were expanded. A number of experts agreed that, with just over 2 years since its passage, it is still too early to fully assess the effect of TCJA's provisions on state and local government revenues.

States are continuing to incorporate some of the provisions of TCJA into their own tax codes. Some states have adopted the federal definition of taxable income as a starting point for state tax calculations, while other states use the federal definition of adjusted gross income as a starting point. The choices states make regarding their linkage to these definitions have implications for their state tax revenues. Further, because TCJA placed a $10,000 annual cap on the federal deduction for taxpayers' state and local taxes (SALT) from taxable income beginning on January 1, 2018, some high-income taxpayers prepaid their personal income and property taxes to take advantage of the uncapped SALT deduction in 2017. As a result, some states experienced an increase in revenues in late 2017. According to S&P Global Ratings, the imposition of the SALT cap incentivized many taxpayers to accelerate their income tax payments into December 2017, but consequently made December 2018 tax payments look smaller by comparison. It also further reduced December 2018 payments by lessening the incentive for many taxpayers to make early income tax payments.
Most experts cited the TCJA's elimination of advance refunding for tax-exempt municipal bonds as a potential source of fiscal stress for the state and local government sector. State and local governments use these tax-exempt bonds to finance a broad range of projects and activities, including public infrastructure. Prior to its elimination, advance refunding allowed state and local governments to take advantage of favorable interest rates to reduce borrowing costs, restructure debt, and free up resources for other projects or investments. A number of experts explained that the elimination of the provision could result in increasing project costs—ultimately increasing infrastructure and debt costs over time.

Some experts highlighted overall concerns about states' eroding sales tax base. For example, as consumption patterns have changed, the country has transitioned to a more service-based economy. Because state sales taxes generally apply more fully to goods than to services, the larger and growing share of GDP represented by services has eroded states' sales tax bases. In contrast, a number of experts pointed to the outcome of the U.S. Supreme Court's ruling in South Dakota v. Wayfair, Inc. and its potential for stimulating growth in sales tax revenue. The Court in South Dakota v. Wayfair, Inc. held that states could require out-of-state sellers to collect and remit sales taxes on purchases, even if the seller does not have a substantial physical presence in the taxing state. A number of experts noted that remote sales taxes will likely increase state and local sales tax revenues, but that states are still realizing the effects of the ruling on their revenues. Following the U.S. Supreme Court's decision, numerous states that levy a sales tax and the District of Columbia have taken some kind of action to enforce remote sales tax collections. According to NCSL, as of January 2020, 43 states and the District of Columbia require remote sales tax collection. Some states have taken legislative action to change their state laws in response to the outcome of the Wayfair case, while some collection efforts have been led by departments of revenue where statutory authority was already provided. However, it is too soon to determine the full effects of the Wayfair case on states' sales tax revenue.

Natural disasters. A number of experts pointed to the increasing fiscal pressure that state and local governments face, and will continue to face, given the increasing frequency, severity, and cost of natural disasters. We and others have reported on the increasing trend in the number of natural disasters and related costs. For example, in 2018 alone, there were 14 weather and climate disaster events across the United States with losses exceeding $1 billion each and total costs of at least $91 billion, according to the National Oceanic and Atmospheric Administration. Further, disaster costs are projected to increase as extreme weather events become more frequent and intense because of climate change, as observed and projected by the U.S. Global Change Research Program and the National Academies.

A number of experts acknowledged that the federal government plays a critical role in providing disaster assistance to state and local governments and stressed the need for continued financial support. Some experts discussed the importance of federal assistance, since states may need to pay for immediate disaster costs, such as debris removal, out of current expenditures and may not have the funds available to cover those costs.
Local governments in particular are generally the first responders in the event of a disaster, often using their own personnel and funding in these circumstances. Some experts noted that these localities and communities may lack the available cash reserves needed for disaster response-related resources, such as public safety overtime and other types of public assistance. One expert underscored the federal government's role as an economic stabilizer in providing assistance to local governments during disasters.

Given the increase in federal disaster spending, we and others have underscored the importance of finding ways to address the growing costs of natural disasters, citing investment in mitigation as one approach. Some experts we interviewed also pointed to the importance of states' adoption of mitigation strategies as a way to help states and localities reduce the environmental and fiscal effects of natural disasters. For example, the Pew Charitable Trusts reported in 2020 that a number of states and localities are looking to invest in infrastructure projects that will help mitigate the potential effects of disasters before they occur. According to Pew, one state plans to limit development and move residents out of areas most prone to flooding, while improving infrastructure in communities on higher ground that are likely to receive displaced populations from neighboring towns. Another state plans to invest its federal funds in flood control, removing homes from high-risk areas and helping local governments pay for projects. In addition, one locality plans to spend $500 million on infrastructure upgrades over the next few years, after its residents voted to authorize a bond to address flooding and other concerns.

One expert also stressed the importance of the Disaster Recovery Reform Act of 2018 (DRRA) in developing state and local mitigation programs, in addition to strengthening federal, state, and local relationships in disaster response and recovery efforts. Among other things, the act increases the federal investment in predisaster mitigation, increases reimbursement caps for state and local governments on a range of disaster costs, and allows state and local governments to administer housing assistance grants. We reported in 2019 that it is too early to tell what effect implementation of DRRA will have on state and local resilience.

In addition, economic literature we reviewed highlighted the potential long-term implications of natural disasters and climate change for state and local governments' municipal bond ratings. For example, credit rating firms—Fitch Ratings, Moody's Investors Service, and S&P Global Ratings—indicated that they are considering the effects of climate change in their credit analyses of state and local governments. Specifically, S&P Global Ratings has identified risk factors related to the environment, among other credit risk factors, such as extreme weather events and flooding, that can affect an issuer's ability to meet full and timely debt service.

We are sending copies of this report to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

This report examines fiscal pressures for state and local governments. Specifically, the objectives of our review were to (1) examine recent trends in state and local government expenditures and revenues; and (2) synthesize expert views regarding the effect of federal policy on state and local government fiscal pressures.

To describe recent trends in state and local government expenditures and revenues, we analyzed categories of aggregate data on state and local expenditures and revenues using inflation-adjusted data from the Bureau of Economic Analysis's (BEA) National Income and Product Accounts (NIPA) from 1998 to 2018. We analyzed changes in the shares of state and local expenditures and revenues as a percent of total expenditures and revenues, respectively, from 1998 to 2018. We determined that the NIPA data were the most recent available data for the purpose of examining aggregate state and local government revenue and expenditure trends. The NIPA data do not always match state and local government budget data because of methodological differences between how BEA calculates NIPA data and how state and local governments compute their budget data. We also reviewed our prior reports and those of others to identify what is known about these trends and the factors that affect them.

To analyze trends in state and local government revenues and expenditures among states, we used U.S. Census Bureau (Bureau) government finance data and gross domestic product (GDP) price index data from BEA to calculate inflation-adjusted values of selected expenditure and revenue categories for each state (including the District of Columbia) and for the United States for 1997 to 2017. Data for 1997, 2002, 2007, 2012, and 2017 are based on the Bureau's Census of Governments, which surveys all state and local governments in the United States. Data for the other years are based on the Bureau's Annual Survey of Government Finances; in those years, local government finance statistics are based in part on a sample of local governments in the United States. We determined that the Bureau's data were the most comprehensive for the purpose of examining trends in state and local government expenditures and revenues. However, due in part to definitional differences among the states, such as those of coverage (what constitutes a government entity) or measurement (cash or accrual accounting), the data cannot be used as financial statements to measure a government's fiscal condition or to calculate a surplus or deficit. We assessed the reliability of the data we used for this analysis and determined that the BEA NIPA and Bureau data were sufficiently reliable for our purposes. Our data reliability assessment included reviewing relevant documentation, interviewing knowledgeable BEA and Bureau officials, and reviewing the data to identify obvious errors or outliers.

We examined patterns between state and local government revenue growth and growth in overall state and local government spending using data from the Bureau. For each state and the District of Columbia, we assessed how fast each expenditure and revenue category grew between 1997 and 2017 and calculated the average annualized growth rate based on year-to-year changes for each selected expenditure and revenue category. For each expenditure and revenue growth rate calculation, we identified the U.S.
average annualized growth rate and the minimum and maximum growth rates across states. Because changes in the levels of expenditure and revenue categories can be affected by changes in state fiscal capacity—such as increased tax revenues due to population growth—we compared the average annual compound growth rate for each category of spending and revenues to the average annual compound growth rate in state gross domestic product (GDP) from 1997 to 2017. We chose state GDP as a proxy for each state's resources, or fiscal capacity, because we determined it to be the most appropriate representation of a state's total resources. When expenditures in a state are growing faster than GDP, the share of the state's resources that are dedicated to state and local government services is growing. Over the long run, such growth could create a fiscal pressure. This analysis also identified the number of states where growth in a category was (1) greater than or (2) less than growth in GDP for that state.

We also examined patterns between state and local revenue growth and growth in state and local spending and federal grants using data from the Bureau. For each state and the District of Columbia, we plotted the average annual growth rate in general revenues against the average annual growth rate in general expenditures from 1997 to 2017. We then counted the number of states in which spending grew faster than, slower than, or at the same rate as general revenues. We also analyzed growth in own-source revenues against growth in federal grant revenues using the same approach, counting the number of states in which own-source revenue grew faster than, slower than, or at the same rate as federal grants.

To identify expenditure categories in the Bureau's data, we selected all of the Bureau's general expenditure categories. We included other expenditure categories, such as interest on debt and salaries and wages, to document their low growth rates. We included insurance benefits and repayments because of its high growth rate and its inclusion of pension benefits, which experts identified as a growing expense in some states.

As part of our analysis of trends in state and local government expenditures, we analyzed data from the National Association of State Budget Officers (NASBO) on state rainy day fund balances and general fund expenditures. NASBO's Fiscal Survey of States surveys state budget officers in 50 states on general fund receipts, expenditures, annual tax and revenue changes, and balance data, which include rainy day fund balances. We calculated state rainy day fund balances as a percentage of state general fund expenditures among states from 1998 to 2018 and then plotted the median state rainy day fund balance for each year. We assessed the reliability of the data we used for this analysis and determined that NASBO's data were sufficiently reliable for our purposes. Our data reliability assessment included reviewing relevant documentation and consulting knowledgeable officials about the data.
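To illustrate the growth rate comparison described above, the following sketch computes average annual compound growth rates and compares a spending category's growth with state GDP growth. It is a minimal illustration using hypothetical inflation-adjusted dollar values; it is not GAO's actual analysis code, and the figures do not come from the Census Bureau or BEA data.

    # Illustrative sketch (hypothetical values): comparing a spending
    # category's average annual compound growth rate to state GDP growth.

    def cagr(start_value: float, end_value: float, years: int) -> float:
        """Average annual compound growth rate between two values."""
        return (end_value / start_value) ** (1 / years) - 1

    # Hypothetical state, inflation-adjusted dollars, 1997 to 2017 (20 years).
    category_growth = cagr(start_value=4.0e9, end_value=9.5e9, years=20)
    gdp_growth = cagr(start_value=180.0e9, end_value=310.0e9, years=20)

    print(f"Category growth: {category_growth:.2%} per year")  # about 4.42%
    print(f"State GDP growth: {gdp_growth:.2%} per year")      # about 2.76%

    # When a category grows faster than state GDP, the share of the state's
    # resources dedicated to it is growing, a potential fiscal pressure.
    if category_growth > gdp_growth:
        print("Category grew faster than state GDP")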
To obtain expert views regarding the effect of federal policy on state and local government fiscal pressures, we conducted a series of structured interviews by telephone or in person with a nongeneralizable sample of individuals representing organizations with recognized expertise in state and local budgeting and finance, economics, public policy, and intergovernmental issues. To select these experts, we reviewed their published or other publicly available work, professional affiliations, or recommendations by other experts. These considerations informed whether the experts we selected would be knowledgeable or have expertise related to state and local government fiscal and intergovernmental issues. We identified three categories of experts and selected individuals within each category: (1) officials representing state and local government organizations; (2) providers of financial and credit risk information, such as credit rating agencies; and (3) researchers representing think tanks with expertise in state and local government finance, including taxes, budgeting, and intergovernmental relations. We spoke with representatives from the following 17 organizations as part of our structured interviews:

1. The Council of State Governments
2. Federal Funds Information for States
4. International City/County Management Association
5. Moody's Analytics
6. National Association of Counties
7. National Association of State Auditors, Comptrollers, and Treasurers
8. National Association of State Budget Officers
9. National Conference of State Legislatures
10. National Governors Association
11. National League of Cities
12. Pew Charitable Trusts
13. S&P Global Ratings
15. Urban-Brookings Tax Policy Center
16. The United States Conference of Mayors

The results from the structured interviews are not generalizable and represent the opinions of the individuals from the 17 organizations we interviewed. However, we took steps to obtain opinions from experts with different types of expertise and perspectives. For each question in the structured interview, we coded, organized, and analyzed the responses to develop common themes, based on the issues that emerged most frequently. We use the terms "a number of," "some," and "most" to describe the number of experts who responded on a particular issue. We defined "a number of" or "some" as three or more experts and "most" as nine or more experts. To provide context on these themes and supplement our understanding of this information, we reviewed related research and literature from the organizations we interviewed and from others, including our own prior work, and included relevant examples as appropriate.

We conducted this performance audit from January 2019 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Brenda Rabinowitz (Assistant Director), Keith O'Brien (Analyst-in-Charge), Colin Ashwood, and Dylan Stagner made key contributions to this report. David Dornisch, J.
Andrew Howard, Courtney LaFountain, Silda Nikaj, Robert Robinson, Ardith Spence, and Frank Todisco also provided support.
Why GAO Did This Study

State and local governments work together with the federal government to deliver a broad range of public services. GAO's prior work has shown that the state and local government sector will likely face fiscal pressures during the next 50 years due to a gap between spending and revenues. The fiscal sustainability of the state and local government sector is essential to effectively implement intergovernmental programs. GAO was asked to review recent trends in state and local government expenditures and revenues, fiscal pressures for state and local governments with intergovernmental implications, and the implications of federal policy for these pressures. This report (1) examines trends in state and local government expenditures and revenues during the past two decades; and (2) synthesizes expert views regarding the effects of federal policy on state and local government fiscal conditions. GAO analyzed data from the Bureau of Economic Analysis National Income and Product Accounts, the U.S. Census Bureau, and the National Association of State Budget Officers. GAO also interviewed a nongeneralizable sample of experts from organizations that represent state and local governments, professionals who provide financial and credit risk information (credit rating agencies), and researchers from think tanks to better understand how federal policies affect state and local government fiscal conditions.

What GAO Found

During the past two decades, the state and local government sector experienced overall growth in spending and revenue. Specifically, inflation-adjusted spending increased from about $1.7 trillion in 1998 to about $2.8 trillion in 2018. Health spending accounted for the largest increase. Inflation-adjusted revenues increased from about $1.6 trillion in 1998 to about $2.6 trillion in 2018. Taxes comprised the largest revenue category. From 1997 to 2017, state and local government expenditures and revenues grew faster than state gross domestic product in most states. On average, growth in expenditures outpaced growth in revenues by 0.3 percentage points per year during the period. Increases in public welfare spending drove spending growth (spending largely for states' share of Medicaid), while federal grants and user charges drove revenue growth.

[Figure: "…Domestic Product (GDP) in Most States from 1997 to 2017" (title truncated in source). Source: GAO analysis of U.S. Census Bureau and Bureau of Economic Analysis data. | GAO 20-437]

Experts identified a range of issues facing state and local governments that could affect the sector's fiscal condition. Those most frequently mentioned included:

Health care. Experts expressed concerns regarding states' ability to meet future Medicaid enrollment demands in an economic downturn.

Federal budget uncertainty. Uncertainty in the future of federal assistance, as well as the timing of federal appropriations, including federal government shutdowns, affected state and local governments' program planning.

Physical infrastructure. Aging infrastructure costs and uncertainty in federal funding sources placed pressure on the sector to identify alternative revenue sources for transportation projects.

Tax policy. Provisions of the law known as the Tax Cuts and Jobs Act had varied effects on the sector, but most experts agreed it is still too early to assess the act's full effects on state and local government revenues.

Natural disasters.
Experts acknowledged the important contribution of federal financial support for disaster response and recovery and noted some states' mitigation efforts to address the increasing frequency and cost of disasters. Credit rating firms are considering the effects of climate change in their credit analyses of state and local governments.
Background

According to OMB, federal agencies reported that they operated 432 data centers in 1998, 2,094 in July 2010, 5,607 in August 2016, and 5,916 in August 2018. As previously mentioned, operating such a large number of centers has been, and continues to be, a significant cost to federal agencies. For example, in 2007, the Environmental Protection Agency (EPA) estimated that the annual cost for electricity to operate federal servers and data centers across the government was about $450 million. Further, according to the Department of Energy (Energy), a typical government data center has 100 to 200 times the energy use intensity of a commercial building. However, in 2009, OMB reported server utilization rates as low as 5 percent across the federal government's estimated 150,000 servers. These factors contributed to OMB recognizing the need to establish a coordinated, government-wide effort to improve the efficiency, performance, and environmental footprint of federal data center activities.

Subsequently, OMB launched the Federal Data Center Consolidation Initiative (FDCCI) in 2010 to reduce the growing number of federal data centers, and we have reported extensively on federal agencies' efforts to implement the initiative's requirements. Among other things, OMB required agencies to consolidate inefficient infrastructure, optimize existing facilities, improve their security posture, and achieve cost savings. For example, each agency was required to maintain a complete inventory of all data center facilities owned, operated, or maintained by or on its behalf, and to measure progress toward defined optimization performance metrics on a quarterly basis as part of its data center inventory submission.

IT Acquisition Reform Law Enhanced Data Center Consolidation and Optimization Efforts

Recognizing the importance of reforming the government-wide management of IT, Congress enacted the Federal Information Technology Acquisition Reform Act (FITARA) in December 2014. Among other things, the law required agencies to:

Submit to OMB a comprehensive inventory of the data centers owned, operated, or maintained by or on behalf of the agency.

Submit, by the end of fiscal year 2016, a multi-year strategy to achieve the consolidation and optimization of the agency's data centers. The strategy was to include performance metrics that were consistent with the government-wide data center consolidation and optimization metrics.

Report progress toward meeting government-wide data center consolidation and optimization metrics on a quarterly basis to OMB's Administrator of the Office of Electronic Government.

In addition, according to FITARA, the Office of Electronic Government at OMB was to:

Establish metrics applicable to the consolidation and optimization of data centers (including server efficiency); ensure that information related to agencies' progress toward meeting government-wide data center consolidation and optimization metrics was made available to the public in a timely manner; review agencies' inventories and strategies to determine whether they were comprehensive and complete; and monitor the implementation of each agency's strategy.

Develop and make publicly available, not later than December 19, 2015, a goal broken down by year for the amount of planned cost savings and optimization improvements to be achieved through the FDCCI and, for each year thereafter until October 1, 2020, compare reported cost savings and optimization improvements against those goals.
OMB Established DCOI to Provide Oversight of FITARA Data Center Consolidation and Optimization Requirements

In August 2016, OMB issued Memorandum M-16-19, which established DCOI and included guidance on how to implement the data center consolidation and optimization provisions of FITARA. The memorandum directed each agency to develop a DCOI strategic plan that defined its data center strategy. Among other things, this strategy was to include a timeline for agency consolidation and optimization activities, with an emphasis on cost savings and optimization performance benchmarks that the agency could achieve between fiscal years 2016 and 2018. For example, each agency was required to develop cost savings targets due to consolidation and optimization actions and report any realized cost savings. OMB required each agency to publicly post its DCOI strategic plan to its agency-owned digital strategy website.

In addition, OMB's memorandum included a series of performance metrics in the areas of data center closures, cost savings, and optimization progress. The guidance further noted that agency progress was to be measured by OMB on a quarterly basis, using agencies' data center inventory submissions and OMB-defined closure, cost savings, and optimization targets. Further, the memorandum stated that OMB was to maintain a public dashboard (the IT Dashboard) to display government-wide and agency-specific data center consolidation and optimization progress. In this regard, OMB began including such progress information on the IT Dashboard in August 2016.

GAO Previously Made Recommendations on Agencies' Consolidation and Optimization Efforts

Since the enactment of FITARA in December 2014, we have reviewed and verified the quality and completeness of each covered agency's inventory and DCOI strategy annually. We have also published reports documenting the findings from each of these reviews. In addition, we have examined and reported on agencies' efforts to optimize their data centers, as well as the challenges encountered and successes achieved. As of December 2019, 75 of the 117 recommendations from these reports had not been fully addressed. The results and recommendations of our previous reviews are detailed in appendix II.

OMB Updated DCOI in 2019 and Revised the Definition of a Data Center

In June 2019, OMB issued a memorandum, M-19-19, that updated DCOI and redefined a data center as a purpose-built, physically separate, dedicated space that meets certain criteria. The memorandum also revised the priorities for consolidating and optimizing federal data centers. Specifically, OMB directed agencies to focus their efforts on their tiered data centers and to stop reporting on spaces not designed to be data centers (i.e., non-tiered data centers) as part of their inventory. The guidance outlined a process by which agencies could request, and OMB would approve, that these facilities be dropped from reporting. The guidance also noted that OMB would set agency-specific data center closure and cost savings targets in collaboration with each agency and in alignment with that agency's mission and budget. In addition, OMB described criteria for designating certain data centers as mission critical facilities, which would be exempt from new agency-specific closure targets. Those mission critical designations are to be assumed to be granted unless OMB specifically overturns them.
OMB's revised June 2019 DCOI guidance also directed agencies to stop reporting on spaces not designed to be data centers as part of their inventory and to focus their efforts on their remaining purpose-built data centers. This is a change from the previous DCOI guidance, which required agencies to report on a much wider range of facilities. OMB's new memorandum also replaced the previous optimization metrics with revised measures that focused on (1) reporting the number of agencies' virtualized hosts, underutilized servers, and data centers with advanced energy metering; and (2) the percentage of time that data centers were expected to be available to provide services.

In contrast to the previous DCOI guidance, the new memorandum did not specify government-wide performance targets for the optimization metrics, such as setting a target for server utilization of 65 percent for all agencies. Instead, OMB worked with agencies to establish agency-specific targets that were also identified in agency DCOI strategic plans and on the IT Dashboard. In addition, the guidance described how agencies could apply for an optimization performance exemption for data centers where typical optimization activities (consolidation of data collection, storage, and processing to a central location) were technically possible but would increase the response time for systems beyond a reasonable limit.

Agencies Have Continued to Close Data Centers and Achieve Cost Savings, but Oversight and Cybersecurity Risks Need to be Addressed

As in previous years, the 24 agencies participating in DCOI continued to report progress in closing unneeded data centers and achieving related additional cost savings. The agencies reported closing a total of 102 data centers in fiscal year 2019, as of August 2019, and reported plans to close an additional 184 data centers by the end of fiscal year 2019. According to agencies' data center inventories, almost all of the 24 agencies met or planned to meet their fiscal year 2019 closure targets. In addition, agencies reported that their DCOI-related activities had either achieved, or were planned to achieve, the $241.5 million in total planned savings for fiscal year 2019. However, recent OMB DCOI policy changes will reduce the number of data centers covered by the policy, and both OMB and agencies may lose important visibility into the security risks posed by these facilities.

Almost All 24 Agencies Met, or Planned to Meet, OMB's Fiscal Year 2019 Targets for Data Center Closures

For fiscal year 2019, 23 of the 24 agencies reported that they met or planned to meet their fiscal year data center closure targets, as established under OMB's June 2019 guidance. Of those 23 agencies:

three agencies reported that they did not have any agency-owned data centers and had a target of zero closures; these agencies were listed on the IT Dashboard as having completed their closure efforts;

five agencies were not expected to close any of their operating data centers during the fiscal year, and their target was zero;

13 agencies reported meeting or exceeding their closure targets; and

two agencies—the Departments of Defense (Defense) and Veterans Affairs (VA)—reported closing a number of data centers and had additional closures planned that were expected to meet their respective fiscal year targets.

In addition, one agency—the Office of Personnel Management (OPM)—did not submit a DCOI strategic plan and, consequently, did not report a data center closure target.
Table 1 details, for each of the 24 agencies, the number of data centers open at the start of fiscal year 2019, the agency's fiscal year 2019 closure target, the number of data centers closed, and the number planned for closure during the remainder of the fiscal year, as of August 31, 2019. Agencies reported a total of 102 fiscal year 2019 data center closures through August 31, 2019, with an additional 184 planned closures by the end of that fiscal year. Figure 1 aggregates this information to show agencies' overall fiscal year 2019 progress against the reported total number of federal data centers.

In regard to the remaining data centers, as of August 2019, 12 of the 24 agencies reported plans to close 37 data centers in fiscal year 2020 and beyond. Specifically, 10 agencies reported plans to close 31 additional data centers in fiscal year 2020. Further, two agencies—Energy and the Social Security Administration (SSA)—reported plans to close a total of five data centers in 2021, and one agency—the Department of Homeland Security (DHS)—reported plans to close one data center in 2022. Based on our past work reviewing agencies' DCOI strategic plans, this total number of planned closures is likely to increase when agencies submit their annual DCOI strategic plans in the spring of 2020.

However, the ability to track agencies' progress against their goals is hampered because the agencies are not reporting their planned and achieved closures on a fiscal year basis and, in one case, the agency had not submitted a plan. As of September 2019, neither the agencies' strategic plans nor the IT Dashboard provided a specific breakdown of the planned and achieved closures for each fiscal year. OMB's guidance on DCOI strategic plans only requires reporting cumulative numbers, and staff in OMB's Office of the Federal CIO confirmed that the IT Dashboard is now intended to report agencies' cumulative numbers of actual and planned data center closures, rather than numbers broken out by fiscal year. This lack of visibility into exactly how many closures the agencies expect to achieve each fiscal year jeopardizes OMB's and Congress' ability to effectively oversee agencies' data center consolidation efforts.

OMB's Policy Changes Will Reduce Oversight of Certain Key Data Centers

In August 2016, OMB expanded its definition of a data center to include many smaller facilities that OMB cited as consuming significant amounts of resources. Specifically, OMB's definition included any room with at least one server that provided IT-related services, and it categorized data centers into two groups: tiered (which had to meet specific characteristics defined by OMB) and non-tiered. We previously reported that, based on this definition, as of August 2018, the 24 agencies planned to have a total of 4,907 operating data centers at the beginning of fiscal year 2019.

However, OMB's June 2019 revised DCOI reporting requirements further changed the definition of a data center, including no longer requiring agencies to report most of the facilities previously categorized as non-tiered data centers. As noted previously, OMB directed agencies to stop reporting on spaces not designed to be data centers as part of their inventory. As a result, agencies are no longer required to report on about 2,000 facilities, some of which are considerable in size and will continue to operate.
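The following sketch illustrates the kind of inventory analysis that can surface such facilities. The field names and records are hypothetical; the actual OMB inventory schema may differ, and this is not the analysis code used for this report.

    # A minimal sketch, assuming a hypothetical inventory schema: flag
    # non-tiered facilities over 1,000 square feet that remain open but
    # fall outside the revised DCOI reporting requirement.
    import pandas as pd

    inventory = pd.DataFrame(
        [
            ("DC-001", "non-tiered", 8200, "open",   False),
            ("DC-002", "tiered",     4500, "open",   True),
            ("DC-003", "non-tiered",  300, "closed", False),
        ],
        columns=["center_id", "tier", "gross_sq_ft", "status", "dcoi_reported"],
    )

    # Open, sizable, non-tiered facilities that are no longer reported.
    unreported_but_open = inventory[
        (inventory["tier"] == "non-tiered")
        & (inventory["gross_sq_ft"] > 1000)
        & (inventory["status"] == "open")
        & ~inventory["dcoi_reported"]
    ]
    print(unreported_but_open[["center_id", "gross_sq_ft"]])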
Based on OMB's revised definition of a data center, agencies revised their data center inventory counts and reported 2,727 operating data centers at the beginning of fiscal year 2019. Specifically, our analysis identified 20 data centers of more than 1,000 square feet that agencies had previously reported as planned for closure, but that will not be reported under the current definition. In addition, our analysis found 260 data centers over 1,000 square feet, previously categorized as non-tiered, that agencies plan to continue operating but that will no longer be reported as part of DCOI. This includes SSA, which plans to no longer report on, but to continue operating, five data centers that are each over 8,000 square feet. Similarly, the Department of State (State) plans to no longer report on, but to keep operating, two facilities that are each at least 10,000 square feet in size.

Further, many of the smaller facilities now exempt from DCOI reporting are the very types of data centers that OMB has previously said should be included in DCOI because of the risks they pose. Specifically, in its 2016 guidance memorandum, OMB stated that these smaller facilities posed a cybersecurity risk and, consequently, identified them as data centers that needed to be included in consolidation efforts under DCOI. In particular, OMB called out server rooms and closets as security risks that should be targeted for closure. However, while OMB's 2019 guidance noted the need to address security at these locations and encouraged agencies to continue working to consolidate and optimize them, there is no requirement for agencies to continue to track and report on their progress in closing these smaller facilities.

In July 2019, we found that IT systems supporting federal agencies, such as those found in the government's data centers, are inherently at risk. Specifically, we reported that because these systems can be highly complex and dynamic, technologically diverse, and often geographically dispersed, protecting their security is difficult. Since each physical location represents a potential access point to an agency's interconnections with other internal and external systems and networks, each location also poses a risk as a point of potential attack. We also noted that IT systems are often riddled with security vulnerabilities—both known and unknown. Cybersecurity vulnerabilities, such as unsecured access points, can facilitate security incidents and cyberattacks that disrupt critical operations; lead to inappropriate access to and disclosure, modification, or destruction of sensitive information; and threaten national security, economic well-being, and public health and safety. Because of OMB's decision to remove these types of data centers from DCOI reporting, agencies may lose track of the security vulnerabilities that these facilities present, due to the consequent reduction in overall visibility and oversight into all data centers.

In its June 2019 guidance, OMB also outlined a process by which agencies could request, and OMB approve, that specific facilities be removed from reporting. As part of this process, agencies were allowed to identify data centers to be removed in one reporting period and then actually remove them in the next, unless OMB provided a written denial within 30 days of the original request.
Similarly, agencies could request that mission critical facilities be exempted from their closure targets; OMB likewise has 30 days to object before an agency should consider such a request approved. However, there is currently no documentation of OMB's decisions on requests to remove specific data centers from reporting, or to exempt data centers from closure targets because a facility is mission critical. Although an agency's data center inventory included fields for documenting OMB's decisions with regard to potential exemptions to optimization, there is no requirement or mechanism to document OMB's approval that a data center could be dropped from reporting or exempted from closure. There is also no mechanism that would allow a third party to determine whether OMB is providing any denials within the 30 days specified in the DCOI guidance. Staff in OMB's Office of the Federal CIO acknowledged that someone without access to OMB's repository of agencies' data center inventories could not determine whether OMB completed its review within the required time period.

We recognize that OMB's data center definition and reporting revisions are an effort to focus agency closure and optimization efforts on certain types of facilities. However, OMB's own past guidance has acknowledged the security risks posed by the types of facilities that agencies can now exclude from DCOI. While agencies are best positioned to determine whether these locations should be closed or optimized, it is important that these facilities, previously covered by DCOI, continue to be reported on quarterly, regardless of whether they are subject to closure or optimization. Further, the lack of documentation of OMB's approval process for removing certain facilities from reporting hinders the ability of third parties to understand how and why those decisions are made. This, in turn, jeopardizes OMB's and Congress' ability to effectively oversee agencies' data center consolidation and optimization efforts.

Almost All DCOI Agencies Met, or Planned to Meet, OMB Fiscal Year 2019 Cost Savings Targets, with More Savings Planned in 2020

Since 2013, federal agencies have been required to report on data center cost savings. In this regard, OMB provided guidance regarding how agencies were to report cost savings and avoidances. Specifically, it required agencies to report both data center consolidation cost savings and avoidances, among other areas, as part of a quarterly data collection process known as the integrated data collection. FITARA also called for each agency to submit a multi-year strategy for achieving the consolidation and optimization of data centers that includes year-by-year calculations of investment and cost savings through fiscal year 2018, which has since been extended to 2020. In addition, OMB's June 2019 memorandum, M-19-19, noted that agency-specific targets would be set in collaboration with each agency and aligned to that agency's mission and budget.

In their fiscal year 2019 DCOI strategic plans, agencies identified a collective goal of achieving $241.5 million in savings. As of August 2019, the 24 DCOI participating agencies had collectively identified in their quarterly reports to OMB a total of $202.36 million in data center-related cost savings for fiscal year 2019, with an additional $39.14 million expected to be realized in the remaining month of the fiscal year.
Specifically, 18 agencies reported that they had met or exceeded their cost savings targets, including seven agencies that did not have a cost savings target and did not report achieving any cost savings. Further, 12 agencies reported plans to achieve about $264 million in data center-related cost savings for fiscal year 2020. Five agencies that had cost savings targets—the Departments of Agriculture (Agriculture), Commerce (Commerce), DHS, and State; and the National Aeronautics and Space Administration (NASA)—reported that they had not yet met their targets but planned to do so. Additionally, as noted previously, OPM had not submitted its DCOI strategic plan as of August 2019 and, therefore, did not identify cost savings targets for fiscal year 2019 and beyond. Table 2 provides a breakdown of each agency's planned and achieved cost savings for fiscal year 2019, as of August 2019, and planned savings for fiscal year 2020, according to their DCOI strategic plans and quarterly reporting.

Agencies that did not report achieving any cost savings provided a variety of reasons for why they had not done so. For example, officials in the Department of Veterans Affairs' (VA) Office of the CIO reported 12 data center closures but said they did not report any achieved cost savings because the majority of those data centers were within multi-use facilities that were still owned and maintained by the agency. However, according to VA's DCOI strategic plan, the agency plans to achieve cost savings in fiscal year 2020 because it expects to stop leasing two data centers, which is expected to reduce data center spending. In addition, officials from three agencies—the Department of Housing and Urban Development (HUD), the General Services Administration (GSA), and the United States Agency for International Development (USAID)—reported that they did not have any agency-owned data centers and had limited opportunity to achieve cost savings related to closing and optimizing their data centers. According to OPM officials, the agency did not have a savings target due to the lack of a fiscal year 2019 DCOI strategic plan, which the officials attributed to an oversight resulting from changes in OPM CIO leadership at the time the plan was due. The officials reported that the agency continued to execute on a plan that was already in place and that they did not anticipate any meaningful changes in the agency's DCOI strategy for 2020. The officials said they expect OPM to submit its fiscal year 2020 strategic plan on time in April 2020.

Overall, the 24 participating DCOI agencies have reported a total of $4.7 billion in cost savings and avoidances from fiscal years 2012 through 2019. We have previously stressed that identifying and reporting the savings resulting from agencies' data center consolidations is an important indicator for monitoring the progress of DCOI. Until OPM submits a plan that identifies its cost savings targets to OMB, the agency's ability to plan how to achieve DCOI's expected benefits will be limited. In addition, until the five agencies that still expect to achieve savings establish and meet their cost savings targets, DCOI may not deliver the expected financial benefits.
Agencies Reported Progress against DCOI's Revised Optimization Metrics, but Metrics Lacked Appropriate Information on Performance Parameters

FITARA required OMB to establish metrics to measure the optimization of data centers, including server efficiency, and to ensure that agencies' progress toward meeting those metrics is made available to the public. Pursuant to this requirement, OMB has used several different sets of performance measures over time. Most recently, and as previously noted, OMB issued revised DCOI guidance in June 2019 that defined a set of three revised and one new data center optimization metrics to replace the five previous metrics. According to the OMB memorandum that published these changes, the current metrics were intended to focus optimization efforts in key areas where agencies can make meaningful improvements and achieve further cost savings through optimization. Table 3 provides a description of the four data center optimization metrics and how each metric is to be calculated.

According to the June 2019 revised DCOI guidance, agencies are to focus their optimization efforts on their remaining open, agency-owned, tiered data centers. OMB also included in the guidance its plans to work with the agencies to set agency-specific optimization performance targets for each fiscal year. According to staff in OMB's Office of the Federal CIO, these targets are to be established by fiscal year, and progress toward meeting them is expected to be provided via the IT Dashboard.

For three of OMB's June 2019 optimization metrics, 19 of the 24 DCOI agencies reported progress in meeting OMB's fiscal year 2019 data center optimization targets identified on the IT Dashboard. Specifically, as of September 2019:

11 reported that they had met their target for virtualization,

11 reported that they had met their advanced metering target, and

18 reported that they had met their server utilization target.

Of the remaining five agencies, OPM had not submitted a DCOI strategic plan as of September 2019 and, consequently, did not have established optimization targets or a basis to measure and report optimization progress. The remaining four agencies—the Department of Education (Education), HUD, GSA, and USAID—reported that they did not have any agency-owned data centers in their inventory and, therefore, the optimization metrics were not applicable. In addition, the Department of Justice (Justice) had not established a target for the server utilization metric and, therefore, did not have a basis to measure and report progress. Figure 2 summarizes the DCOI agencies' progress in meeting each optimization target, as of September 2019.

Of the 19 agencies with a basis to report against OMB's optimization targets, eight agencies—Energy, DHS, the Department of the Interior, State, NASA, the National Science Foundation (NSF), the Nuclear Regulatory Commission (NRC), and SSA—reported meeting three targets as of September 2019. Also, five agencies reported that they had met two targets, and six agencies reported meeting one target. Table 4 lists the DCOI agencies and their status in meeting their OMB optimization performance targets. As shown in table 4, agencies reported greater success in meeting their agency-specific optimization targets under the current DCOI metrics than we had reported in our previous reviews, as detailed in appendix II. As of September 2019, the IT Dashboard reported that four agencies had fully completed their overall DCOI optimization efforts for all of their data centers and had no further work to do.
The IT Dashboard further reported that another four agencies had met their optimization targets for fiscal year 2019. However, eight agencies had not met their fiscal year 2019 virtualization target, and the reasons agencies provided for not meeting the target varied. For example, officials in the Department of Agriculture's Office of the CIO reported that the department did not meet the virtualization target because the closure date for one of its data centers was moved to fiscal year 2020, which resulted in fewer virtualized hosts for 2019 under OMB's new definition. Additionally, although EPA did not meet its virtualization target, its DCOI strategic plan described the agency's intention to meet its goals by expanding its virtualization strategy agency-wide, which would increase the agency's virtualization performance.

In addition, OMB required agencies to report the number of agency-owned data centers with advanced energy metering. As of September 2019, of the 19 agencies with the basis to report, eight reported that they did not reach their target for having such metering in their data centers. For example, officials at the Department of Veterans Affairs reported that they did not meet their advanced energy metering target due to difficulties in getting a contract in place to install the metering.

Further, for the new availability metric, there were unexpected variances in how agencies reported information, rendering the data for this metric unreliable. Specifically, according to OMB's quarterly reporting instructions, agencies were to report the number of hours, in the 3-month reporting period, that each data center was expected to be available to provide services. However, several agencies reported information based on annual, instead of quarterly, calculations. In addition, Department of Agriculture officials stated that, for one data center, they reported the total number of availability hours for multiple instances where they provided data center services to other agencies. Based on the various instances of erroneous agency reporting that we identified, we determined that the data for this metric were not sufficiently reliable for us to use. When the problems with these data were brought to agencies' attention, many agreed that their reporting needed to be updated; in some cases, the agencies updated their information, but not in time for it to be analyzed and addressed in this report. Based on our discussions with agencies, we will continue to monitor their progress in improving the accuracy of their reporting for this metric through our follow-up efforts for this report, as well as our future mandated reviews of DCOI progress.

Additionally, and as mentioned previously, Justice had not established a target for server utilization. Officials in the department's Justice Management Division stated that this was due to OMB's issuance of the revised DCOI guidance and metrics in June 2019. The officials stated that, once the agency can track server utilization for a few reporting periods, it will finalize its definition for underutilized servers and establish an appropriate target for the metric.

Overall, while agencies reported more success in meeting the current optimization metrics, most agencies did not meet all of their metric targets for fiscal year 2019. Until these agencies take the steps necessary to meet their optimization targets, it is unlikely that they will achieve the expected benefits of optimization and the resulting cost savings.
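To illustrate the availability metric reporting problem described above, the sketch below contrasts a correctly computed quarterly figure with an annual one. The downtime figure is hypothetical, and the calculation is a minimal illustration of the reporting instruction, not OMB's official formula.

    # A minimal sketch, assuming a hypothetical 4-hour quarterly
    # maintenance window: expected availability hours should be computed
    # for the 3-month reporting period, not for the full year.
    HOURS_PER_QUARTER = 91 * 24   # about 2,184 hours in a 3-month period
    HOURS_PER_YEAR = 365 * 24     # 8,760 hours

    planned_downtime_per_quarter = 4  # hypothetical maintenance hours

    expected_quarterly = HOURS_PER_QUARTER - planned_downtime_per_quarter
    print(expected_quarterly)  # 2180, the figure the instructions call for

    # An agency reporting on an annual basis instead submits a figure
    # roughly four times larger, making its submission incomparable with
    # other agencies' quarterly figures.
    erroneous_annual = HOURS_PER_YEAR - 4 * planned_downtime_per_quarter
    print(erroneous_annual)    # 8744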
Given that our April 2019 report included recommendations to all of the agencies that missed an optimization target (except Commerce) to take action to meet the data center optimization metric targets established under DCOI by OMB, we are not making new optimization-related recommendations to those agencies.

OMB's New Optimization Metric Definitions Lack Key Characteristics of Effective Performance Measures

GAO's Green Book provides the standards for internal control in the federal government and an overall framework for establishing and maintaining an effective internal control system. Such a control system addresses, in part, the attainment of a federal entity's objectives, which is accomplished through monitoring specific performance measures. Such monitoring is also expected to assess the quality of performance over time. In addition, the Green Book discusses the importance of clearly defining an entity's objectives in order to determine what is to be achieved and to establish related performance measures. According to the Green Book, the controls represented by an agency's performance metrics should include the following key characteristics. The controls should be:

Clearly defined in measurable terms that are easily understood.

Objective and free of bias, rather than subjective.

Defined by appropriate parameters that allow for evaluating performance.

Understood by all levels of the organization, including what is being achieved with the metric, who is primarily responsible for achieving the metric, how the metric will be achieved, and when the metric will be achieved.

Aligned with internal and external requirements, including applicable legislation, regulations, and standards.

We found that all four of OMB's current optimization performance metrics met three of these five characteristics—that is, each was clearly defined, objectively measurable, and aligned with internal and external requirements. However, the performance metrics did not fully meet the two other characteristics—namely, they did not include appropriate performance parameters and did not fully include all the information that would allow them to be understood at all levels of the organization. Table 5 provides our assessment of the extent to which the OMB metrics aligned with the characteristics of an effective metric. In addition, appendix III provides additional detail on our assessment of the characteristics of each metric.

While all four of OMB's metrics met three of the five characteristics of an effective metric, none of the metrics addressed the characteristic of providing appropriate performance parameters. Specifically, none of the metrics included statistical universe parameters that would enable a determination of progress against goals. For example, the virtualization metric requires an agency to report the number of its virtual hosts, but does not relate that number to the overall number of servers and mainframes at the agency. As a result, the metric does not indicate whether an agency's reported number of virtual hosts represents almost all of that agency's servers and mainframes, or very few. Similarly, the server utilization metric identifies how many underutilized servers an agency has, but does not give the context of how that number relates to the agency's total population of servers. In both cases, percentages cannot be calculated to determine progress.
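The following sketch makes this point concrete with hypothetical counts: a raw count of virtual hosts can rise while the virtualization rate falls, which is why a metric without a universe parameter (the total number of servers and mainframes) cannot show progress. The counts are illustrative, not agency data.

    # A minimal sketch using hypothetical counts, not agency data.
    def virtualization_rate(virtual_hosts: int, total_servers: int) -> float:
        """Share of an agency's servers and mainframes serving as virtual hosts."""
        return virtual_hosts / total_servers

    # Quarter 1: 400 of 1,000 servers are virtual hosts, a 40.0% rate.
    q1 = virtualization_rate(400, 1000)

    # Quarter 2: the reported count rises to 450, but the server universe
    # grows faster (to 1,200), so the rate falls to 37.5%.
    q2 = virtualization_rate(450, 1200)

    print(f"Q1: {q1:.1%}  Q2: {q2:.1%}")  # Q1: 40.0%  Q2: 37.5%
    # The count alone (400 to 450) suggests progress; the rate shows decline.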
For instance, while the number of an agency’s virtualized servers may increase, if the universe of servers were to increase at a higher rate, then progress would actually be negative. In its June 2019 revised DCOI guidance, OMB acknowledged removing targeted averages for its metric targets. However, by doing so, OMB also removed important information that provided a relative sense of the progress indicated by the data. Further, the lack of performance parameters in defining the metrics had an impact on OMB’s public reporting of agencies’ progress. The IT Dashboard displays agencies’ consolidation and progress information through a DCOI Optimization Summary that presents data about the number of agency data center facilities, achieved and planned closures, achieved and planned IT cost savings, and progress of the current performance metrics against the related targets. However, the IT Dashboard does not provide important information, such as in which fiscal year the targets are to be achieved and how the metric information being reported relates to an agency’s operations. For example, the IT Dashboard reports the number of servers and mainframes serving as virtual hosts in agency-managed data centers, but does not provide the total number of servers and mainframes to give the context of how well agencies are managing the number of their virtual hosts. Staff in OMB’s Office of the Federal CIO stated that the lack of performance parameters for the metrics is due to OMB and the agencies needing time to collect baseline data before making changes to the metrics. However, until OMB addresses the information missing from the optimization metric definitions, the metrics will lack important and meaningful information about agencies’ DCOI performance that would assist OMB and Congress in their oversight roles. In addition, unless OMB takes action to update the metrics’ definitions to include missing key metric characteristics, agencies’ reporting may not provide an accurate view of their data center optimization progress. Further, without this information on the IT Dashboard, Congress lacks the information needed to inform its decision making and oversight responsibilities. Conclusions Federal data center consolidation efforts have been underway since 2010, and agencies continue to report progress toward meeting their goals for data center closures and achieving related savings. Specifically, almost all of the 24 DCOI agencies met, or planned to meet, their goals for data center closures in fiscal year 2019. Additionally, in fiscal year 2019, almost all of the agencies met or planned to meet their $249 million total savings target. Agencies’ efforts in both respects have made an important contribution to achieving the overall goals of DCOI. However, agencies’ annual closure goals are not currently reported in their DCOI strategic plans or tracked on the IT Dashboard, requiring us to manually calculate those targets. Unless agencies’ annual closure goals are fully reported and tracked, oversight of DCOI will be hampered. Further, the six agencies without plans to meet their fiscal year data center closure or cost savings targets will continue to be challenged to realize the full benefits of DCOI. As part of the 2019 changes to DCOI, OMB significantly reduced the scope of what is considered a data center, and, in doing so, excluded about 2,000 smaller facilities that were previously reported by agencies in 2018.
While OMB previously acknowledged that these types of facilities inefficiently consume resources and pose security risks, agencies are no longer required to report these locations in their inventories. Further, there is currently no documentation of OMB’s decisions on agency requests to remove data centers from reporting, or to exempt mission critical data centers from closure targets. By no longer reporting key facilities as part of DCOI and by not documenting decisions on which facilities are exempt from DCOI, oversight of agencies’ consolidation and optimization efforts may be impaired, and agencies may remain exposed to the related vulnerabilities. Agencies’ progress against OMB’s three revised metrics was mixed, and, for one new metric, agencies reported data that varied so widely that we concluded the data for this metric were not sufficiently reliable for us to report on. However, in comparing OMB’s four metrics against the characteristics of an effective metric, we most notably found that none of the metrics included appropriate performance parameters for evaluating agencies’ progress against goals. Metrics that include more robust and informative agency performance data can play an important role in both achieving the optimization goals and mission of DCOI and allowing for stronger oversight of those efforts. Recommendations for Executive Action In addition to reiterating our prior open recommendations to the agencies in our review regarding their need to meet DCOI’s closure and savings goals and optimization metrics, we are making a total of eight new recommendations—four to OMB and four to three of the 24 agencies. Specifically: The Director of the Office of Management and Budget should (1) require that agencies explicitly document annual data center closure goals in their DCOI strategic plans and (2) track those goals on the IT Dashboard. (Recommendation 1) The Director of the Office of Management and Budget should require agencies to report in their quarterly inventory submissions those facilities previously reported as data centers, even if those facilities are not subject to the closure and optimization requirements of DCOI. (Recommendation 2) The Director of the Office of Management and Budget should document OMB’s decisions on whether to approve individual data centers when designated by agencies as either a mission critical facility or as a facility not subject to DCOI. (Recommendation 3) The Director of the Office of Management and Budget should take action to address the key performance measurement characteristics missing from the DCOI optimization metrics, as identified in this report. (Recommendation 4) The Secretary of Agriculture should take action to achieve its data center-related cost savings target established under DCOI by OMB. (Recommendation 5) The Secretary of Commerce should take action to achieve its data center-related cost savings target established under DCOI by OMB. (Recommendation 6) The Secretary of Commerce should take action to meet its data center optimization metric targets established under DCOI by OMB. (Recommendation 7) The Administrator of the National Aeronautics and Space Administration should take action to achieve its data center-related cost savings target established under DCOI by OMB. (Recommendation 8) Agency Comments and Our Evaluation We provided a draft of this report to OMB and the 24 agencies for their review and comment.
In response, of the seven agencies to which we made recommendations, five agencies stated that they agreed with the recommendations and two agencies did not state whether they agreed or disagreed with the recommendations. In addition, of the 18 agencies to which we did not make recommendations, three agencies stated that they concurred with the information presented in the report, three other agencies did not state whether they agreed or disagreed with the report, and 12 agencies stated that they had no comments on the report. Further, four agencies provided technical comments on the report, which we incorporated as appropriate. Of the agencies to which we made recommendations, five agreed with the recommendations. In an email, a Director for Strategic Planning, Egovernment, and Audits in the Office of the CIO at Agriculture stated that the department agreed with our recommendation to achieve its data center-related cost savings target established under DCOI and that it planned to meet the cost savings target in 2020. Agriculture also included technical comments, which we have incorporated as appropriate. In written comments, Commerce agreed with our recommendations to achieve its data center-related cost savings target established under DCOI and to meet its data center optimization metric targets established under DCOI by OMB. The department also described actions that it planned to take in order to address the recommendations. Commerce’s comments are reprinted in appendix IV. In written comments, DHS agreed with our recommendation to achieve its data center-related cost savings target established under DCOI. Further, the department stated that, in its November 2019 DCOI data submission, it reported $354.97 million in cumulative DCOI cost savings through fiscal year 2019. Subsequent to reviewing our draft report, the department provided documentation of the savings claimed in its response. In reviewing these data, we confirmed that the cumulative savings included the $33.8 million savings the department had planned for fiscal year 2019. As a result, we consider our recommendation to have been addressed and therefore removed it from the final report. DHS also provided technical comments, which we have incorporated as appropriate. DHS’s comments are reprinted in appendix V. In written comments, NASA agreed with our recommendation to achieve its data center-related cost savings target established under DCOI and described actions that the agency planned to take to address the recommendation. NASA stated that it expects to complete these actions by March 31, 2020. NASA’s comments are reprinted in appendix VI. In written comments, OPM agreed with our recommendation to develop and submit to OMB a complete DCOI strategic plan. Subsequent to reviewing our draft report, OPM informed us that the agency had published its fiscal year 2019 plan, and that the agency was on track to meet the OMB reporting deadline for fiscal year 2020. We confirmed that OPM’s fiscal year 2019 strategic plan was published and publicly available through the agency’s website. As a result, we consider our recommendation to have been addressed and therefore removed it from the final report. OPM’s comments are reprinted in appendix VII. In addition, two agencies did not state whether they agreed or disagreed with the recommendations made to them. In an email, a GAO liaison on OMB’s Ethics Team provided an annotated copy of our draft report.
In OMB’s comments in that copy of the draft, OMB did not agree or disagree with our recommendations. However, OMB took issue with the report’s findings on the cybersecurity-related risks posed by the facilities removed from DCOI oversight. OMB’s comments further recommended that we remove references to cybersecurity from our report’s title and from the body of the report. In raising these objections, OMB’s comments stated that DCOI is focused on consolidating and optimizing the federal data center portfolio and that cybersecurity is not a primary driver of the initiative. OMB added that DCOI was never designed to track or directly address cybersecurity risks. Specifically, OMB’s comments took issue with our finding that data centers not tracked within DCOI are at a greater risk for a cybersecurity incident. These comments noted that many other laws, policies, and procedures directly deal with the cybersecurity posture of all federal IT systems, and that OMB’s DCOI guidance does not affect the applicability of those requirements. The comments also acknowledged that, while past DCOI guidance has stated that the reduction of data centers may improve the cybersecurity posture of federal agencies, this was because agency CIOs could better allocate constrained resources across a smaller portfolio of devices. We agree that agencies are subject to numerous cybersecurity requirements external to DCOI. We also agree that a reduced portfolio of data centers may improve the cybersecurity of an agency. However, our report focuses on OMB’s recent DCOI policy changes that allow agencies to stop tracking and reporting on over 2,000 data centers. In this discussion, we cite our July 2019 report, which found that facilities such as these represent potential access points to an agency’s systems and networks and pose a risk as points of potential attack. OMB’s policy changes do not require agencies to continue to close these points of access, nor do they yield the smaller portfolio of devices that OMB referenced in its comments on our draft report. Our report notes that OMB’s policy change to remove those data centers from DCOI reporting may contribute to agencies losing track of the security vulnerabilities that those facilities present because DCOI has provided a mechanism for ongoing visibility and oversight of these facilities separate from the federal government’s cybersecurity framework. As such, we maintain that our report accurately characterizes the increased potential for cybersecurity risk that could be posed by these now-unreported physical locations. We also affirm that our related recommendation, which calls for OMB to require agencies to report in their quarterly inventory submissions those facilities previously reported as data centers, even if those facilities are not subject to the closure and optimization requirements of DCOI, is still appropriate. In written comments, State did not say whether it agreed or disagreed with our recommendation to achieve its data center-related cost savings target established under DCOI by OMB. Subsequent to reviewing our draft report, the department informed us of $61.1 million in fiscal year 2019 optimization and consolidation cost savings and avoidances, an amount in excess of its $58.9 million fiscal year 2019 target, and provided documentation to support this claim. The department also stated that this information would be reported in the department’s annual DCOI strategic plan update in the second quarter of fiscal year 2020.
In reviewing the documentation provided by the department, we confirmed State’s reported $61.1 million in fiscal year 2019 savings. As a result, we consider our recommendation to have been addressed and therefore removed it from the final report. State’s comments are reprinted in appendix VIII. Further, of the 18 agencies to which we did not make recommendations, three agencies agreed with the information presented in the report. Via emails, audit liaisons in the Office of the CIO at Justice, the Office of the Assistant Secretary for Policy at Labor, and the Office of Congressional and Legislative Affairs at VA agreed with the findings in the draft report. In addition, three agencies did not state whether they agreed or disagreed with the report. In written responses, Defense and USAID did not state whether they agreed or disagreed with the draft report. The agencies’ responses are reprinted in appendices IX and X, respectively. In an email, an audit liaison in the OIG-GAO Audit Liaison Office at Interior did not state whether the department agreed or disagreed with the draft report. The department also provided technical comments, which we have incorporated as appropriate. Finally, 12 agencies stated that they had no comment on the report. In written responses, HUD and SSA stated that they had no comments on the draft report. The agencies’ responses are reprinted in appendices XI and XII, respectively. We also received emails from officials of Education, Energy, HHS, Transportation, Treasury, EPA, GSA, NSF, NRC, and SBA, which stated that the agencies had no comment on the report. EPA also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Director of OMB, the secretaries and heads of the departments and agencies addressed in this report, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XIII. Appendix I: Objectives, Scope, and Methodology This report addresses (1) agencies’ progress on data center closures and the related savings that have been achieved, and agencies’ plans for future closures and savings, and (2) agencies’ progress against the Office of Management and Budget’s (OMB) data center optimization targets. To address the first objective, for data center closures, we obtained and analyzed August 2019 data center inventory documentation from the 24 departments and agencies (agencies) that participate in OMB’s Data Center Optimization Initiative (DCOI). To determine data center closures to date, we totaled their reported closures for fiscal year 2019 through August 31, 2019, and, to identify future closures, we totaled their reported planned closures for fiscal years 2019 through 2022. We also compared agencies’ completed and planned closures to the planned fiscal year 2019 consolidation goals, as documented in their DCOI strategic plans. OMB’s guidance for developing agencies’ DCOI strategic plans required agencies to report cumulative numbers for their planned and achieved data center closures; as a result, we calculated agencies’ fiscal year 2019 targets from the data reported in DCOI plans.
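Because the strategic plans report cumulative figures, an annual target can be derived by differencing. The sketch below uses invented numbers and assumes the fiscal year 2019 target equals the cumulative closures planned through fiscal year 2019 minus the closures already achieved through fiscal year 2018; the report does not prescribe a specific formula, so this is an illustrative reading of the calculation rather than a documented procedure.

```python
# Hypothetical derivation of an annual closure target from cumulative
# figures. Assumption (not a documented formula): the fiscal year 2019
# target is the cumulative closures planned through fiscal year 2019
# minus the cumulative closures achieved through fiscal year 2018.
cumulative_planned_through_fy2019 = 120  # invented
cumulative_achieved_through_fy2018 = 95  # invented

fy2019_annual_target = (
    cumulative_planned_through_fy2019 - cumulative_achieved_through_fy2018
)
print(f"Derived fiscal year 2019 closure target: {fy2019_annual_target}")  # 25
```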
To verify the quality, completeness, and reliability of each agency’s data center inventory, we compared information on completed and planned data center closures to similar information reported on OMB’s IT Dashboard—a public website that provides information on federal agencies’ major IT investments. We also checked for missing data and other errors, such as missing closure status information. In some of the cases identified, we followed up with agency officials to obtain further information. We determined that the data were sufficiently complete and reliable to report on their consolidation progress and planned closures. For cost savings and avoidances, we obtained and analyzed documentation from the 24 DCOI agencies. This documentation, required by OMB’s March 2013, August 2016, and June 2019 memorandums, included the agencies’ quarterly reports of cost savings and avoidances posted to their digital services websites and their DCOI strategic plans. To determine cost savings achieved, we totaled agencies’ reported savings and avoidances from the start of fiscal year 2012 through August 2019, as found in the August 2019 quarterly reports posted to the agencies’ digital services websites. To identify future planned savings, we totaled the agencies’ projected savings and avoidances from fiscal years 2019 through 2020, as reported in their DCOI strategic plans. To assess the quality, completeness, and reliability of each agency’s data center consolidation cost savings information, we used the latest version of each agency’s quarterly cost savings report and DCOI strategic plan as of August 31, 2019. We also reviewed the quarterly reports and DCOI strategic plans for errors and missing data, such as missing cost-savings information. In addition, we compared agencies’ cost savings and avoidances with data from our most recent data center consolidation report. Further, we obtained written responses from agency officials regarding the steps they took to ensure the accuracy and reliability of their cost savings data. As a result, we determined that the data were sufficiently complete and reliable to report on agencies’ data center consolidation cost-savings information. For our second objective, we analyzed the September 2019 data center optimization progress information of the 20 DCOI agencies. This progress information was obtained from the IT Dashboard. We then compared the agencies’ current optimization progress information to agencies’ fiscal year 2019 optimization targets, as documented on the IT Dashboard. In addition, to assess the reliability of agencies’ optimization progress information on OMB’s IT Dashboard, we reviewed the information for errors or missing data, such as progress information that was not available for certain metrics. We also compared agencies’ optimization progress information across two reporting quarters to identify any inconsistencies in agencies’ reported progress. We also followed up with the agencies to understand the steps they took to ensure that what they reported to OMB was accurate and reliable. We determined that the data were sufficiently complete and reliable to report on agencies’ progress information for virtualization, advanced energy metering, and server utilization. However, for the fourth metric—data center availability—our analysis identified variances in how agencies reported their data.
According to OMB’s quarterly reporting instructions, agencies were to report the number of hours, in the 3-month reporting period, that each data center was expected to be available to provide services. Instead, several agencies reported information based on annual, rather than quarterly, calculations. In addition, Department of Agriculture officials stated that, for one data center, they reported the total number of availability hours for multiple instances where they provided data center services to other agencies. Because of these variances and the impact they had on the reported information, we determined that the availability metric data were insufficiently reliable to report on agencies’ progress. To assess whether OMB’s new performance metrics met key characteristics of an effective performance measure, we adapted principles from the Green Book that described characteristics of effective performance measures. The Green Book provides an overall framework for establishing and maintaining an effective internal control system that includes monitoring through performance measures. We then compared each OMB optimization performance metric, as defined in the revised DCOI guidance and reported on OMB’s IT Dashboard, to the criteria we identified from the Green Book to determine the extent to which each metric met each characteristic. We conducted this performance audit from April 2019 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Previously Made Recommendations on Agencies’ DCOI-related Efforts Since the enactment of FITARA in December 2014, we have reviewed and verified the quality and completeness of each covered agency’s inventory and Data Center Optimization Initiative (DCOI) strategy annually. Accordingly, we have published reports documenting the findings and recommendations from each of these reviews. In addition, we have examined and reported on agencies’ efforts to optimize their data centers, as well as the challenges encountered and successes achieved. As of December 2019, 75 of the 117 recommendations from these reports had not been fully implemented. In a report that we issued in March 2016, we noted that agencies had reported significant data center closures—totaling more than 3,100 through fiscal year 2015—but fell short of the Office of Management and Budget’s (OMB) fiscal year 2015 consolidation goal. Agencies also reported significant consolidation cost savings and avoidances—totaling about $2.8 billion through fiscal year 2015. However, we pointed out that many agencies lacked complete cost savings goals for the next several years despite having closures planned. In addition, we reported that 22 agencies had made limited progress against OMB’s fiscal year 2015 data center optimization performance metrics, such as the utilization of data center facilities. Accordingly, we recommended that the agencies take actions to complete their cost savings targets and improve optimization progress. As of December 2019, 17 of the 32 recommendations from this report had yet to be fully addressed.
In May 2017, we reported that the agencies continued to report significant data center closures—totaling more than 4,300 through August 2016—with more than 1,200 additional centers planned for closure through fiscal year 2019. The agencies also reported achieving about $2.3 billion in cost savings through August 2016. However, agencies’ total planned cost savings for fiscal years 2016 through 2018 were more than $2 billion less than OMB’s fiscal year 2018 cost savings goal of $2.7 billion. In addition, our May 2017 report identified weaknesses in agencies’ DCOI strategic plans. Of the 23 agencies that had submitted their strategic plans at the time of our review, seven had addressed all of the five required elements of a strategic plan, as identified by OMB (such as providing information related to data center closures and cost savings metrics). The remaining 16 agencies that submitted their plans either partially met or did not meet the requirements. Given these findings, we recommended that OMB improve its oversight of agencies’ DCOI strategic plans and their reporting of cost savings and avoidances. We also recommended that 17 agencies complete the missing elements in their strategic plans, and that 11 agencies ensure the reporting of consistent cost savings and avoidance information to OMB. As of December 2019, five of the 30 recommendations had not been fully addressed. In a subsequent report that we issued in August 2017, we noted that 22 of the 24 agencies required to participate in the OMB DCOI had collectively reported limited progress against OMB’s fiscal year 2018 performance targets for the five optimization metrics. Specifically, for each of the five targets, no more than five agencies reported that they had met or exceeded that specific target. In addition, we noted in the report that most agencies had not implemented automated monitoring tools to measure server utilization, as required by the end of fiscal year 2018. Specifically, four agencies reported that they had fully implemented such tools and 18 reported that they had not done so. Two agencies did not have a basis to report on progress because they did not have any agency-owned data centers. Accordingly, we recommended that OMB formally document a requirement for agencies to include plans, as part of existing OMB reporting mechanisms, to implement automated monitoring tools at their agency-owned data centers. We also recommended that the 18 agencies without fully documented plans take action within existing OMB reporting mechanisms to complete plans describing how they intended to achieve OMB’s requirement to implement automated monitoring tools at all agency-owned data centers by the end of fiscal year 2018. As of December 2019, two of the 19 recommendations had been fully addressed. In May 2018, we noted that the 24 agencies participating in DCOI reported mixed progress toward achieving OMB’s goals for closing data centers by September 2018. Over half of the agencies reported that they had either already met, or planned to meet, all of their OMB-assigned closure goals by the deadline. However, four agencies reported that they did not have plans to meet all of their assigned goals and two agencies were working with OMB to establish revised targets. With regard to agencies’ progress in achieving cost savings, 20 agencies reported planned and achieved savings that totaled $1.62 billion for fiscal years 2016 through 2018.
However, this total was approximately $1.12 billion less than OMB’s DCOI savings goal of $2.7 billion. In addition, the 24 agencies continued to report limited progress against OMB’s five data center optimization targets, with one agency meeting four targets, one meeting three targets, six meeting either one or two targets, and 14 meeting none of their targets. Further, as of August 2017, most agencies were not planning to meet OMB’s fiscal year 2018 optimization targets. Because we had previously made a number of recommendations to OMB and the 24 DCOI agencies to help improve the reporting of data center-related cost savings and to achieve optimization targets, we did not make new recommendations in our May 2018 report, but indicated that we would continue to monitor the agencies’ progress toward meeting OMB’s DCOI goals. Most recently, in April 2019, we reported that the 24 DCOI agencies continued to report mixed progress toward achieving OMB’s goals for closing data centers and realizing the associated savings by September 2018. Thirteen agencies reported that they had met, or had plans to meet, all of their OMB-assigned closure goals by the deadline. However, 11 agencies reported that they did not have plans to meet their goals. In addition, 16 agencies reported that they had met, or planned to meet, their cost savings targets, for a total of $2.36 billion in cost savings for fiscal years 2016 through 2018. This is about $0.38 billion less than OMB’s DCOI savings goal of $2.7 billion. This shortfall is the result of five agencies reporting less in planned cost savings and avoidances in their DCOI strategic plans as compared to the savings targets established for them by OMB. Three agencies did not have a cost savings target and did not report any achieved savings. Regarding data center optimization, the 24 agencies reported limited progress in fiscal year 2018 against OMB’s five optimization targets. In this regard, 12 agencies reported that they had met at least one target, while 10 reported that they had not met any of the targets. Two agencies stated that they did not have a basis to report on progress as they did not own any data centers. Further, 20 agencies did not plan to meet all of OMB’s fiscal year 2018 optimization goals. Specifically, only two agencies reported plans to meet all applicable targets, while six reported that they did not plan to meet any of the targets. As a result of these findings, we recommended that 22 agencies take actions to meet the data center closure, cost savings, and optimization performance metric targets, as appropriate. As of December 2019, none of the 36 recommendations had been fully addressed. Appendix III: Detailed Analysis of Optimization Metrics As noted previously in this report, the Office of Management and Budget (OMB) issued revised Data Center Optimization Initiative (DCOI) performance metrics in June 2019 as part of its revised DCOI guidance. According to OMB, the four current data center optimization metrics were intended to focus targeted improvements in key areas where agencies can make meaningful improvements and achieve further cost savings through optimization. OMB’s intent was to avoid using averages for metrics and instead identify metrics where agencies could demonstrate continuous improvement beyond the performance period of the June 2019 memorandum. OMB stated this would provide a more accurate measure of the agencies’ data center performance.
GAO published the Green Book, which provides the standards for internal control in the federal government and an overall framework for establishing and maintaining an effective internal control system. Such a control system addresses, in part, the attainment of a federal entity’s objectives, which is accomplished through monitoring specific performance measures. Such monitoring is also expected to assess the quality of performance over time. In addition, the Green Book discusses the importance of clearly defining an entity’s objectives in order to determine what is to be achieved and to establish related performance measures. According to the Green Book, the controls represented by an agency’s performance metrics should include several key characteristics. Clearly defined in measurable terms that are easily understood. Objective and free of bias, rather than subjective. Defined by appropriate parameters that allow for evaluating performance. Understood by all levels of the organization, including what is being achieved with the metric, who is primarily responsible for achieving the metric, how the metric will be achieved, and when the metric will be achieved. Aligned with internal and external requirements, including applicable legislation, regulations, and standards. We compared each OMB optimization performance metric, as defined in the revised DCOI guidance and reported on OMB’s IT Dashboard, to the key effective metric characteristics identified in the Green Book. In assessing each of the OMB metrics against the key characteristics, we assigned one of three categories: Met. The metric definition aligned with the characteristics of an effective metric. Partially met. The metric definition aligned with some, but not all, of the characteristics of an effective metric. Not met. The metric definition did not align with the effective metric characteristics. Virtualization OMB’s virtualization metric counted the number of servers and mainframes serving as a virtual host in an agency-managed data center. We found that the virtualization metric met three characteristics, met two of four parts of one characteristic, and did not meet one. Table 6 provides our evaluation of the extent to which this OMB metric aligned with key characteristics of an effective metric. Advanced Energy Metering OMB’s advanced energy metering metric counted the data centers with advanced energy metering covering the majority of their floor space. We found that the advanced energy metering metric met two characteristics, met three of four parts of one characteristic, and did not meet two. Table 7 provides our evaluation of the extent to which this OMB metric aligned with key characteristics of an effective metric. Server Utilization OMB’s server utilization metric counted the number of underutilized production servers in federal data centers. We found that the underutilized servers metric met three characteristics, met two of four parts of one characteristic, and did not meet one. Table 8 provides our evaluation of the extent to which this OMB metric aligned with key characteristics of an effective metric. Data Center Availability OMB’s data center availability metric calculated the ratio of uptime (when the data center services were available) to unexpected downtime (unplanned service outages) in data centers. We found that the data center availability metric met two characteristics, met two of four parts of one characteristic, and did not meet two.
Table 9 provides our evaluation of the extent to which the OMB metric aligned with key characteristics of an effective metric. Appendix IV: Comments from the Department of Commerce Appendix V: Comments from the Department of Homeland Security Appendix VI: Comments from the National Aeronautics and Space Administration Appendix VII: Comments from the Office of Personnel Management Appendix VIII: Comments from the Department of State Appendix IX: Comments from the Department of Defense Appendix X: Comments from the U.S. Agency for International Development Appendix XI: Comments from the Department of Housing and Urban Development Appendix XII: Comments from the Social Security Administration Appendix XIII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, individuals making contributions to this report included Dave Hinchman (Assistant Director), Justin Booth (Analyst-in-Charge), Lamis Alabed, Chris Businsky, Nancy Glover, Gina Hoover, and Jonathan Wall.
Why GAO Did This Study In December 2014, Congress enacted federal IT acquisition reform legislation that included provisions related to ongoing federal data center consolidation efforts. OMB's Federal Chief Information Officer launched DCOI to build on prior data center consolidation efforts; improve federal data centers' performance; and establish goals for inventory closures, cost savings and avoidances, and optimization performance. The 2014 legislation included a provision for GAO to annually review agencies' data center inventories and strategies. This report addresses (1) agencies' progress and plans for data center closures and savings; and (2) agencies' progress against OMB's June 2019 revised data center optimization metrics. To do so, GAO assessed the 24 DCOI agencies' data center inventories as of August 2019, reviewed their reported cost savings documentation, evaluated their data center optimization strategic plans, and assessed their progress against OMB's established optimization targets. GAO also compared OMB's revised metrics to key characteristics of an effective performance measure. What GAO Found The 24 agencies participating in the Office of Management and Budget's (OMB) Data Center Optimization Initiative (DCOI) reported progress toward achieving OMB's fiscal year 2019 goals for closing unneeded data centers. As of August 2019, 23 of the 24 reported that they had met, or planned to meet, their fiscal year closure goals, and would close 286 facilities in doing so (see figure). Agencies also reported plans to close at least 37 of the remaining data centers. OMB issued revised guidance in June 2019 that narrowed the scope of the type of facilities that would be defined as a data center. This revision eliminated the reporting of over 2,000 facilities government-wide. OMB had previously cited cybersecurity risks for these types of facilities. Without a requirement to report on these facilities, important visibility is diminished, including oversight of security risks. The 24 DCOI agencies have reported a total of $4.7 billion in cost savings from fiscal years 2012 through 2019. Of the 24 agencies, 23 reported in August 2019 they had met, or planned to meet, OMB's fiscal year 2019 savings goal of $241.5 million. One agency did not complete a plan, but planned to do so in the future. Agencies also reported plans to save about $264 million in fiscal year 2020. The 24 agencies reported progress against OMB's three revised data center optimization metrics for virtualization, advanced energy metering, and server utilization. For a new fourth metric (availability), the data were not sufficiently reliable to report on because of unexpected variances in the information reported by the agencies. As of August 2019, eight agencies reported that they met all three targets for the metrics GAO reviewed, five met two targets, and six met one target. In addition, one agency had not established any targets, and four agencies reported that they no longer owned any data centers. While the three revised metrics' definitions included the key characteristics of being clearly defined and objective, none included statistical universe parameters that enable determinations of progress. Specifically, these metrics call for counts of the actual numbers of (1) virtualized servers, (2) data centers with advanced energy metering, and (3) underutilized servers; but the metrics did not include a count of the universe of all servers and all data centers.
Accordingly, percentages cannot be calculated to determine progress—for example, the number of virtualized servers may increase, but if the universe of servers increases at a higher rate, then progress would actually be negative. What GAO Recommends To improve DCOI reporting and performance, GAO is making four recommendations to OMB and four to three selected agencies. The three agencies agreed with the recommendations, while OMB did not state whether it agreed or disagreed. GAO continues to maintain that the four recommendations to OMB are warranted.
Background The Improper Payments Information Act of 2002 (IPIA), as amended by IPERA and the Improper Payments Elimination and Recovery Improvement Act of 2012, requires executive branch agencies, among other things, to (1) review all programs and activities and identify those that may be susceptible to significant improper payments (commonly referred to as conducting a risk assessment), (2) publish improper payment estimates for those programs and activities that the agency identified as being susceptible to significant improper payments, (3) implement corrective actions to reduce improper payments and set reduction targets, and (4) report on the results of addressing the foregoing requirements. IPERA also requires executive agencies’ IGs to annually determine and report on whether their respective agencies complied with six IPERA-related criteria. If an agency does not meet one or more of the six IPERA criteria for any of its programs or activities, the IG considers the agency to be noncompliant overall. The six criteria are as follows: 1. publish a financial report in the form and including all content required by OMB—typically an AFR or a PAR—for the most recent fiscal year, and post that report on the agency website; 2. conduct a program-specific risk assessment, if required, for each program or activity that conforms with IPIA, as amended; 3. publish improper payment estimates for all programs and activities deemed susceptible to significant improper payments; 4. publish corrective action plans for those programs and activities assessed to be susceptible to significant improper payments; 5. publish and meet annual reduction targets for all programs and activities assessed to be at risk for significant improper payments; and 6. report a gross improper payment rate of less than 10 percent for each program and activity for which an improper payment estimate was published. As described above, not all criteria are applicable to every agency. For example, if an agency publishes a financial report and conducts a risk assessment and determines that none of its programs or activities are susceptible to significant improper payments, then the remaining criteria would not be applicable. OMB plays a key role in implementing laws related to improper payment reporting. As required by statute, OMB has established guidance for federal agencies on estimating, reporting, reducing, and recovering improper payments. Such guidance includes OMB Circular A-123 Appendix C, Requirements for Payment Integrity Improvement, which also includes guidance to IGs on determining agency compliance with IPERA. The Council of the Inspectors General on Integrity and Efficiency (CIGIE) also published guidance in July 2019 to assist IGs who are required to conduct an annual improper payment review under IPERA. We continued to report improper payments as a material weakness in internal control in our audit report on the U.S. government’s consolidated financial statements for fiscal years 2018 and 2017 because of the federal government’s inability to determine the full extent to which improper payments occur and reasonably ensure that appropriate actions are taken to reduce them. We have also reported that estimation of improper payments is key to understanding the extent of the problem and to developing effective corrective actions to address it.
However, the government’s ability to understand the full scope of its improper payments is hindered by incomplete, unreliable, or understated estimates; risk assessments that may not accurately assess the risk of improper payment; and noncompliance with criteria listed in IPERA. For example, we previously reported that issues and inconsistencies we identified in selected agencies’ processes for estimating improper payments may affect the quality of their estimates. In addition, certain IGs have reported issues with their agencies’ reported improper payment estimates that were caused by insufficient sampling methods and flawed estimation methodologies. Federal Agencies’ Estimates of Fiscal Year 2019 Improper Payments Totaled $175 Billion Based on agencies that reported improper payment estimates in their AFRs and PARs, government-wide estimated improper payments for fiscal years 2019 and 2018 totaled about $175 billion and $151 billion, respectively. See appendix I for the reported amounts by agency and program for fiscal years 2019 and 2018. As shown in figure 1, of the $175 billion for fiscal year 2019, about $121 billion (approximately 69 percent) is concentrated in three program areas: (1) Medicaid, totaling about $57.4 billion (approximately 32.8 percent); (2) Medicare (comprising three reported programs: Fee-for-Service (Parts A and B), Advantage (Part C), and Prescription Drug (Part D)), totaling about $46.2 billion (approximately 26.5 percent); and (3) Earned Income Tax Credit (EITC), totaling about $17.4 billion (approximately 9.9 percent). Key information contained in agency AFRs and PARs regarding the types and causes of fiscal year 2019 estimates of improper payments, and reasons for significant changes in reported estimates from fiscal year 2018, is summarized as follows: The $175 billion total of reported government-wide estimates for fiscal year 2019 is broken down per OMB’s Paymentaccuracy.gov Data Call Instructions by type as follows: overpayments, totaling about $79.1 billion (approximately 45.2 percent); underpayments, totaling about $12.9 billion (approximately 7.4 percent); unknown, totaling about $74.1 billion (approximately 42.4 percent); and technically improper due to statute or regulation, totaling about $8.7 billion (approximately 5 percent). About $74.6 billion (approximately 42.7 percent) of the government-wide estimates was reported as monetary loss. About $151.2 billion (approximately 86.6 percent) of the reported government-wide improper payment estimates for fiscal year 2019 related to root causes that occurred in the three areas below. See appendix II for details on the root causes that agencies identified for their reported improper payment estimates for fiscal year 2019. Insufficient documentation to determine payment accuracy. About $74.1 billion (approximately 42.4 percent) resulted from situations where the agency lacked supporting documentation necessary to verify the accuracy of the payments. Administrative or process error. About $39.1 billion (approximately 22.4 percent) resulted from incorrect data entry, classifying, or processing of applications or payments. Inability to authenticate eligibility. About $38 billion (approximately 21.8 percent) resulted from the agency not being able to authenticate eligibility criteria. The fiscal year 2019 total reported government-wide estimated improper payments, among programs that reported estimates, increased by about $24 billion from the fiscal year 2018 total reported.
While decreases in estimated improper payments were reported for several programs, these were offset by increases for certain other programs. Between fiscal years 2018 and 2019, six programs had an increase and five programs had a decrease of over $1 billion in estimated improper payments. Appendix III provides information on all the programs that had a substantial change in estimated improper payments between fiscal years 2018 and 2019 and the reasons for those changes as reported in agency AFRs. Examples of substantial changes in improper payments and the reasons for such changes that agencies provided in their AFRs include the following: Department of Health and Human Services (HHS) reported an increase in the total estimated improper payments for the Medicaid program in excess of $21.1 billion for fiscal year 2019. The majority of the increase in the total estimated improper payments for the Medicaid program was due to HHS’s reintegration of the eligibility component of the Payment Error Rate Measurement (PERM) for Medicaid for fiscal year 2019. From fiscal years 2015 through 2018, HHS did not estimate improper payments attributed to eligibility determinations, but did include a proxy estimate, which was the last reported rate in fiscal year 2014 for the eligibility component, while HHS worked to update this component. For fiscal year 2019, HHS estimated improper payments attributed to eligibility determinations in 17 states (about one-third of all states). HHS’s national eligibility estimated improper payment rate still includes a proxy estimate for the 34 remaining states that have not yet been measured since the reintegration of the PERM eligibility component. HHS reported that most eligibility errors identified through the new measurement process were due to insufficient documentation to verify eligibility or noncompliance with eligibility redetermination requirements. HHS also reported that these insufficient documentation situations were related primarily to income or resource verifications. HHS’s fiscal year 2019 AFR noted that another significant cause for estimated Medicaid improper payments is errors resulting from state noncompliance with provider screening and enrollment requirements. The Department of the Treasury (Treasury) began reporting improper payment estimates for fiscal year 2019 for two programs deemed newly susceptible to significant improper payments. Specifically, Treasury reported about $7.2 billion and $2.1 billion in improper payment estimates for the Additional Child Tax Credit and American Opportunity Tax Credit, respectively. In addition, HHS reported a decrease in the total estimated improper payments for the Medicare Fee-for-Service (Parts A and B) program of about $2.7 billion. According to HHS’s fiscal year 2019 AFR, the decrease in the estimate is due to a reduction in estimated improper payments for home health; Medicare Fee-for-Service Part B; and Durable Medical Equipment, Prosthetics, Orthotics, and Supplies claims. As stated earlier, the federal government’s ability to understand the full scope of its improper payments is hindered by incomplete, unreliable, or understated agency estimates and risk assessments that may not accurately assess the risk of improper payment.
For example, certain federal programs and activities that agencies determined to be at risk for significant improper payments did not report estimates of improper payments for fiscal year 2019, including the Premium Tax Credit and Temporary Assistance for Needy Families programs. In addition, as we previously reported, the Department of Defense (DOD) lacks quality assurance procedures to ensure the completeness and accuracy of the payment populations from which it develops improper payment estimates. CFO Act Agencies’ Reported Compliance with IPERA Half of the CFO Act Agencies Were Reported as Compliant for Fiscal Year 2018 Eight years after the implementation of IPERA, half of the 24 CFO Act agencies were compliant with IPERA overall for fiscal year 2018, as reported by their IGs. See appendix IV for each CFO Act agency’s overall compliance with IPERA. With regard to the six IPERA criteria, as shown in figure 2, IGs reported all agencies as compliant with the requirement to conduct program-specific risk assessments, where that requirement was applicable to the agency. In addition, 22 of 24 agencies (92 percent) met the requirement to publish a PAR or AFR. Based on the IGs’ fiscal year 2018 compliance reports, agencies were most frequently reported as noncompliant with the IPERA requirement to publish and meet annual targets for improper payment reduction. Out of the 14 agencies for which this requirement was applicable, IGs for eight agencies (57 percent) reported that their agencies were noncompliant. The second most-frequently reported area of noncompliance related to the IPERA requirement for agencies’ reported improper payment rates to be below 10 percent for programs that published estimates. Out of the 15 agencies for which this requirement was applicable, IGs for five agencies (33 percent) reported that their agencies were noncompliant. See appendix IV for additional details on each CFO Act agency’s compliance with the six IPERA criteria for fiscal year 2018, as reported by their IG. In addition, IGs for certain CFO Act agencies reported quality issues in their agencies’ reporting of improper payment data. Although the issues did not result in noncompliance with the related IPERA criterion, the IGs noted these as areas that need improvement. For example, one agency reported inaccurate amounts for identified and recaptured improper payments in its AFR. However, the IG reported that the agency was compliant with the IPERA criterion for publishing financial information in a PAR or AFR. Another agency’s IG reported that its agency did not accurately evaluate its corrective actions’ effectiveness in recapturing improper payments. However, the IG reported that the agency was compliant with the IPERA criterion to publish corrective action plans. As stated above regarding the IGs’ determinations of compliance with IPERA criteria, these determinations are based on whether the agency met the requirements and are not a judgment on the quality of the work conducted in order to meet those requirements. Trends in Reported Overall IPERA Compliance for Fiscal Years 2016 through 2018 As stated above, IGs for 12 of the 24 CFO Act agencies reported that their agencies were compliant with IPERA overall for fiscal year 2018. As shown in figure 3, this is an increase from 10 agencies reported as compliant for fiscal year 2017, and 11 agencies reported as compliant for fiscal year 2016.
The improvement in IPERA compliance is attributable to the Departments of Commerce and Education, which were reported by their IGs as noncompliant in fiscal year 2017 but compliant in fiscal year 2018. No agencies that IGs reported as compliant in fiscal year 2017 were reported as noncompliant in fiscal year 2018. In addition, the IGs reported that 21 programs within these agencies were noncompliant with IPERA for each of the past 3 fiscal years (2016–2018). Improper payment estimates for these programs totaled about $78 billion, representing approximately 52 percent of the $151 billion government-wide reported improper payment estimates for fiscal year 2018. As shown in table 1, this includes improper payment estimates for Medicaid of about $36 billion and for EITC of about $18 billion. As shown in figure 4, the number of programs reported as noncompliant with IPERA for 3 or more consecutive years has increased since fiscal year 2016. Specifically, the number of programs reported as noncompliant for 3 or more consecutive years increased from 14 programs in fiscal year 2016 to 18 programs in fiscal year 2017 and 21 programs in fiscal year 2018. The reported improper payment estimates for these programs totaled about $109 billion for fiscal year 2016, $74 billion for fiscal year 2017, and $78 billion for fiscal year 2018. The total improper payment estimates for programs reported as noncompliant for 3 or more consecutive years decreased for fiscal year 2017 primarily because the Medicare Fee-for-Service program, with about $41 billion of improper payments in fiscal year 2016, was reported as compliant beginning in fiscal year 2017. Agency Comments We provided a draft of this report to OMB and CIGIE for review and comment. CIGIE stated that it had no comments. OMB did not provide any comments. We also provided the full draft for review and comment to agencies and respective IG offices we met with throughout the course of this work. In addition, we sent summary facts to other agencies that had substantial changes in reported improper payment estimates between fiscal years 2018 and 2019 (as shown in app. III), and provided the full draft for review and comment, upon request, to those agencies. We received written comments from the U.S. Agency for International Development, which are reproduced in appendix V. The Department of Health and Human Services, the Department of Veterans Affairs, and the Social Security Administration’s Office of Inspector General provided technical comments, which we incorporated in the report as appropriate. The remaining agencies and IG offices informed us that they had no comments. We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, the Chairman of the Council of the Inspectors General on Integrity and Efficiency, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Reported Improper Payment Estimates and Rates by Agency and Program for Fiscal Years 2019 and 2018 Table 2 details the improper payment estimates and rates that federal agencies reported to the Office of Management and Budget or in their agency financial reports or performance and accountability reports for fiscal years 2019 and 2018. In addition, as shown in table 2, 17 programs had a substantial change in their reported improper payment estimates or rates between fiscal years 2018 and 2019. The reasons for the changes, as reported in the agency financial reports, are detailed in appendix III. Appendix II: Agency-Reported Root Causes for Improper Payment Estimates for Fiscal Year 2019 Table 3 shows the government-wide agency-reported improper payment estimates and rates for fiscal year 2019, grouped by Office of Management and Budget (OMB) improper payment root cause categories. OMB defines the root cause categories as follows: Insufficient documentation to determine: For this category, there is a lack of supporting documentation necessary to verify the accuracy of a payment identified in the improper payment testing sample. For example, a program does not have documentation to support a beneficiary’s eligibility for a benefit, and without that particular documentation, the agency is unable to discern that the payment was for the correct amount or went to the right recipient. Administrative or process errors: In this category, errors were caused by incorrect data entry, classifying, or processing of applications or payments. For example, an eligible beneficiary receives a payment that is too high or too low because of a data entry mistake (such as transposing a number) or an agency enters an incorrect invoice amount into its financial system. Inability to authenticate eligibility: In this category, an improper payment is made because the agency is unable to authenticate eligibility criteria. These types of errors include, but are not limited to, situations where (1) the agency is unable to access the data needed or (2) the data needed do not exist. Program design or structural issue: For this category, improper payments result from the design of the program or a structural issue. For example, a program may have a statutory (or regulatory) requirement to pay benefits when due, regardless of whether all the information needed to confirm payment accuracy has been received. Medical necessity: For this category, a medical provider delivers a service or item that does not meet coverage requirements for medical necessity (for example, providing a power wheelchair to a patient whose medical record does not support meeting coverage requirements for a power wheelchair). Failure to verify data: In this category, the agency (federal, state, or local), or another party administering federal dollars, fails to verify appropriate data to determine whether a recipient should be receiving a payment, even though such data exist in government or third-party databases. In these situations, the data needed exist, and the agency or other party administering federal dollars had access to them but did not check the payment against those data prior to making the payment. Other reason: This category covers improper payments that do not fit any of the above categories.
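Grouping estimates by root cause, as table 3 does, amounts to summing each reported amount under its assigned category and computing each category’s share of the total. The sketch below illustrates that aggregation using entirely invented program figures; the category names are OMB’s, but none of the amounts correspond to actual reported data.

```python
from collections import defaultdict

# Invented program-level amounts (dollars in billions), each tagged with
# an OMB root cause category. The categories are real; the figures are not.
estimates = [
    ("Program A", 10.0, "Insufficient documentation to determine"),
    ("Program B", 4.0, "Administrative or process errors"),
    ("Program C", 6.0, "Inability to authenticate eligibility"),
    ("Program D", 2.0, "Insufficient documentation to determine"),
]

totals = defaultdict(float)
for _program, amount, root_cause in estimates:
    totals[root_cause] += amount

grand_total = sum(totals.values())
for root_cause, amount in sorted(totals.items(), key=lambda item: -item[1]):
    share = amount / grand_total
    print(f"{root_cause}: ${amount:.1f} billion ({share:.1%})")
```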
Appendix III: Programs with Substantial Changes in Reported Improper Payment Estimates or Rates from Fiscal Year 2018 to Fiscal Year 2019
Table 4 shows the 17 programs that had a substantial change in the improper payment estimates or rates between fiscal years 2018 and 2019, and the reasons for those changes, as reported in the agency financial reports.
Appendix IV: Fiscal Year 2018 CFO Act Agencies’ IPERA Compliance as Reported by Their Inspectors General
Figure 5 details the Chief Financial Officers Act of 1990 (CFO Act) agencies’ overall compliance with the Improper Payments Elimination and Recovery Act of 2010 (IPERA), as well as the agencies’ compliance with each of the six IPERA criteria for fiscal year 2018, as reported by their inspectors general.
Appendix V: Comments from the U.S. Agency for International Development
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Beryl H. Davis, (202) 512-2623 or davisbh@gao.gov
Staff Acknowledgments
In addition to the contact named above, Matt Valenta (Assistant Director), Cherry Vasquez (Auditor in Charge), Pat Frey, Jason Kelly, Jim Kernen, Anne Thomas, Judy Tsan, and Landon Western made key contributions to this report.
Why GAO Did This Study
Improper payments—payments that should not have been made or that were made in incorrect amounts—continue to be an area of fiscal concern in the federal government. Improper payments have been estimated to total almost $1.7 trillion government-wide from fiscal years 2003 through 2019. From fiscal year 2003 through 2016, a government-wide estimate and rate had been included in government-wide financial reports based on the programs and activities that reported estimates. However, financial reports for fiscal years 2017 and 2018 did not include a government-wide improper payment estimate or rate. Agency-reported improper payment estimates are posted on the Office of Management and Budget’s Paymentaccuracy.gov website. IPERA requires IGs to annually determine and report on whether executive branch agencies complied with six IPERA criteria, such as conducting risk assessments and publishing and meeting improper payment reduction targets. This report summarizes (1) federal agencies’ reported improper payment estimates for fiscal years 2018 and 2019, and reasons for substantial changes between years, and (2) CFO Act agencies’ compliance with IPERA criteria for fiscal year 2018, as determined by their IGs, and overall compliance trends for fiscal years 2016 through 2018. GAO summarized (1) improper payment estimates from agency financial reports and Paymentaccuracy.gov and (2) information on CFO Act agencies’ IPERA compliance reported in IGs’ fiscal year 2018 IPERA compliance reports and prior GAO reports.
What GAO Found
Agency-reported improper payment estimates for fiscal year 2019 totaled about $175 billion, based on improper payment estimates reported by federal programs, an increase from the fiscal year 2018 total of $151 billion. Of the $175 billion, about $121 billion (approximately 69 percent) was concentrated in three program areas: (1) Medicaid, (2) Medicare, and (3) Earned Income Tax Credit. About $74.6 billion (approximately 42.7 percent) of the government-wide estimate was reported as monetary loss, an amount that should not have been paid and in theory should or could be recovered. However, the federal government’s ability to understand the full scope of its improper payments is hindered by incomplete, unreliable, or understated agency estimates; risk assessments that may not accurately assess the risk of improper payment; and agencies not complying with reporting and other requirements in the Improper Payments Elimination and Recovery Act of 2010 (IPERA). Eight years after the implementation of IPERA, half of the 24 Chief Financial Officers Act of 1990 (CFO Act) agencies—whose estimates account for over 99 percent of the federal government’s reported estimated improper payments—complied with IPERA overall for fiscal year 2018, as reported by their inspectors general (IG). Based on the IGs’ fiscal year 2018 compliance reports, agencies were most frequently reported as noncompliant with the requirement to publish and meet annual targets for improper payment reduction. Out of the 14 agencies for which this requirement was applicable, eight agencies were noncompliant. The second most frequently reported area of noncompliance related to the requirement for agencies’ reported improper payment rates to be below 10 percent for programs that published estimates. Out of the 15 agencies for which this requirement was applicable, five agencies were noncompliant.
Chief Financial Officers Act of 1990 Agencies’ Fiscal Year 2018 Compliance with IPERA Criteria, as Reported by Their IGs
The IGs reported that 21 programs were noncompliant with IPERA for each of the past 3 fiscal years (2016–2018). These programs represented about $78 billion, or approximately 52 percent of the $151 billion government-wide reported improper payment estimates for fiscal year 2018.
Background
The federal government faces long-standing challenges in strategically managing its workforce. As shown in table 1, in addition to strategic human capital management, skills gaps played a role in 16 of the 34 other high-risk areas on our 2019 High-Risk List, including information technology management and acquisitions, and veterans’ health care. We have also designated 29 of our prior recommendations to OPM as priority recommendations because, upon implementation, they may have an especially significant impact on OPM’s operations. Twenty-one of these priority recommendations are aimed at addressing government-wide human capital challenges, including some of the ones discussed above. OPM agreed or partially agreed with most of these recommendations. OPM has implemented 10 of these priority recommendations to date, but needs to take additional action on the other 11. For example, OPM should continue to streamline hiring authorities to strengthen the government’s ability to compete in the labor market for top talent and improve the federal hiring process. We will continue to monitor OPM’s progress in implementing our recommendations.
Federal Human Capital Management Challenges are Long-Standing and Systemic
The government’s system of current employment policies was designed generations ago for a workforce and types of work that largely no longer exist. Much has changed since the Civil Service Reform Act of 1978 and the Classification Act of 1949 laid the foundation of today’s federal personnel system. We have identified several structural challenges within the federal human capital system that impede the ability of agencies to recruit, retain, and develop employees, both today and in the future. For example:
Classification system. The General Schedule classification system—which defines and organizes federal positions, primarily to assign rates of pay—has not kept pace with the government’s evolving requirements.
Recruiting and hiring. Federal agencies need a hiring process that is applicant friendly and flexible, and meets policy requirements.
Pay system. Employees are compensated through an outmoded system that (1) rewards length of service rather than individual performance and contributions, and (2) automatically provides across-the-board annual pay increases, even to poor performers.
Performance management. Federal agencies have faced long-standing challenges developing modern, credible, and effective employee performance management systems and dealing with poor performers.
Additionally, the changing nature of federal work and the high percentage of employees eligible for retirement could produce gaps in leadership and institutional knowledge. It could also threaten to aggravate the problems created by existing skills gaps. For example, 31.6 percent of permanent federal employees who were on board as of September 30, 2017, will be eligible to retire in the next five years, with some agencies, such as the Department of Housing and Urban Development and the Environmental Protection Agency, having particularly high levels of employees eligible to retire. In March 2019, we identified key trends in agency operations and attitudes toward work that are affecting how federal work is done and, consequently, the skills and competencies that workers will need to accomplish agency missions (see fig. 1). Agencies will need to apply talent management strategies that are adapted to these trends to recruit, develop, and retain a high-performing workforce and better meet their missions.
Key Talent Management Strategies Can Help Agencies Be More Competitive in a Tight Labor Market
In light of trends and other challenges facing the government’s human capital management efforts, our prior work has identified actionable strategies that agencies may be able to use to effectively manage the future federal workforce in key talent management areas (see table 2). We noted that while these strategies are not an exhaustive list, collectively they suggest basic steps that agencies can take within existing authorities to position themselves to meet their talent needs. These practices are based on our review of related reports, group interviews with federal Chief Human Capital Officers (CHCO), and interviews with selected private organizations and foreign governments. For each strategy, we highlight examples of the challenges agencies face, actions OPM can take to implement related recommendations from our prior work, and practices that may help agencies implement the strategy.
Align human capital strategy with current and future mission requirements. With shifting attitudes toward work, technological advances, and increased reliance on nonfederal partners, agencies need to identify the knowledge and skills necessary to respond to current and future demands. Key practices include identifying and assessing existing skills, competencies, and skills gaps. In May 2014, we reported that agencies should be aware of existing skills and competencies in their workforce to help inform workforce planning. As one example, the Department of the Treasury CHCO told us that, following the Puerto Rico debt crisis—where it needed to be able to identify the necessary skills to manage the crisis—the agency decided to implement an Integrated Talent Management System to facilitate workforce and succession planning as well as learning and performance management.
Acquire and assign talent. To ensure agencies have the talent capacity to address evolving mission requirements and to counter negative perceptions that some hold of federal work (e.g., that it is too bureaucratic), agencies can cultivate a diverse talent pipeline through strategic partnerships with academic and other institutions, highlight their respective missions, recruit early in the school year, support rotations, and assign talent where needed. As one example, consulting firm representatives that we interviewed for our prior work stated that their internship programs are among their most successful practices for cultivating a talent pipeline because the firms can offer full-time positions to rising seniors during the internship. A representative from one consulting firm said that, after experiencing challenges in recruiting on college campuses, the firm built a competitive internship program to promote the firm’s brand and reputation. Participants in the firm’s 10-week program are paid and assigned challenging projects, and successful participants are given job offers upon completion. According to the representative, approximately a quarter of the firm’s workforce is former interns. Similarly, CHCOs and federal employee and management group representatives we interviewed noted that internships are important for establishing a pipeline for recruitment.
The federal government’s Pathways Programs, which consist of the Internship Program, the Recent Graduates Program, and the Presidential Management Fellows Program, were designed to promote employment opportunities for students and recent graduates by providing distinct paths to federal internships and potential careers in government. The Internship Program provides paid opportunities for students (high school, vocational, technical, undergraduate, and graduate) to work in agencies and explore federal careers while still in school. Students who successfully complete academic and program requirements may be eligible for non-competitive conversion to a term or permanent position in the civil service. In our prior work, we have also reported on the importance of cultivating a diverse talent pipeline through active campus recruiting, which includes developing long-term institutional relationships with faculty, administrators, and students, and by building a “brand” on campus. Other strategies to expand a talent pool include developing strategic partnerships with such entities as trade schools, apprentice programs, and affinity organizations from across the country. Another strategy for attracting strong candidates is for agencies to highlight their missions and innovative work, which, according to our expert and CHCO interviews, can help counter negative perceptions of federal employment. For example, the Department of Homeland Security (DHS) provides “Day in the Life” information on its work to promote public awareness of how its everyday tasks tie in with its mission of protecting the United States, according to the DHS CHCO. The DHS CHCO stated that promoting agency mission can be done while cultivating a talent pipeline and assessing applicants’ abilities. The department holds recruitment events where potential candidates can participate in law enforcement-related activities such as fitness testing. The CHCO noted that these events both promote homeland security careers and help prospective candidates determine if a position is a good fit for them.
Incentivize and compensate employees. While federal agencies may struggle to offer competitive pay in certain labor markets, they can leverage existing incentives that appeal to workers’ desire to set a schedule and to work in locations that provide work-life balance. However, agencies do not always promote these benefits and incentives as part of a total compensation package, in part because managers are not always aware of the importance of doing so. Some agencies are addressing this issue by advertising and helping employees use available benefits, work-life balance programs, and other resources. For example, the National Science Foundation offers employees many opportunities to learn about existing benefits, according to the foundation’s CHCO. These opportunities include triannual retirement seminars where employees receive personalized retirement estimates, quarterly financial planning seminars where employees receive a free 1-hour consultation, and annual benefit fairs where employees can learn about various health care providers, the work-life programs, and the employee assistance program. Our prior analysis of CHCO and expert interviews also found that employees may value different benefits and incentives depending on their stage in life. By better understanding the desires of the workforce at various life stages, agencies can better tailor benefits packages and incentives to their employees.
For example, the Social Security Administration’s CHCO said that the agency’s younger workers value work-life and wellness programs, so the agency implemented a health-tracking program and a fitness discount program for all employees. CHCOs also suggested identifying and incorporating the benefits that would be most useful to various groups of employees, such as sabbaticals for midlevel employees or paid parental leave for employees starting families. One CHCO found that her cybersecurity workforce values subsidies for training and additional certifications more than bonus pay. Further, OPM’s 2018 Federal Work-Life Survey Governmentwide Report found that the number of respondents who anticipate adult dependent care responsibilities in the next 5 years (31 percent) is double the number of respondents with current adult dependent care needs (15 percent). OPM officials stated that, in light of this change, agencies may need to provide greater workplace flexibilities and other support services to retain talent. Some CHCOs we interviewed for prior work said that they believe that paid parental leave could be a powerful retention tool for federal workers. Representatives from consulting firms that we interviewed said that they have observed positive impacts from these types of benefit programs. For example, representatives from one firm said that providing employees with peace of mind when managing life events helps them feel more committed to the organization.
Engage employees. Engaged employees are more productive and less likely to leave, according to OPM. Agencies can better ensure their workforces are engaged by managing employee performance, involving employees in decisions, and developing employees. Experts we interviewed for prior work said that employees desire an environment where they can collaborate with their peers and feel a sense of camaraderie. In contrast, even a small number of poor performers can negatively affect employee morale and agencies’ capacity to meet their mission, according to CHCOs and our previous work. In the 2017 Federal Employee Viewpoint Survey (FEVS), 64 percent of federal employee respondents agreed that their supervisor provides them with constructive suggestions to improve job performance, and 31 percent agreed that steps are taken to deal with poor performers. Without effective performance management, agencies risk not only losing the skills of top talent but also missing the opportunity to effectively address increasingly complex and evolving mission challenges. Agencies can make performance management more effective by improving the selection and training of supervisors and managers, creating a “line of sight” between individual performance and organizational results, and implementing meaningful reward programs. Our prior analysis found that employees seek autonomy in the workplace, meaningful work, and opportunities to achieve results by developing creative and innovative solutions. Also, experts noted that in some cases, connecting federal employees to a sense of inclusion and meaning can compensate for the opportunity to make higher salaries in other sectors. Creating an inclusive work environment is one practice that can help increase employee involvement in decisions. CHCOs and federal employee and management group representatives said that more can be done to prioritize training, even in an era of resource constraints. In 2017, only 55 percent of FEVS respondents were satisfied with training.
As an example of an agency prioritizing training efforts, the Social Security Administration has national and regional development programs that offer 12 to 18 months of training and rotations for entry-, mid-, and senior-level employees to strengthen foundational, technical, and leadership knowledge and skills, according to the agency’s CHCO. For example, its Leadership Development Program assigns selected GS-9 through GS-12 employees to developmental assignments in new areas of work, and provides leadership training that broadens their perspective of the agency’s mission. Chairman Connolly, Ranking Member Meadows, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact Robert Goldenkoff, Director, Strategic Issues, at (202) 512-2757 or GoldenkoffR@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Shirley Hwang (Assistant Director), Shelby Kain (Analyst-In-Charge), Sarah Green, Allison Gunn, and Alexander Ray. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
The federal workforce is critical to federal agencies’ ability to address the complex social, economic, and security challenges facing the country. However, the federal government faces long-standing challenges in strategically managing its workforce. We first added federal strategic human capital management to our list of high-risk government programs and operations in 2001. Although Congress, OPM, and individual agencies have made improvements since then, federal human capital management remains a high-risk area because mission-critical skills gaps within the federal workforce pose a high risk to the nation. This testimony focuses on (1) key hiring and other human capital management challenges facing federal agencies, and (2) talent management strategies identified from GAO’s prior work that agencies can use to be more attractive employers in a tight labor market. This testimony is based on GAO’s large body of work on federal human capital management issued primarily between July 2014 and July 2019. To conduct these studies, GAO reviewed government-wide employment data and interviewed officials from OPM and subject matter specialists from think tanks, academia, government employee unions, and other areas.
What GAO Found
Outmoded approaches to personnel functions such as job classification, pay, and performance management are hampering the ability of agencies to recruit, retain, and develop employees. At the same time, agency operations are being deeply affected by a set of evolving trends in federal work, including how work is done and the skills that employees need to accomplish agency missions. Given these challenges and trends, federal agencies will need to apply talent management strategies such as the following:
Align human capital strategy with current and future mission requirements. Agencies need to identify the knowledge and skills necessary to respond to current and future demands. Key practices include identifying and assessing existing skills, competencies, and skills gaps.
Acquire and assign talent. To ensure the appropriate capacity exists to address evolving mission requirements, agencies can use internships, cultivate a diverse talent pipeline, highlight their respective missions, and recruit early in the school year.
Incentivize and compensate employees. While agencies may struggle to offer competitive pay in certain labor markets, they can leverage existing incentives that appeal to workers’ desire to set a schedule and to work in locations that provide work-life balance.
Engage employees. Engaged employees are more productive and less likely to leave, according to the Office of Personnel Management (OPM). Agencies can better ensure their employees are engaged by managing their performance, involving them in decisions, and providing staff development.
What GAO Recommends
Of the 29 recommendations to OPM that GAO has designated as priorities for implementation, 21 are aimed at improving strategic human capital management efforts government-wide. OPM agreed or partially agreed with most of these recommendations, of which 11 are still open. GAO will continue to monitor OPM’s progress in addressing them.
Background
The federal government is the largest real property owner in the United States, with a vast inventory costing billions of dollars annually to operate and maintain. Federally owned buildings include courthouses, offices, warehouses, hospitals, housing, data centers, and laboratories. GSA acts as the federal government’s landlord and is responsible for designing, constructing, and managing federal buildings that are occupied by federal agencies and the judiciary. Each year, GSA spends hundreds of millions of dollars on major construction projects, which include both new construction and repairs and alterations (R&A) to existing federal buildings. R&A projects can range from building system replacements and security upgrades to full building renovations. GSA manages its major construction projects through its central office in Washington, D.C., and its 11 regional offices. GSA’s central office establishes programming, design, and construction standards and guidance, and provides technical assistance, as needed, to the regional offices that are responsible for project implementation. To obtain authorization for projects above a defined threshold, GSA must submit to certain congressional committees a project prospectus that, among other items, describes the project and provides its estimated cost. Upon approving a project’s prospectus, Congress provides funding, either through an appropriation from the Federal Buildings Fund or by appropriating funding to an agency. GSA posts approved project prospectuses on GSA’s public website. In general, GSA develops and implements projects through a sequential process that includes the following steps:
Identification. Federal agencies submit a facility or space need to GSA; GSA prepares a feasibility analysis to determine the best way to fulfill the need, which could be through new construction, an R&A project, or a lease. Some R&A projects—limited to building system replacements—may be identified by GSA based on building age and condition, and not originate from agencies’ space needs.
Initiation. GSA assigns a project manager to define the project’s scope, develop cost and schedule estimates, and draft a project management plan (PMP). If a prospectus has not been previously submitted, GSA submits a prospectus to certain congressional committees for authorization.
Planning. GSA’s project manager updates the PMP; the project’s baseline scope, schedule, and budget are finalized.
Execution. For authorized and funded projects, GSA awards contracts for design and construction; the project’s baseline scope, schedule, and budget are revised, as needed, based on awarded contracts; GSA’s project manager monitors design and construction progress and manages changes to the project’s scope, cost, or schedule.
Close-out. GSA’s project manager completes construction close-out activities and turns the project over for tenants’ use.
GSA project managers perform key steps in the process, including overseeing contractors, monitoring and reporting on the progress of projects, managing changes to the project, and coordinating with tenant agencies. Additionally, GSA project managers are responsible for ensuring that “commissioning” is performed during the project. “Commissioning” generally requires that an independent commissioning agent oversee the construction contractor’s testing of installed building components to determine if they are performing as designed.
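As a minimal sketch, the sequential process described above can be modeled as an ordered set of phases. The phase names come from this report; the helper function and its use are purely illustrative, not part of GSA policy.

    from enum import IntEnum
    from typing import Optional

    class ProjectPhase(IntEnum):
        # Sequential phases of GSA's project process, as described above.
        IDENTIFICATION = 1  # agency submits a need; GSA prepares a feasibility analysis
        INITIATION = 2      # project manager assigned; scope, estimates, draft PMP, prospectus
        PLANNING = 3        # PMP updated; baseline scope, schedule, and budget finalized
        EXECUTION = 4       # design and construction contracts awarded and monitored
        CLOSE_OUT = 5       # close-out activities completed; space turned over to tenants

    def next_phase(current: ProjectPhase) -> Optional[ProjectPhase]:
        """Return the phase that follows the current one, or None after close-out."""
        if current == ProjectPhase.CLOSE_OUT:
            return None
        return ProjectPhase(current + 1)

    print(next_phase(ProjectPhase.PLANNING))  # ProjectPhase.EXECUTION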
GSA Obligated Over $3 Billion to Major Construction Projects Completed in the Past 5 Years; Various Federal Requirements Contributed to Costs
GSA Obligated about $3.2 Billion for Major Construction Projects
According to GSA data, GSA substantially completed 36 major construction projects in the 5-year period from fiscal year 2014 through fiscal year 2018. The total cost of those 36 projects was approximately $3.2 billion. Listed below are some characteristics of those projects.
Cost: Project costs ranged between $21 million and $343 million, with an average cost of about $89.3 million.
Schedule: Project durations ranged between about 12 months and 79 months, with an average of about 43 months.
Project Type: R&A projects made up the majority of projects (64 percent), with an average cost of about $74.2 million and an average duration of about 47 months. New construction projects accounted for 36 percent, with an average cost of about $116 million and an average duration of about 35 months. On average, R&A projects cost about $42 million less than new construction projects but took about 13 months longer to complete. See figure 1 for summary information on the cost and duration of these projects, by project type.
Location: The National Capital Region (GSA Region 11) had the most projects with nine (25 percent), and all but one of the 11 GSA Regions had at least one project.
Project Delivery Method: GSA utilized four delivery methods for 35 of the 36 projects in our 5-year time frame.
Construction Manager as Constructor, whereby GSA contracts separately with a design firm and a construction contractor. The construction contractor is involved early on to consult on the design as it is being developed; upon the design’s completion, GSA negotiates with the construction contractor on a price to undertake the construction. GSA used this method for 12 of the 36 projects (average cost of about $99.8 million).
Design-Bid-Build, whereby GSA contracts with a design firm to develop a project’s design. After the design is completed, GSA contracts separately with a construction contractor. GSA used this method for 11 of the 36 projects (average cost of about $81.3 million).
Design/Build-Bridging, whereby GSA contracts with a construction contractor to finish a partially completed design—termed a “bridging design”—begun by a separately contracted design firm. GSA used this method for 8 of the 36 projects (average cost of about $77.4 million).
Design/Build, whereby GSA contracts with a contractor to provide both design and construction services under a single contract. GSA used this method for 4 of the 36 projects (average cost of about $120.4 million).
See appendix I for more detailed information on each of the 36 projects.
GSA Identified Federal Design Requirements among Key Factors That Can Result in Higher GSA Construction Costs
According to GSA officials and GSA’s internal construction-cost study prepared for GSA by the National Institute of Building Sciences (NIBS) in March 2016, several factors can result in higher costs for GSA’s construction projects compared to other similar private sector construction projects. For example, cost models in the 2016 NIBS study indicate that R&A projects cost roughly 15 to 25 percent more than R&A projects for a comparable Class A private sector building.
Although the study was based on construction of R&A projects, both GSA and NIBS officials agreed that these same factors can contribute to similar cost premiums for GSA’s new construction projects compared to private sector projects. However, the NIBS staff who conducted the study told us that GSA’s more recent adoption of performance-based design standards, as compared to its previously prescriptive standards, likely lowers the federal construction cost premium relative to private sector projects, but some premium still exists. The performance-based design standards, for example, provide contractors greater latitude in selecting construction materials, which can have cost implications. According to GSA’s internal construction-cost study, the factors that contribute to higher estimated costs for GSA construction projects when compared to similar private sector projects primarily include design and procurement requirements specific to federal projects that private sector counterparts may not have to comply with. Those requirements are specified in GSA’s design standards, as well as federal statutes and guidelines. Table 1 provides illustrative examples of factors cited by the study and GSA officials. In addition to the factors identified in GSA’s internal construction-cost study, GSA officials said that meeting other statutory requirements, for example, the Buy American Act and the Federal Information Security Modernization Act of 2014 (FISMA), can contribute to higher costs for federal projects compared to private sector projects. GSA officials said that the cost of making information technology systems FISMA-compliant leads to federal projects costing more than private sector projects. FISMA-compliant systems, among other uses, are needed to enable the sharing of design and construction documents among GSA and contractor staff and the installation of control systems that are integral to the operation of building systems.
GSA Uses Various Tools to Monitor Construction Projects’ Information, but the Agency’s Public Reporting Provides Limited Insight into Cost and Schedule Changes
GSA Uses Three Primary Project Management Tools to Actively Monitor Construction Projects
GSA uses three principal tools—(1) project management plans (PMP), (2) peer reviews, and (3) “earned value management” (EVM)—to monitor its construction projects, including cost and schedule performance. The PMP is the overarching tool GSA and its contractors use to guide projects’ implementation. According to GSA policy, a PMP primarily defines the parameters of a project, to include scope, schedule, cost, implementation strategy, and risks, among other items. GSA policy also indicates that the PMP—which is an industry-recognized tool—is to be updated during a project’s execution and reflect notable changes affecting the project’s scope, cost, and schedule. The PMP is also to establish stakeholder roles and responsibilities, project goals, and tenant expectations. In all five of the case-study projects we reviewed, we found the associated PMPs generally:
outlined the project’s scope, cost, and schedule information;
identified GSA’s project stakeholders—such as GSA’s project manager and GSA’s contracting officer—and representatives for the tenant agencies that the project will benefit; and
identified potential risks posed to the delivery of the project.
Four of the five PMPs included a “revision history” table that demonstrated that GSA generally used and updated the PMPs over the course of the projects’ execution.
The fifth project’s PMP was developed prior to GSA’s 2012 update to its PMP standard format, which then required the use of a revision history log. More information pertaining to our case-study projects, including some information from the GSA PMPs we reviewed, can be found in appendix II. The second tool GSA utilizes to monitor its construction projects is peer reviews. GSA policy requires that external peer reviews be conducted on projects with a construction cost over $25 million. Per GSA guidance, these on-site peer reviews typically occur twice during construction—when projects are about 15 percent and 60 percent complete. External peers—typically, construction industry experts who were not involved with the project—assess whether a project is progressing as planned and identify for GSA managers and project stakeholders any issues they observe that may affect its timely completion or cost. In general, peers also assess stakeholders’ working relationships and make recommendations for improvement or identify opportunities for greater consistency in the performance of GSA’s construction program or greater efficiency among project stakeholders. We found that four of our case-study projects utilized external peer reviews during construction, as required. For example, one peer review report included the following observations: the project team showed great progress toward completing the project on time, and potentially ahead of schedule; the implementation of the recommendations made during the initial external peer review resolved potential unknowns and cost issues that would have put the project at high financial risk; the safety record was exceptional; tenants were better informed; and security issues had been streamlined, allowing the contractor to staff the project in a timely manner. Most of the GSA project managers and construction contractors we interviewed for these four case-study projects said they generally believed the external peer reviews were fair and added value. Our fifth case-study project did not utilize an external peer review because it was not required at the time GSA awarded the construction contract. The third tool GSA uses is EVM, which is an industry-recognized project management tool and is required for major federal acquisitions, such as construction projects, to help project managers monitor cost and schedule during project execution. According to the Office of Management and Budget’s (OMB) guidance and GAO’s cost-estimating guide, EVM measures the value of work accomplished in a given period and compares it with the planned value of work scheduled for that period and the actual cost of work accomplished in that period. The differences between the estimated and actual costs and schedule are used to determine, for example, whether less or more work had been completed than had been planned. By tracking these differences, EVM can provide warning signs of impending cost overruns or schedule delays and provide estimates of anticipated costs at completion.
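The EVM computations described above can be expressed compactly. The following is a minimal sketch using the standard definitions of cost and schedule variance, the associated performance indices, and one common estimate-at-completion formula; the input figures are hypothetical and are not from any GSA project.

    def evm_metrics(pv, ev, ac, bac):
        """Standard earned value management indicators.
        pv:  planned value of the work scheduled to date
        ev:  earned value of the work actually accomplished to date
        ac:  actual cost of the work accomplished to date
        bac: budget at completion (total planned cost)
        """
        cv = ev - ac     # cost variance: negative warns of a cost overrun
        sv = ev - pv     # schedule variance: negative warns of a delay
        cpi = ev / ac    # cost performance index (below 1.0 means over cost)
        spi = ev / pv    # schedule performance index (below 1.0 means behind schedule)
        eac = bac / cpi  # a common estimate of the anticipated cost at completion
        return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

    # Hypothetical project: $100 million budget; $50 million of work planned
    # to date, $45 million of work performed, at an actual cost of $48 million.
    for name, value in evm_metrics(pv=50e6, ev=45e6, ac=48e6, bac=100e6).items():
        print(name, round(value, 3))

In this hypothetical case, the negative variances and sub-1.0 indices are precisely the kind of early warning signs of impending cost overruns and schedule delays that the report describes.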
Consistent with our previous findings related to GSA’s use of EVM, we found that GSA continues to use EVM to assess its construction project delivery performance on two dimensions—on schedule and on budget:
On schedule: GSA considers a construction project to be on schedule if its construction duration is within 10 percent of the planned duration, from the construction start date to the substantial completion date (i.e., GSA considers a project to be substantially complete on the date the project space is suitable for tenant occupancy; however, the project’s cost could change prior to the actual contract close-out).
On budget: GSA considers a construction project to be on budget if its actual cost is within the planned construction cost (as measured by the construction contract’s value at award or the contract value as adjusted based on post-award contract modifications) and the additional 7 to 10 percent construction contingency. According to GSA guidance, a project’s construction contingency is intended to cover unforeseen conditions and design deficiencies; it does not apply to additional scope.
According to GSA officials, GSA’s central office uses EVM to conduct monthly performance reviews of GSA’s major construction projects. At these reviews, GSA’s central office considers certain proposed project changes forwarded for approval by GSA regional offices. We have previously reported that federal construction projects typically involve some degree of change as the project progresses and that contract changes, made through contract modifications, can occur for a variety of reasons, including design errors and unforeseen site conditions. In addition, GSA officials said that funding delays, tenant-caused delays, and site acquisition issues can also be factors that cause project delays. According to GSA guidance, while GSA regional offices have some latitude to make contract changes, the regional offices and their project managers must get central office approval if a proposed change is anticipated to exceed the approved contract cost, construction contingency, or schedule contingency. If such a change is approved, GSA will then revise—commonly referred to as “rebaseline”—either the construction contract cost, the planned schedule duration, or both. GSA will then use that new value to measure and report on the project’s budget and schedule performance. According to GSA officials and summary data on its rebaselining decisions, the majority of GSA’s major construction projects within our 5-year scope were rebaselined, within its policy, to account for changes to projects’ costs and schedules. Specifically, GSA officials told us they rebaselined 25 of the 36 projects (about 70 percent). Of those projects, 18 (50 percent of the 36) were driven, at least in part, by tenant-requested changes, which GSA officials said were the most prevalent reasons for rebaselining a project. According to GSA policy, if a tenant agency requests a project change that falls outside the original scope, the project manager is to ensure that the tenant agency provides all the associated design-related requirements and funding necessary to perform this additional scope. For example, for one of our case-study projects, the tenant provided $17.7 million in additional funding as part of the final phase of its headquarters building’s multi-year modernization.
The tenant’s funds paid for, among other things, the tenant-requested change to convert part of the multi-story library into offices to increase the building’s space efficiency and allow more staff to move into the building. Based on our review of GSA’s internal data, we found that four of our five case-study projects were rebaselined; GSA rebaselined the cost of two projects, the schedule of one project, and both the cost and schedule of one project. For example, concerning costs, GSA rebaselined one project to account for a $2.7 million increase to the contract—initially awarded for $21.8 million—upon realizing that the tenant’s plan to increase the number of occupants in the building required that another stairwell be added for fire safety purposes. With regard to schedule, GSA rebaselined one project, as previously discussed, to address a tenant-requested change to convert parts of the library into offices; this change extended the schedule by about 1 year. Given GSA’s methodology that allows for rebaselining and GSA’s cost and schedule contingencies, GSA’s EVM performance data showed that all five case-study projects were completed on budget and on schedule, if not early. See appendix II for a summary of the cost and schedule performance of our five case-study projects.
GSA’s Public Reporting on Project Performance Has Improved but Final Cost and Schedule Information Could Be More Transparent
Federal agencies should report pertinent and reliable information to the Congress, so that Congress can adequately assess agencies’ progress in meeting established performance goals, ensure accountability for results, and understand how individual programs and activities fit within a broader portfolio of federal efforts to aid in federal funding decisions. GSA has publicly reported high-level information on its construction project performance in its Annual Performance Reports, which GSA provides to Congress and publishes on GSA’s website. For example, GSA’s fiscal year 2014 through 2018 Annual Performance Reports show that GSA met or exceeded its stated performance targets for project delivery (see fig. 2). Over this period (fiscal year 2014 through 2018), GSA took steps to improve the content and usefulness of its annual reports. For example, in fiscal year 2014, GSA included R&A projects in its performance measure to fully encompass all GSA capital construction projects. Prior to fiscal year 2014, GSA’s performance measure was calculated solely on the performance of GSA’s new construction projects. Also, starting in fiscal year 2017, GSA included additional summary-level information in its reports that identified the total number of projects and total contract value of both completed and ongoing projects that fiscal year. In fiscal year 2018, as shown in figure 2, GSA again revised its performance measure to reflect both the budget and schedule performance of projects. Prior to fiscal year 2018, GSA’s performance measure reflected only projects’ schedule performance. Further, in its fiscal year 2018 report, GSA listed the specific costs of the seven largest projects completed on schedule and on budget of the 24 projects completed that year. While GSA has taken some actions to improve the usefulness of its external reporting, neither GSA’s Annual Performance Reports nor its public prospectus website provides information on the extent to which projects have been rebaselined or the final costs of projects.
Standards for Internal Control in the Federal Government states that agencies should provide necessary quality information to external stakeholders so that the external parties can help the agency achieve its mission and address related risks. As noted above, GSA regularly rebaselines projects, within policy, to account for changes to projects that affect construction contract costs and schedules due to a variety of reasons. GSA officials told us that they manage total project costs to be within the original prospectus estimate provided to Congress adjusted, as applicable, by funds it receives for tenant-requested changes; the officials do not believe that it is critical to report final costs or whether projects have been rebaselined. However, we have found that simply measuring and reporting performance based on the most recent baseline may obscure how projects have performed over their entire construction time frame. Being more transparent about which projects or how many projects were rebaselined, as well as reporting cost and schedule growth from original baselines, can provide stakeholders with a more accurate view of project performance and enhance accountability. Reporting such cost information, for example, would allow GSA to communicate to Congress actual construction costs at a project’s completion that may differ from the estimated costs in the prospectus approved by Congress at the project’s initiation, which likely did not account for items to be funded by tenants. Without that information, it is not possible for Congress to know how projects performed against approved estimated costs and whether final project costs are consistently above, below, or in line with estimated costs. Having this information could benefit Congress in its oversight role and in making future funding decisions.
GSA Assesses Whether Projects Have Met Requirements, but Does Not Fully Capture or Share Lessons Learned
GSA Uses Commissioning to Test Building Systems, but Its Guidance Is Outdated
Key Challenges Identified during Commissioning of Case Study Projects
Issues with State-of-the-Art Building Systems
State-of-the-art building systems and the automation systems that monitor and control them were not optimally operating for at least two of our case-study projects at substantial completion. For example, stakeholders for one project reported that it was very challenging to get all the integrated systems to work properly, in part, because the design was very technologically advanced. One GSA official said the biggest challenge was coordinating the operations sequence of the various building systems to function as the design team intended. As such, it took well over a year after the building was completed to resolve these issues.
Limited Capabilities of Building Contractors to Maintain Complex Systems
In three of the five case-study projects, stakeholders said maintenance service contractors were either not prepared to assume, or had not yet been contracted to provide, the higher technical maintenance and operation responsibilities for all the building systems. For example, one construction contractor said there seemed to be a knowledge gap between the technical capabilities needed to effectively manage the more advanced building systems and the skills possessed by the existing maintenance contractor. A GSA official said that GSA plans to solicit a new contract for the building’s maintenance.
agencies, and others.
The Guide identifies its primary audience to be GSA’s project managers, their construction management agents who help GSA manage the project, and the commissioning agent who oversees the commissioning process. The Guide’s secondary audience includes the many other stakeholders in the commissioning process, including tenant agencies. According to the Guide, the commissioning process is intended to assist in preparing maintenance personnel to operate and maintain any newly installed building systems. We found that GSA conducted commissioning largely in alignment with the Guide on our five case-study projects, based on our review of project documentation and interviews with GSA’s project managers, facilities managers, and contractors. Further, we identified two key challenges in regard to state-of-the-art building systems’ and building contractors’ capabilities. See the sidebar for additional information on the two challenges. While GSA generally conducted commissioning according to its Guide on the five case-study projects we reviewed, we found that the 2005 Guide is outdated. For example, the Guide references dated industry practices and some outdated external guidance, both of which were in existence at the time the Guide was developed. Specifically, it references the 2003 Leadership in Energy and Environmental Design (LEED) Green Building Rating System, Version 2.1; however, the LEED rating system for projects since 2016 was Version 4.0, and Version 4.1 was recently issued in 2019. We also found disconnects between the 2005 Guide and GSA’s current design standards or industry practices. For example:
While the Guide states that GSA buildings should be LEED certified and strive for a Silver certification, GSA now requires buildings to achieve a higher certification, LEED Gold.
The Guide states that GSA “strongly recommends” that GSA regions—and agencies to which GSA has delegated the operations of federal buildings—recommission buildings every 3 to 5 years. The current LEED standards call for “periodic commissioning requirements, ongoing commissioning tasks, and continuous tasks for critical facilities.” In general, over the past decade, federal statutes, guidance, executive orders, and changes to industry building certifications have moved the federal government and the industry toward more real-time, continuous monitoring and commissioning in cases where advanced building-automation systems, energy information-management systems, and advanced meters (e.g., electrical, water, gas, temperature, and light meters) have been installed. The continuous data provided by these systems can help building owners make real-time adjustments to optimize building operations. However, the Guide does not mention continuous monitoring-based commissioning as a possible option to, or in addition to, recommissioning buildings.
Standards for Internal Control in the Federal Government states that management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives or addressing related risks. Those standards also indicate that if there is a significant change in an entity’s process, management should review this process in a timely manner after the change to determine that the control activities are designed and implemented appropriately. Without updated guidance, GSA’s commissioning activities may be limited in their effectiveness in assuring building systems are operating optimally.
Two of the five GSA contractors we interviewed expressed frustration that the commissioning process on their projects did not run smoothly. GSA’s external peer reviews for those same two projects also found that the roles of the various stakeholders in the commissioning process were not clear. In addition, three stakeholders on one of those projects said that some stakeholders—especially GSA’s contracted design team—were not fully involved during part of the building’s commissioning. In light of our review, GSA is planning to evaluate its commissioning guidance to determine an appropriate update. GSA officials stated that this update may result in revising the existing commissioning guide or replacing it with industry-recognized guidance. However, GSA is still in the process of identifying the scope of the update, including a timeline and resources required to do so.
GSA Intermittently Conducts POEs but Lacks Established Policies and Procedures and a Formal Mechanism for Sharing Lessons Learned
According to OMB guidance, Post Occupancy Evaluations (POE) are tools to evaluate the effectiveness of an agency’s overall capital acquisition process. The primary objectives of a POE include (1) identifying how accurately a project meets its objectives, expected benefits, and strategic goals of the agency and (2) ensuring the continual improvement of an agency’s capital-programming process based on lessons learned. The guidance also states that agencies should have a documented methodology for conducting POEs to ensure that each asset is evaluated consistently. The guidance identifies 17 factors to be considered for evaluation in conducting a POE, such as a project’s performance, compliance with design standards, maintenance issues and building workforce competencies, use of advanced building technologies, tenant satisfaction, and cost savings. The guidance also notes that a POE should generally be conducted 12 months after the project has been occupied to allow time for the tenant to evaluate the building’s performance and the delivery of the project. However, the guidance allows agencies some flexibility in the timing of a POE to meet their unique needs if 12 months is not the optimal timing to conduct the evaluation. We found GSA did not conduct any POEs on its completed major construction projects in the 4-year period from 2014 to 2017, as called for by OMB guidance. In fiscal year 2018, GSA contracted with the National Institute of Building Sciences (NIBS) to conduct six POEs, and it contracted for seven additional POEs in fiscal year 2019. GSA officials told us that while they understand the value POEs can provide, they are only able to conduct them when funding is available. They explained that POEs are funded through general program funding (not project funding based on the approved prospectus) within GSA’s Office of Facilities Management, and the available resources to conduct such efforts are limited given other GSA portfolio-wide maintenance and operations priorities. GSA acknowledged that it did not have a specific policy for conducting POEs or selecting completed projects for POEs. Instead, GSA officials said when selecting which buildings should undergo a POE, they ensure there is a representation of different building types (i.e., federal buildings, U.S. courthouses, and land ports of entry) and a mix of new construction and R&A projects. Because GSA does not have a policy for POEs, NIBS developed a general methodology, which it used for conducting each of those POEs.
While GSA tries to ensure there is a mix of projects represented when selecting POEs, it is not clear that its selection factors help ensure GSA makes the best use of its limited resources. To balance OMB’s guidance to agencies that POEs should be conducted on agencies’ completed capital-construction projects, and given its resource constraints, GSA could benefit from a more strategic approach to selecting the projects for POEs. For example, GSA could use a risk-based approach to select projects for POEs (e.g., more expensive projects or those that include the integration of advanced, state-of-the-art building systems) to help improve the design and construction of future projects; a minimal sketch of such an approach follows this discussion. Such an approach is consistent with the Standards for Internal Control in the Federal Government, which states that management should design control activities to achieve objectives and respond to risks and implement those control activities through policies. Control activities could include establishing criteria for selecting projects for POEs and formalizing those criteria through policy. GSA officials also noted that GSA has conducted multi-building studies—which share some similarities with individual building POEs—that GSA officials broadly consider to be POEs. However, while the studies assessed some of the factors described in OMB guidance (e.g., project performance, maintenance, or advanced technology use), none of them comprehensively reviewed the 36 projects in our 5-year time frame. Accordingly, while these broader studies can provide some useful information to GSA, they are limited in their ability to provide GSA with timely information that meets the POE goal stated in OMB’s guidance: “to evaluate the overall effectiveness of the agency’s capital planning and acquisition process” and to “solicit customer feedback and incorporate that feedback into improvements to the performance and delivery of the capital investment process.” OMB guidance states that agencies should establish mechanisms to use lessons learned from POEs to minimize risks of repeating past mistakes on future projects. Along these lines, NIBS produced a summary report for GSA of the six 2018 POEs it conducted; the report identified design, construction, commissioning, and operational maintenance issues and lessons learned. From these lessons learned, NIBS also offered some recommendations to GSA. For example, NIBS said that GSA should establish a POE review committee to examine GSA’s building designs to highlight and offer solutions to previously identified problems in other buildings and develop and distribute a checklist describing the identified problems to teams that are responsible for designing new buildings. GSA developed an operational guide to synopsize the lessons learned from the NIBS report and expects that future building projects will benefit through its efforts to incorporate these lessons in the design of future projects. Further, NIBS reported that improvements to future projects in response to the issues identified in the six 2018 POE projects would result in reductions to GSA’s future operational costs. However, it is unclear whether these issues and lessons learned are unique to the 2018 POE projects reviewed by NIBS or may be occurring across more of GSA’s construction projects. According to NIBS officials, they have observed some recurring project issues among the six POEs conducted in fiscal year 2018 and two of the seven conducted in fiscal year 2019.
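A minimal sketch of the kind of risk-based selection suggested above follows. The scoring weights, thresholds, and candidate projects are hypothetical illustrations, not GSA criteria.

    def poe_priority_score(cost_millions, has_advanced_systems, years_occupied):
        """Hypothetical risk score for prioritizing completed projects for POEs.
        All weights are illustrative only."""
        score = min(cost_millions / 100.0, 3.0)  # costlier projects score higher, capped
        if has_advanced_systems:
            score += 2.0                         # state-of-the-art systems add risk
        if years_occupied >= 1.0:
            score += 1.0                         # OMB suggests roughly 12 months of occupancy
        return score

    candidates = [
        ("Courthouse renovation", 210.0, True, 1.5),
        ("Warehouse repair", 30.0, False, 2.0),
        ("Land port of entry", 95.0, True, 1.0),
    ]

    # Rank candidates by descending score and select the top two for POEs.
    ranked = sorted(candidates, key=lambda c: poe_priority_score(*c[1:]), reverse=True)
    print([name for name, *_ in ranked[:2]])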
GSA officials said that they plan to incorporate lessons learned from these POEs into GSA's design standards by the end of 2019 and expect to later update these design standards based on future POEs. According to GSA officials, they made NIBS's individual POE reports and the 2018 POE summary report available to their project managers through a shared folder on GSA's internal intranet site, which can be accessed by over 120 staff. In addition, one GSA project manager told us that GSA periodically holds knowledge-sharing webinars with its project managers where lessons learned from specific projects may be presented. This official indicated that the knowledge-sharing presentations rely heavily on photos and that there is no prescribed format or content requirements. Accordingly, the presentations are an informal way for project teams to share project knowledge across GSA's regions. Further, this official said the lessons-learned presentations from those webinars are also posted for a period of time on GSA's internal website. However, communicating information via such means provides ad hoc benefits to only the select individuals who know about the availability of the reports or webinars and choose to access them. This approach may not effectively expand the broader knowledge base of the organization or best position GSA to, as OMB guidance indicates, ensure continual improvement of an agency's capital-programming process based on lessons learned. Standards for Internal Control in the Federal Government also indicate that management should communicate necessary quality information to all relevant internal stakeholders to achieve the entity's objectives. Without a sustained effort to consistently conduct POEs on its completed projects, GSA may miss opportunities to gather valuable tenant feedback and to identify marked successes or notable problems, including any issues that are recurring. Such information could inform future improvements to GSA's major construction projects and increase tenant satisfaction. Further, such information may also help identify the need to change or update some of GSA's policies, standards, guidance, or practices, such as those recommended by NIBS or other project stakeholders. However, even if GSA undertakes a more systematic approach to conducting POEs, the benefits of doing so can only fully materialize if GSA takes steps to effectively communicate POE lessons learned to all staff who may be at risk of repeating previously identified project mistakes. Conclusions GSA annually spends hundreds of millions of dollars on major construction projects to provide tenant agencies with new buildings and modernized spaces that help support agencies' missions and enable the effective delivery of government services. GSA has improved its public reporting on major construction projects to depict project schedule and budget performance over time. However, GSA's public reporting does not include information about the extent to which projects' schedules or costs were rebaselined, or on projects' final costs, which may differ from GSA's estimates in the initial prospectuses approved by Congress. Providing the additional information on projects' schedule and cost rebaselining, and projects' final costs, could further benefit Congress in its oversight role and improve public knowledge about the full costs of major federal construction projects.
In addition, given the significant fiscal exposure for the government to maintain these buildings for the long term, having updated guidance on commissioning would enable GSA to better ensure that completed projects are meeting GSA's design standards. Finally, given resource constraints, identifying and communicating information about when and how POEs are to be conducted could help GSA maximize opportunities to capture lessons learned from completed projects. Knowledge gained from POEs could also ensure tenant agencies are satisfied with completed projects and improve the design and construction of major projects in the future. Recommendations for Executive Action We are making the following three recommendations to GSA: The Administrator of the GSA should report to Congress and the public—for example, on GSA's prospectus website—the extent to which completed projects' construction costs and schedules were rebaselined, as well as projects' final construction costs, including any additional funding tenant agencies may have provided to GSA for changes. (Recommendation 1) The Administrator of the GSA should update its 2005 Commissioning Guide—or replace it with appropriate industry-recognized standards and guidance—to be consistent with GSA's current design standards and industry practices. (Recommendation 2) The Administrator of the GSA should identify and communicate—such as through policy, guidance, or another appropriate mechanism—(a) when and how Post Occupancy Evaluations should be conducted for completed projects considering resource constraints and (b) how recommendations or lessons learned from those evaluations are effectively communicated to future project teams. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to GSA for review and comment. In written comments, reproduced in appendix III, GSA stated that it partially concurred with recommendation 1 and concurred with recommendations 2 and 3, and provided related comments. In response to recommendation 1, GSA agreed to publish key information that would be helpful, such as GSA's total construction costs at project completion. However, GSA said it would be misleading to publish information on additional funds provided to GSA by tenant agencies—funds that lead to contract changes and rebaselining—because these funds come from different appropriations. GSA believes this would not accurately reflect how GSA managed its original budget and schedule. However, we believe that reporting total project costs in a way that clearly identifies both GSA and tenant agency costs is possible, and would not be misleading. We continue to believe that such additional transparency in reporting can benefit Congress in its oversight role and improve public knowledge about the full costs of major federal construction projects. Regarding recommendation 2, with which GSA concurred, the agency noted that it has other commissioning documents and processes outside of its Building Commissioning Guide (Guide) that it uses to ensure building systems are operating optimally. We believe GSA's use of other documents and processes is a good practice in light of the outdated nature of its current Guide, which serves as a key document in its commissioning process. Nevertheless, we continue to believe that it is important for GSA to update its outdated Guide, or replace it with appropriate industry-recognized standards and guidance, to be consistent with GSA's current design standards and industry practices, as we recommended.
Finally, regarding recommendation 3, after a discussion with GSA officials during the comment period, we modified the wording of the recommendation to recognize the range of administrative tools (e.g., policy, guidance, or other appropriate mechanism) that GSA could use to identify when and how Post Occupancy Evaluations (POEs) should be conducted and how lessons learned from those evaluations are communicated. As we noted in the report, under its current process, GSA selects the number of facilities to evaluate as its annual budget allows, based on several selection factors. We continue to believe that GSA could benefit from a more formalized and strategic approach to identifying and communicating when and how POEs should be conducted to make the best use of its limited resources. GSA also mentioned its Design Guide for Operational Excellence as a tool to communicate lessons learned from POEs. We agree that such a guide is a good example of how POEs can be used to inform the design of future projects. However, because the guide was based on a limited number of POEs from 2018, we believe that there is more GSA can do to maximize opportunities to communicate lessons learned to future project teams. The draft report had included a fourth recommendation for the Administrator of the GSA to improve the transparency of what is being measured and reported in GSA's Annual Performance Reports, including noting any key limitations, such as comparing results from year to year if the measure changed. While GSA was reviewing the draft, the agency provided clarifications on the structure and content of its annual reports that mitigated our concerns about the transparency of the information being presented. As a result, we made changes to the body of the report and removed that recommendation from our final report. GSA also provided technical and clarifying comments, which we incorporated, where appropriate. We are sending copies of this report to the appropriate congressional committees and the Administrator of the General Services Administration. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: General Services Administration's (GSA) Completed Major Construction Projects, Fiscal Years 2014 to 2018 Appendix II: Case Study Snapshots This appendix contains information on five General Services Administration (GSA) case-study projects that we included in our review. We judgmentally selected these five major construction projects that were substantially completed between fiscal years 2014 through 2018, representing diversity in project type, geographic area, building type, and range in cost and scope. Although not generalizable to all GSA major construction projects, information gathered from our case studies provides illustrative examples of GSA's monitoring and construction efforts. For each case study, GSA provided us with extensive project documentation.
We reviewed this documentation to obtain key information, such as contract award amounts and modifications that resulted in changes to the project's original budget or schedule. The contract modifications we discuss for each project are examples of modifications that added cost or credit to the final contract value or that changed the delivery schedule; however, these modifications do not necessarily include all the modifications to the construction contract. In addition, we interviewed relevant stakeholders, such as GSA project managers, contractors, and facility managers who were involved with the projects. All information in the case study narratives is attributable to GSA based on our review of project documentation and interviews with GSA project officials and stakeholders. Charles F. Prevedel Federal Building Background The Charles F. Prevedel building was constructed in 1990. For fiscal year 2014, GSA proposed alterations and renovations to the building's interior and upgrades to the building's systems such that the Veterans' Benefits Administration could consolidate into the building. The building was nearly two-thirds vacant at the time, as two federal tenants had moved out of the building. The Veterans' Benefits Administration had been dispersed in both a nearby federal building and leased space. GSA estimated that the Veterans' Benefits Administration's move into the consolidated space would save $3.3 million annually in lease costs. Project Scope The building is five stories above grade and two stories below grade. The project scope included renovating the building's central atrium; reconfiguring and increasing the building's useable space; replacing obsolete heating, ventilation, and air-conditioning (HVAC) systems; and installing an energy-management control system to automate the HVAC and lighting systems and reduce energy consumption. The HVAC upgrades also included replacing and relocating the outdoor air intakes on the roof in order to meet current security requirements. Minor seismic upgrades were also implemented. Contract Cost or Schedule Changes The design/build-bridging construction contract was awarded in January 2015 for $21.8 million. The construction contract cost was rebaselined to $25.4 million, in part, to provide an additional stairwell to meet life-safety egress requirements as required by GSA's design guide. GSA reported that the change required GSA's Public Buildings Commissioner to approve an overall project budget escalation of $2.7 million in June 2015. GSA reported the final construction cost was $25 million (roughly a 14.5 percent increase from the initial construction contract award). Construction of the repair and alteration project started in May 2015 and was substantially completed a year and a half later in November 2016, approximately 2 months earlier than originally projected. Figure 3 shows before and after views of the building's main lobby and newly installed stairwell. Figure 4 shows views of meeting and training room spaces renovated during the project. Margaret Chase-Smith Federal Building and Courthouse Location (GSA Region): Bangor, Maine (Region 1) Original Construction Completion Year: 1967 Project Type: Repair and Alteration Project Delivery Method: Construction Manager as Constructor (CMc) Background The 3-story Margaret Chase-Smith Federal Building and Courthouse was built in 1967 and had not had a major renovation since its construction. The project was funded by the American Recovery and Reinvestment Act of 2009 (Recovery Act).
GSA proposed the project to recapture the vacant space in the building, which had increased to approximately 33 percent, in part after the U.S. Postal Service vacated. The proposed project would renovate and provide alterations to the building that would expand space for its existing tenants—including the U.S. Courts and the Social Security Administration, among others—and provide space for new tenant agencies. Project Scope GSA officials reported that in order to get the project started quickly using Recovery Act funds, GSA made the decision to deliver the project under the Construction Manager as Constructor (CMc) delivery method. Under the CMc method, the contractor was brought in to advise on the design as it was being completed. In addition to space renovations and alterations, the project repaired and replaced HVAC systems, improved energy efficiency, and provided exterior structural improvements, including the replacement of windows. New secure elevators were also added to improve court safety. Other components of the project included repairs and replacements of electrical systems, hazardous materials mitigation, elevator improvements, upgrades to the fire protection system, installing sprinklers, and correcting code deficiencies, including bringing the building into compliance with accessibility standards. Contract Cost or Schedule Changes The CMc construction contract was initially awarded in March 2010 for $33.9 million. In September 2010 (6 months later), two contract modifications totaling roughly $4.6 million were issued to increase the contract price to reflect changes made in completing the design. GSA and the contractor reported that the baseline construction contract—after the design was completed—was $38.5 million. While GSA had provided some funding allowances within the initial construction contract to address some project requirements that were not yet fully designed—such as the building's entry pavilion—another $1.9 million contract modification was issued in March 2011 (a year after the initial contract award), in part, to increase the funding allowances for the front entry pavilion and to provide additional glass that was to be installed in the lobby area. The entry pavilion was added to improve the security screening process and adhere to the U.S. Marshals Service and U.S. Courts screening station requirements. That $1.9 million cost modification also addressed increased requirements associated with the geothermal heating system and below-grade wells. Also, the contract costs increased, in part, due to tenant-requested changes. For example, an $802,000 contract modification was issued, in part, for requested millwork (e.g., judge's bench and cabinet work) and the Court's audiovisual equipment, telecommunications, and data-related requirements. GSA reported the final construction cost was approximately $41.3 million (about a 7.5 percent increase above the $38.5 million baseline). Construction of the repair and alteration project started in October 2010 and was substantially completed approximately one month early in November 2013. Figure 5 shows the exterior of the building including its new entry pavilion. Figure 6 shows an exterior side view of the new entry pavilion and an interior view of the lobby.
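The cost-growth percentages cited in these case-study narratives follow a straightforward calculation against each project's baseline. The sketch below is purely illustrative (it is not a GSA tool); it simply reproduces the arithmetic behind the figures reported above, which the narratives round.

```python
def percent_increase(baseline_millions: float, final_millions: float) -> float:
    """Return the percentage change of a final contract cost over its baseline."""
    return (final_millions - baseline_millions) / baseline_millions * 100

# Figures drawn from the case-study narratives above (in millions of dollars).
# Charles F. Prevedel Federal Building: $21.8M initial award, $25M final cost.
print(f"Prevedel: {percent_increase(21.8, 25.0):.1f}%")    # ~14.7%, reported as roughly 14.5 percent
# Margaret Chase-Smith: $38.5M baseline, $41.3M final cost.
print(f"Chase-Smith: {percent_increase(38.5, 41.3):.1f}%")  # ~7.3%, reported as about 7.5 percent
```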
Social Security Administration, National Support Center Background As part of the Recovery Act, the Social Security Administration received an appropriation to construct a new National Support Center to replace an older data center whose systems were approaching the end of their useful lives. The new National Support Center provides a state-of-the-art data center, added reliability, and the ability to expand to meet future needs. For example, the data center's flexible, scalable design allows for a smooth transition to future information technology upgrades and new, emerging technology. Project Scope The new 300,000-gross-square-foot data center complex, built on a 63-acre site, includes the data center, warehouse, and office building; the facility was built to accommodate 200 employees. The constructed facility—supporting operations 24 hours a day, 7 days a week—is Leadership in Energy and Environmental Design (LEED) Gold Certified, even though data centers traditionally rank among the largest power users in modern facilities. Contract Cost or Schedule Changes GSA's estimated construction cost for the project was adjusted down in August 2012 from $334 million to $262 million. GSA awarded the design-build construction contract in January 2012 for $191.6 million. The project's construction contract cost was later rebaselined to $207.4 million due in part to the Social Security Administration requesting that GSA have the contractor provide operations and maintenance transition services for 6 months. That contract change was made in March 2014—approximately 4 months before substantial completion—for roughly $2.1 million. GSA reported to us that the final construction cost was $208.1 million (roughly an 8.5 percent increase from the base contract award). Because the construction cost was well below GSA's original construction estimate of $334 million, GSA reported to us the remaining project funds were returned to the Social Security Administration in accordance with the Recovery Act appropriation. GSA issued a notice to proceed (i.e., contract start date) to the design-build contractor in January 2012, and the project was substantially completed on schedule roughly two and a half years later in July 2014. Figure 7 shows an exterior view of the main entrance to the data center. Figure 8 shows an interior view of the data center's server space prior to occupancy. Figure 9 shows an exterior view of the on-site solar panel array with the data center in the background. Stewart Lee Udall Building, Department of the Interior Background The Department of the Interior (Interior) headquarters building—occupying two city blocks—was initially completed in 1936; upgrades to the building's systems were required to extend the useful life of the building, support Interior's operations, and meet current building codes and standards. In 2000, GSA began the construction of its multi-year, six-phase modernization plan, where each of the building's six wings was to be modernized during one of the six phases. Project Scope Phase 6 (Wing 1)—the final phase of the building's modernization—included upgrading the mechanical and electrical systems, replacing the lights and ceiling systems, installing fire safety upgrades and emergency egress stairs, upgrading restrooms, improving accessibility, and restoring historic spaces, including the auditorium, library, and the Undersecretary's and Secretary's suites.
Contract Cost or Schedule Changes In 2001, GSA originally negotiated with the contractor the costs to execute Phase 6, which was structured as a contract option. The option could be exercised at GSA's discretion upon receiving funding but allowed for future economic price escalation for inflation. The contract price in 2001 for the Phase 6 scope was approximately $19.3 million. Because appropriated funding was not received until fiscal year 2014, that earlier contract pricing was contractually updated by GSA in 2014 to roughly $38 million; however, that figure included roughly $4.5 million in additional scope that GSA added into the project. The additional scope included, among other items, that the Phase 6 space was to be certified under the Leadership in Energy and Environmental Design criteria and that lessons learned from the earlier completed phases—implemented over nearly 15 years—would be incorporated into the Phase 6 project. Additionally, Interior asked GSA to convert parts of the library into office space to increase the building's space efficiency and allow Interior to move more personnel into the building. That contract change, for about $6.2 million, was made in May 2016 and also resulted in the schedule being rebaselined, adding about one year to the project's duration. GSA reported that Interior provided $17.7 million in additional funding, inclusive of the costs for converting the library space. GSA reported that the construction contract cost for Phase 6 was $51.7 million (about a 36 percent increase above the 2014 adjusted base contract cost of $38 million). Phase 6's construction started in May 2014 and was completed approximately 3 years later in June 2017. Figure 10 shows the exterior of the Department of the Interior headquarters building with its six wings. Figure 11 shows an interior view of historic spaces that were restored during Phase 6. United States Courthouse for the Southern District of Alabama Background The primary driver for the project was to address the long-term housing needs of the United States Courts and related agencies. The District Court required additional space that the adjacent existing John A. Campbell Courthouse could not provide, and GSA determined that a new courthouse was necessary to accommodate the Courts' projected 10- to 30-year space needs. The Campbell Courthouse renovation followed the new courthouse construction to allow for the relocation of the Bankruptcy and Probation Courts from leased space, and to allow for the full Court family to be co-located between the two adjacent buildings. Project Scope The new courthouse building, adjacent to the existing Campbell Courthouse, was designed to provide 155,600 gross square feet of space, including parking. The building houses six courtrooms, nine judges' chambers, the United States Marshals Service, 38 below-grade parking spaces, and the capability to expand and accommodate eight additional courtrooms in the future. Contract Cost or Schedule Changes In fiscal year 2010, the new construction project received partial funding of $50 million in an appropriation for construction. However, the project was not awarded at that time. The U.S. Courts and GSA had to revisit the long-term space needs of the U.S. Courts, which they later did as part of GSA's 2013 feasibility study.
In fiscal year 2014, an additional $69.5 million was appropriated for a new approach that would involve repairs and alterations to the existing Campbell Courthouse, as well as the construction of a new federal courthouse (which was to be smaller than originally designed), adjacent to the Campbell Courthouse. GSA's fiscal year 2014 documentation for the new courthouse project estimated the total design cost at $8.5 million and the total construction cost at $71.1 million, which excluded any prior funding spent on site acquisition costs and the project's earlier design. In April 2015, GSA awarded a single design-build contract for both the design and construction of the new courthouse and for the repairs and alteration of the existing Campbell Courthouse. GSA baselined the construction cost for the new courthouse—exclusive of the costs for the Campbell Courthouse alterations—at $70 million. GSA data showed that the final construction cost for the new courthouse was $72.6 million (an increase of about 4 percent over the baseline cost of $70 million, and roughly 9 percent less than the $79.6 million total estimated costs for both the design and construction). Construction started in spring 2016 and was completed in just over 2 years, in June 2018. The schedule was rebaselined by roughly a month to account for severe weather delays during construction. Figure 12 shows the exterior of the new U.S. Courthouse and two interior spaces. Appendix III: Comments from the General Services Administration Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Mike Armes (Assistant Director); Catherine Kim (Analyst-in-Charge); John Bauckman; Delwen Jones; Timothy Kinoshita; Ying Long; Malika Rice; Rachel Stoiko; and Crystal Wesco made key contributions to this report.
Why GAO Did This Study As the federal government's landlord, GSA spends hundreds of millions of dollars to construct or modernize federal buildings. By delivering these major construction projects, GSA supports tenant agencies' missions and facilitates the delivery of government services. GAO was asked to review GSA's major construction projects. This report: (1) identifies costs of these projects in the last 5 years and factors that contribute to those costs; (2) examines how GSA monitors and publicly communicates cost and schedule information; and (3) assesses GSA's efforts to confirm that projects meet GSA's requirements and that tenants are satisfied with completed projects. GAO analyzed GSA's performance data from fiscal years 2014 to 2018 for 36 projects with a minimum cost each of $20 million (i.e., a major construction project); selected five case-study projects representing diversity in project type, geographic area, building type, and range in cost and scope; reviewed applicable GSA policies, procedures, guidance, and reports; and interviewed GSA officials and project stakeholders. What GAO Found In fiscal years 2014 through 2018, the General Services Administration (GSA) completed 36 major construction projects—projects with a minimum cost of $20 million to construct new buildings or modernize existing buildings—with a total cost of $3.2 billion. According to a GSA consultant, factors specific to federal construction projects may result in GSA's projects costing roughly 15 to 25 percent more than comparable private sector projects. For example, GSA uses more durable but more expensive materials to achieve a longer building service life compared to private owners, who may plan for a shorter service life. GSA's Annual Performance Reports to Congress do not indicate how much GSA “rebaselined” projects' schedules and costs. Rebaselining reestablishes the point at which GSA measures on-schedule and on-budget performance. In accordance with agency policy, GSA rebaselined 25 of the 36 projects GAO reviewed to account for issues such as design changes and tenant-funded requests. For example, GSA rebaselined one of its modernization projects for a $2.7 million increase to the construction contract initially awarded for $21.8 million. The increase resulted from a design change to add a stairwell for fire safety purposes to accommodate the tenant's plan to increase the building's occupants (see figure). After GSA rebaselines a project, costs may differ from the project estimates approved by Congress. Because GSA does not report the extent to which it has rebaselined projects or projects' final costs, Congress lacks information about GSA's performance, such as whether final costs are consistently above, below, or meeting estimated costs. Reporting such information could benefit Congress' ability to carry out its oversight role and improve transparency about the full costs of major federal construction projects. GSA assesses whether projects meet requirements and tenants' needs but does not fully capture or share lessons learned. For example, GSA uses “commissioning”—testing installed building systems—to validate that the buildings' systems function as designed. However, because GSA's 2005 commissioning guide references outdated guidance, the effectiveness of its activities may be limited in assuring buildings are operating optimally. GSA also uses post occupancy evaluations (POE) to assess projects' performance and tenants' satisfaction.
However, in the last 5 years, GSA has not regularly conducted POEs, due in part to resource constraints, and lacks a policy for selecting projects for POEs and communicating findings from completed POEs. As a result, GSA may be missing opportunities to fully utilize POEs to gather tenants' feedback and inform the design and construction of future projects. What GAO Recommends GAO is recommending that GSA (1) report the extent to which projects were rebaselined and their final costs; (2) update GSA's commissioning guidance; and (3) identify and communicate when and how to conduct POEs and share lessons learned. GSA concurred with two recommendations and partially concurred with the other, which GAO believes should be fully implemented as discussed in the report.
Background VA administers one of the largest health care systems in the United States and is charged, through the VHA, with providing health care services to the nation's eligible veterans. VHA expects to provide care to more than 7 million veterans in fiscal year 2019 at health care facilities across the country through a system of 18 regional networks known as Veterans Integrated Service Networks. VHA has 172 medical centers that offer a variety of inpatient and outpatient services, ranging from routine examinations to complex surgical procedures. VHA's health care system also includes community-based outpatient clinics and other facilities that generally limit services to primary care and some specialty care. When veterans need services that are not available at VHA medical facilities or within required driving distances or time frames, VHA may purchase care from non-VHA providers through one of its community care programs. VA Organ Transplant Program VHA's National Surgery Office is charged with overseeing the VA Organ Transplant Program, including the 12 VA transplant centers (VATC) that have established specialty services to provide solid organ transplant surgery and post-operative care, in some cases in conjunction with an academic affiliate. VATCs offer transplants for one or more organ types, including heart, kidney, liver, and lung. (See fig. 1.) VHA considers transplant services provided through a VATC's academic affiliate as care provided within the VA Organ Transplant Program. VHA's National Surgery Office is responsible for clinical and operational oversight, as well as policies related to the VA Organ Transplant Program, including facilitating and monitoring the transplant referral process; overseeing quality of care; and monitoring outcomes of veterans receiving transplants. In 2013, VHA's National Surgery Office established TRACER to track and monitor the referrals, evaluations, and outcomes for organ transplants performed at the VATCs. Referring VHA medical centers use the database to enter a referral for a veteran to be evaluated at a VATC, and VATCs use it to record referral reviews, patient evaluations, transplant outcomes, and follow-up care. In addition, the database provides the National Surgery Office with information used to monitor transplant volumes, the referral and evaluation process, and clinical outcomes across all VATCs. The VA Organ Transplant Program's services include pre-transplant evaluation and testing, transplant surgery, and post-transplant follow-up care, as well as transplant-related round-trip travel and lodging for both the veteran and a caregiver. VHA covers the cost of lodging for the veteran and caregiver through a variety of arrangements, including contracts with local hotels and on-site VHA medical center housing, such as through the Fisher House Program. In addition, VHA may cover the cost of transplant services provided by non-VA providers; for example, when a veteran in urgent need of a heart transplant cannot travel to a VATC that provides that service. The VA MISSION Act includes provisions regarding VA's authority to cover organ transplant services by non-VA providers—referred to as community care.
Prior to the VA MISSION Act, VHA used its authority, as needed, to contract for transplant services with providers in the community when VHA care and services were not accessible in a timely fashion; however, the act provides additional authority to improve veterans' access to transplant care and services through community providers, and authorizes transplant procedures with living organ donors who are not eligible for VHA care. On June 5, 2019, VA issued final regulations for the act. Oversight and Process for Organ Allocation in the United States The Health Resources and Services Administration contracts with the United Network for Organ Sharing—a private, nonprofit organization—to manage the Organ Procurement and Transplantation Network (OPTN), which creates and maintains transplant policies and bylaws that are applicable to all transplant centers in the United States, including the VATCs and the academic affiliates performing transplants under contract with them. OPTN documents organ allocation policies, and collects and reports data on transplant recipients, donors, and outcomes. OPTN also conducts periodic audits of transplant program performance, including ensuring that transplant programs meet functional activity requirements (i.e., performing a minimum number of transplants in a prescribed period of time), and reviewing post-transplant patient survival rates. In addition, OPTN assesses whether transplant centers have established required quality assurance and performance improvement programs to help ensure the quality and safety of the transplant services provided. When transplant centers, including the VATCs, identify a candidate for organ transplantation, they register the patient in the OPTN's centralized, national computer network that matches organ donors with transplant candidates, referred to in this report as the “national organ donation waitlist.” Veterans do not receive preference for organ allocation. When an organ becomes available, the computer network generates a list of transplant candidates ranked by a standard set of criteria that generally include factors such as blood and tissue type, size of the organ, medical urgency of the candidate, time on the waitlist, and geographic distance between the organ donor and transplant candidate. An organ procurement specialist then contacts the transplant program of the top-ranked transplant candidate to determine if the available organ is suitable for the candidate. If the organ is suitable, arrangements are made to procure, transport, and store the donated organ, and for the transplant candidate to travel to the transplant center for surgery. If the organ is not suitable for a given candidate, the procurement specialist contacts the transplant program of the next transplant candidate on the list until the organ is found to be suitable for a transplant candidate. VHA Health Care Program Funds Each year, VA allocates most of its appropriations for health care services to VHA's 18 Veterans Integrated Service Networks through the Veterans Equitable Resource Allocation (VERA) system. VERA funds are allocated for general purposes, such as treatment for basic and complex patients, research and educational support, and equipment and maintenance costs; as well as for specific purposes, such as preventative and primary care initiatives and transplant care. The VERA model uses price groups—categories of veterans with similar resource needs based on the complexity of their medical conditions—to determine the funding level for each network.
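To make the ranked-matching process described in the organ allocation discussion above more concrete, the following is a greatly simplified, hypothetical sketch in Python. Actual OPTN allocation policies are organ-specific and far more detailed; the candidate fields, compatibility table, and sort order here are illustrative assumptions, not the real algorithm.

```python
from dataclasses import dataclass

# Simplified donor-to-recipient blood type compatibility (illustrative only).
COMPATIBLE = {
    "O":  {"O", "A", "B", "AB"},  # type O donors can give to any type
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

@dataclass
class Candidate:
    name: str
    blood_type: str
    urgency: int           # higher = more medically urgent (illustrative scale)
    days_on_waitlist: int
    distance_miles: float  # distance from the donor hospital

def rank_candidates(donor_blood_type: str, waitlist: list[Candidate]) -> list[Candidate]:
    """Filter to blood-type-compatible candidates, then rank by urgency
    (descending), time on the waitlist (descending), and distance (ascending)."""
    eligible = [c for c in waitlist if c.blood_type in COMPATIBLE[donor_blood_type]]
    return sorted(eligible, key=lambda c: (-c.urgency, -c.days_on_waitlist, c.distance_miles))

# A hypothetical match run: the top-ranked candidate's transplant program
# would be contacted first; if the organ is unsuitable, the next is contacted.
waitlist = [
    Candidate("patient 1", "A",  urgency=2, days_on_waitlist=400, distance_miles=120),
    Candidate("patient 2", "O",  urgency=3, days_on_waitlist=150, distance_miles=300),
    Candidate("patient 3", "AB", urgency=3, days_on_waitlist=600, distance_miles=80),
]
for candidate in rank_candidates("O", waitlist):
    print(candidate.name)  # patient 3, patient 2, patient 1
```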
In addition, VHA’s National Surgery Office historically allocated transplant specific purpose funds to the VATCs for solid organ transplants, because the costs of transplant services were not fully covered by general purpose funds. Beginning in fiscal year 2019, the VERA model was modified to establish a new price group specifically for transplant patients, allowing full funding with general purpose funds for these services. VHA officials explained that this change is expected to reduce the need for specific purpose funds to supplement transplant care. VHA officials told us academic affiliate contracts are funded through the medical services appropriations allocated to the VHA medical center where the VATC is located. VHA Provides Organ Transplants to Veterans through a Multi-Step Process To receive an organ transplant in the VA Organ Transplant Program, a veteran must go through a five-step process: (1) initial referral, (2) pre- operative evaluation, (3) listing on the OPTN national organ donation waitlist, (4) transplant surgery, and (5) follow-up care that continues for the remainder of the veteran’s life. See figure 2 for an overview of the five steps. Step 1: Initial screening and referral. A veteran seeking an organ transplant begins the process by having an initial screening at a referring VHA medical center. If VHA medical center providers determine that the veteran is a potential candidate for an organ transplant, they may prepare a formal referral to a VATC. To prepare a referral, the providers use an organ-specific checklist and other tools developed by VHA’s National Surgery Office with input from other experts in the field to perform a standard set of assessments of the veteran’s clinical, social, and mental health status. In addition, VHA officials told us that the initial screening includes an assessment of the veteran’s social and family support; for example, identifying a caregiver who can accompany and stay with the veteran throughout the transplant process. In addition, there are organ- specific criteria, such as negative tobacco smoking screens for veterans seeking a heart transplant, and up-to-date dialysis information for liver and kidney transplant candidates. VHA officials noted that the providers may consult with staff at a VATC as needed during the initial screening phase. Following the initial screening, if VHA medical center providers determine that the veteran is a potential candidate for a transplant, they enter the checklist information into the TRACER database, include the results of the required assessments outlined in the checklist, and attach any additional medical information, such as testing performed through care in the community. VHA officials told us that the VATC to which the veteran is referred is chosen based on factors including distance from the veteran’s home and the types of organ transplants offered at the VATC. Once the VHA medical center completes a referral in TRACER, the information becomes available to the selected VATC. Step 2: VATC referral review and veteran evaluation. When the VATC receives a veteran’s referral, VATC staff review it to determine whether the referral information is complete and the veteran meets the criteria to continue the process. If so, VATC staff evaluate the veteran and perform additional testing and clinical preparation needed to determine whether the veteran is a transplant candidate. 
To reduce the travel burden on veterans and their caregivers, providers at the veteran's referring VHA medical center may arrange for telehealth visits with the VATC for pre-transplant education and consultation. However, travel for in-person appointments at the VATC is required for most veterans referred for evaluation. According to VATC officials we spoke with, the VATC considers the severity of the veteran's illness and overall need for a transplant. In some cases, this assessment is conducted by a panel of experts composed of VATC providers and providers from the academic affiliate, where applicable. For example, some VATCs hold regular review meetings to discuss cases up for consideration jointly with providers from the VHA medical center and the academic affiliate, because individual cases may be co-managed depending on the type of organ being transplanted and the services provided at an individual VATC. Providers at some VATCs provide care at both the VATC and its academic affiliate, allowing for integrated clinical management of patients. If the VATC determines the veteran is not a candidate for transplantation, the referring VHA medical center can request a second opinion by another VATC. If the veteran is once again determined not to be a candidate, the referring VHA medical center can make a final appeal. Appeals are forwarded to the VA's Transplant Surgical Advisory Board, composed of subject matter experts, for consideration. According to VHA policy, the board considers the appeal and makes a recommendation to the National Director of Surgery (the head of VHA's National Surgery Office), who is responsible for facilitating second opinion requests, making the final determination, and notifying the referring VHA medical center regarding the final appeal determination. VHA reported that between fiscal years 2014 and 2018, 39 decisions were appealed to the Transplant Surgical Advisory Board, one of which was approved for resubmission to another VATC for consideration. Step 3: Listing on the national organ donation waitlist. If the VATC determines that the veteran is a candidate for an organ transplant, VATC staff add the veteran to the national organ donation waitlist. At this point in the process, veterans follow the same procedure as the general population seeking an organ transplant. To maximize the chances that the veteran will receive an organ, the VATC staff may also discuss options the veteran can pursue for personally identifying a potential living donor (if applicable for the organ needed). VATC officials noted that they may provide other clinical interventions to help prolong a veteran's life and preserve his or her health while awaiting an organ; for example, implanting a ventricular assist device into a veteran awaiting a heart transplant. Step 4: Transplant surgery. According to VATC officials, once a veteran is placed on the OPTN national organ donation waitlist, depending on the type of organ needed, the veteran and their caregiver may be required to travel to the VATC and remain in close proximity while awaiting organ availability. In some cases, such as for a liver transplant, a harvested organ can be kept viable for longer periods, allowing time for a veteran to travel from their home to the VATC once the organ becomes available. Depending on the arrangement between a particular VATC and its academic affiliate, the veteran could receive the transplant surgery and post-operative care at either the VATC or the affiliate.
For example, the VATC in Richmond performs heart transplants and contracts with its affiliate for liver transplants. The VATC in Nashville performs kidney transplants and contracts with its affiliate for heart and liver transplants. From fiscal years 2014 through 2018, 61 percent of the transplant surgeries provided within the VA Organ Transplant Program were performed by a VATC and 39 percent were performed by an academic affiliate. See table 1 for a list of VATCs, organ types transplanted, and contracts with academic affiliates. Step 5: Follow-up care. Following the transplant surgery and the immediate post-operative care provided by the VATC and its academic affiliate, the veteran receives ongoing follow-up care from both the VATC and the referring VHA medical center. VHA providers monitor veterans post-transplant for the remainder of their lives; for example, to oversee post-transplant immunosuppression, and track survival rates and outcomes for organ recipients. VA policy states that the VATC has primary responsibility for providing care while the veteran is at the VATC for the transplant and for providing specialized follow-up care after the veteran is discharged. In general, however, following discharge from the VATC, the veteran's referring VHA medical center maintains responsibility for the veteran's care coordination. VHA has policies and processes to allow for some aspects of transplant-related care, including follow-up care, to be done via telehealth—that is, visits with a VATC provider remotely from the veteran's referring VHA medical center. VHA medical centers may establish telehealth agreements with VATCs to ease the burden of travel for veterans and their caregivers, and to allow for ongoing monitoring of the veteran's health post-transplant. Because VHA monitors transplant recipients for the rest of their lives, using telehealth can decrease the need for the veteran to travel back to the VATC unless a specific clinical need arises, such as biopsies for heart transplant recipients. VHA officials noted that follow-up care is facilitated by VA's shared electronic health record, which allows VHA providers to share medical records and other patient information over time and across locations. Further, VHA providers noted that follow-up care and communication between VATCs and primary care teams can be more complicated in the private sector when transplant services are not generally part of a patient's whole system of care. The Number of VHA Organ Transplants and Related Allocations and Spending Generally Increased from Fiscal Years 2014 through 2018 The Number of Organ Transplants at VATCs Increased between Fiscal Year 2014 and Fiscal Year 2018 VATCs provided about 1,700 organ transplants between fiscal year 2014 and fiscal year 2018. The number of organ transplants provided each year generally increased, ranging from 300 transplants in fiscal year 2014 to a peak of 400 transplants in fiscal year 2017. During this 5-year period, kidneys and livers were the most frequently transplanted organs, representing 85 percent of all organs transplanted at VATCs. Heart and lung transplants were much less common and represented the remaining transplants. (See fig. 3.) For the programs that were active during all 5 years from fiscal year 2014 through fiscal year 2018, the number of solid organ transplants performed varied by VATC, ranging from 12 at the Birmingham VATC to 399 at the Pittsburgh VATC. (See table 2.)
The nearly 1,700 transplants performed through the VA Organ Transplant Program represent a relatively small portion—less than 20 percent—of the VHA referrals for organ transplant between fiscal years 2014 and 2018. While thousands of veterans are referred for solid organ transplants, far fewer veterans ultimately receive transplants. According to VA officials, VHA considers all submitted transplant referrals; however, many patients do not meet initial screening criteria to proceed with a formal evaluation. For example, a veteran's state of illness may not be severe enough to warrant a full transplant evaluation. Further, some veterans who are offered transplant evaluations decide not to proceed following education about the process. Officials noted that in many cases, the transplant evaluation reveals that a veteran does not meet the criteria for a transplant, such as not having a committed caregiver who can support the veteran through the evaluation and transplant procedure. For veterans who are listed on the national organ donation waitlist, VHA officials report that the number of transplants is limited by the supply of organs, which does not meet the demand in the U.S. general population, including veterans. For veterans who received an organ transplant from a VATC between fiscal year 2014 and fiscal year 2018, survival rates varied by organ type, with the 3-year survival rate ranging from about 95 percent for kidney transplants to 85 percent for lung transplants, according to National Surgery Office data. (See table 3.) For national-level general population survival rates, see appendix II. VHA Allocations and Spending for Organ Transplants Increased from Fiscal Year 2014 through Fiscal Year 2018 Consistent with the increase in the number of organ transplants provided between fiscal years 2014 and 2018, VHA's allocations and spending for transplant services also increased, similarly peaking in fiscal year 2017. VHA funds these services using a combination of general purpose and specific purpose funds. VHA's National Surgery Office allocates transplant specific purpose funding to the VATCs based upon the transplant-related workload the VATCs report through TRACER. See appendix III for additional information on transplant-related allocations and expenditures. VHA Allocation of Transplant Specific Purpose Funds. VHA allocated $292 million in transplant specific purpose funds during this 5-year period, ranging from $50.3 million in fiscal year 2014 to approximately $64.6 million in fiscal year 2018. (See table 5 in app. III.) Transplant specific purpose funds are used to support program overhead costs (infrastructure and maintenance) associated with organ transplants performed at VATCs. In addition, they are used for pre-transplant evaluations, lodging, and some miscellaneous costs associated with transplants, such as living donor evaluations. Further, transplant specific purpose funds are used to fund other VHA medical centers without a VATC that perform certain transplant follow-up care. VHA Expenditures for Transplant-Related Services VHA Expenditures of General Purpose and Specific Purpose Funds for Veterans Receiving a Solid Organ Transplant. VHA spent approximately $259 million for services provided to veterans who received a solid organ transplant at a VATC during this 5-year period, ranging from $44.6 million in fiscal year 2014 to a high of $57.7 million in fiscal year 2017.
Similarly, VHA’s spending for pre- and post-transplant care provided at VHA medical centers totaled approximately $68.6 million during this time, ranging from $10.8 million in fiscal year 2014 to $15.6 million in fiscal year 2017. (See tables 6 and 7 in app. III.) VHA Contracts with VATC Academic Affiliates. VHA spent over $216 million on contract payments to academic affiliates for transplant services during this period, ranging from $34.9 million in fiscal year 2014 to a high of $49.9 million in fiscal year 2017. This increased spending corresponded to an increase in the number of transplants performed by academic affiliates, which totaled 669 transplants—nearly 40 percent of all VATC transplants from fiscal year 2014 through fiscal year 2018. The highest volume—146 transplants—and the highest cost—$49.9 million—occurred in fiscal year 2017. (See table 8 in app. III.) VHA Contracts for Community Care. From fiscal year 2014 through fiscal year 2018, VHA spent $7.9 million for solid organ transplant services provided to 53 veterans through community care programs. (See table 9 in app. III.) According to VHA data, over this 5-year period, 50 of these transplants were authorized using title 38 U.S.C. § 1703 (“Non-VA Medical Care Program”). The remaining three were funded using the Veterans Access, Choice, and Accountability Act of 2014 (Choice Act)—totaling approximately $411,000 of the $7.9 million. VA has reported that while the Choice Act allows VHA to provide an eligible veteran transplant care at a transplant center in the community, generally at Medicare rates, organ procurement is only partially covered at Medicare rates. This has resulted in community providers being less willing to provide transplant services for VHA patients through community care programs. Timeliness of VHA’s Transplant Referrals and Evaluations Improved from Fiscal Years 2014 through 2018, but Inefficiencies in VA Processes Exist Timeliness of Referral Reviews Improved from Fiscal Years 2014 through 2018, but Opportunities for Increased Efficiencies Exist From fiscal years 2014 through 2018, VATCs received 10,494 referrals from VHA medical centers. In that time, the percentage of VATC referrals that met timeliness standards outlined by VHA’s National Surgery Office improved. Stable condition referrals: For veterans in stable condition, VHA requires that VATCs review referrals and decide within 5 business days whether veterans are potential candidates for an organ transplant and should receive a full evaluation. The percentage of referrals for which VATCs met the timeliness standard increased from 95 percent in fiscal year 2014 to 99 percent in fiscal year 2018. Emergency condition referrals: For veterans in emergency circumstances, VHA requires that VATCs review referrals and document within 48 hours whether veterans are potential transplant candidates and should receive a full evaluation. The percentage of referrals for which VATCs met the timeliness standard increased from 94 percent in fiscal year 2014 to 98 percent in fiscal year 2018. (For more information, see app. IV.) National Surgery Office officials identified two possible drivers of the observed improvements: (1) increased monitoring, and (2) providing real- time feedback to VATCs through TRACER. 
Providers at one VATC noted that they use information from the National Surgery Office's Transplant Quarterly Reports to identify areas for improvement, and they assigned a transplant team nurse responsibility for monitoring program quality, including whether timeliness requirements are being met. A provider at a VATC where timeliness has improved since fiscal year 2014 and is now at 100 percent explained that his facility has provided training to staff at referring VHA medical centers it works with frequently. For example, the official said he has hosted a workshop for transplant coordinators to provide training on submitting transplant referral packets through TRACER. While VATCs almost always met timeliness standards in fiscal year 2018, VATC officials in our review noted that transplant coordinators at referring VHA medical centers sometimes submit referral packets that are incomplete, requiring additional time and effort by the provider to search for information not readily available and potentially adding delays to the VATC review times. VHA requires that a complete referral packet be submitted through TRACER using a referral progress note that contains the required assessments outlined in the organ-specific checklist. The referral packet can also include attachments to transmit some required information, such as results for tests performed by community providers. Providers at three VATCs told us that reviewing a complete referral packet generally takes 30 minutes to an hour. However, in cases where the packet is incomplete (for example, it does not include the results from all the required assessments), the process is much less efficient and, according to two providers we interviewed, can take up to 5 hours. VATC providers explained that when not all test results are available in the referral packet, they have to access another VHA medical center's electronic medical record system and search for the required information. Accessing another medical center's system adds time to the referral review process and can take time away from the provider's other duties, such as providing follow-up care to transplant patients or monitoring transplant outcomes. Internal control standards state that management should assign responsibility to discrete units and demonstrate a commitment to develop competent individuals in those units, such as through training, to enable the organization to operate in an efficient manner and help achieve the organization's objectives. However, a lack of understanding of the information required in referral packets, or a failure to include it, can make the process for reviewing the packets inefficient in some cases. Specifically, one VATC provider told us that incomplete referral packets are often due to a lack of training for staff at the referring VHA medical centers on the process for submitting referrals through TRACER. In fact, four of the five transplant coordinators at referring VHA medical centers we interviewed reported a lack of training on submitting transplant referrals through TRACER. Instead, for example, a transplant coordinator at one referring VHA medical center said she received assistance from a medical clerk at her facility on how to submit referrals through TRACER.
Officials at VHA's National Surgery Office told us that although there is no centralized, in-person training available for referring VHA medical centers, the office published a training guide, which is available on VA's intranet and provides guidance on how to access TRACER and refer patients for transplant evaluation. Despite this resource, transplant coordinators from some referring VHA medical centers still cited a need for additional training or other guidance. For example, one official said training for new transplant coordinators would be helpful, as would regular updates on transplant criteria or policy changes, such as through regular calls or a newsletter targeted at transplant coordinators.

Timeliness of Potential Transplant Candidate Evaluations Improved from Fiscal Years 2014 through 2018, but Some Delays Remain

In addition to timeliness requirements for referral review, VHA requires that VATCs complete an evaluation of veterans within 30 calendar days of receiving a referral to determine whether they are a candidate for transplant and should be placed on the national organ donation waitlist. From fiscal years 2014 through 2018, VATCs increased the percentage of evaluations completed within this time frame, from 55 percent (576 of 1,045 appointments) in fiscal year 2014 to 89 percent (1,193 of 1,346 appointments) in fiscal year 2017, before dropping to 87 percent (1,325 of 1,521 appointments) in fiscal year 2018. National Surgery Office officials attributed the overall improvement to increased monitoring and the increased availability of telehealth for conducting transplant evaluations. See appendix IV for more information on the timeliness of transplant evaluations by VATCs from fiscal years 2014 through 2018.

The extent to which delayed evaluations occurred varied by VATC location and by organ type each fiscal year. For example, in fiscal year 2018, we found that the average time from referral to completed evaluation was less than 30 days for 19 of the 20 organ transplant programs, and overall, 13 percent of evaluations were not completed within 30 days. Of note, 51 of 128, or 40 percent, of kidney transplant evaluations at the Bronx VATC were completed more than 30 days after the referral was submitted, with evaluations ranging from 5 to 84 days after submission. In contrast, all 69, or 100 percent, of liver evaluations at the Nashville VATC were completed within 30 days, with evaluations ranging from 0 to 28 days after the referral was submitted. (See fig. 4.)

According to VHA data and three VATC providers we interviewed, evaluation appointment availability is not a cause for delays in most cases; rather, delays are primarily due to veteran preference. According to VHA data for 1,617 evaluation appointments completed in fiscal year 2018, 1,412 appointments were scheduled within the 30-day requirement. Of the remaining 205 appointments, 13 were delayed due to lack of appointment availability, and 192 were delayed due to veteran preference. According to providers at the three VATCs we interviewed, while veterans may prefer to be seen at a later date for a number of reasons, including that their caregiver is not available to travel, veterans are not always aware that they should be evaluated within 30 days of being referred for a transplant. VHA requires the referring VHA medical center to discuss the 30-day evaluation requirement with the veteran prior to submitting the referral.
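The fiscal year 2018 figures above can be rechecked with a few lines of arithmetic. This minimal sketch uses only the totals reported by VHA; the dictionary is a hypothetical stand-in for appointment-level TRACER data.

```python
# Recompute the fiscal year 2018 evaluation timeliness figures cited above.
appointments = {
    "total": 1617,                 # evaluation appointments completed
    "within_30_days": 1412,        # scheduled within the 30-day requirement
    "delayed_availability": 13,    # delayed by lack of appointment slots
    "delayed_preference": 192,     # delayed by veteran preference
}

timely_rate = appointments["within_30_days"] / appointments["total"]
print(f"Scheduled within 30 days: {timely_rate:.0%}")  # ~87%

delayed = appointments["total"] - appointments["within_30_days"]
assert delayed == (appointments["delayed_availability"]
                   + appointments["delayed_preference"])  # 205 total delays
preference_share = appointments["delayed_preference"] / delayed
print(f"Delays due to veteran preference: {preference_share:.0%}")  # ~94%
```

The arithmetic underscores VHA's point: nearly all of the delay is attributable to veteran preference rather than appointment availability.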
According to VHA, in some cases evaluation timeliness is a critical factor affecting patient outcomes. Although a veteran may choose to be seen at a time beyond the 30-day standard, postponing an evaluation may delay their placement on the national organ donation waitlist, potentially having a negative impact on their health and well-being.

Officials at five VATCs and two referring VHA medical centers reported that additional training for transplant coordinators would be helpful for improving evaluation timeliness. Additional training enables employees to develop competencies and reinforce requirements, which is consistent with internal control standards that state that management should develop competent individuals to achieve the entity's objectives. According to one VATC provider, transplant coordinators at referring VHA medical centers should be trained to discuss travel with the veteran before submitting the referral, so the transplant coordinator and the veteran understand that the evaluation should be completed within 30 days of referral, increasing the likelihood that veterans will be able to schedule timely evaluations. A referring VHA medical center transplant coordinator also said that additional training about the VATC's processes would be helpful in order to be better able to inform veterans and their caregivers about what to expect from the transplant process.

Conclusions

Placing veterans on the national organ donation waitlist as soon as possible is critical for potential transplant candidates to be matched with a donor organ. Since fiscal year 2014, VHA has improved timeliness for reviewing transplant referrals to determine if a veteran is a candidate and for completing transplant evaluations. However, VHA medical center staff do not always submit complete transplant referral packets through TRACER, which can create inefficiencies and delay the referral review process. Similarly, inefficiencies in the transplant evaluation process occur when VATC and VHA medical center staff do not fully inform veterans of their role in the transplant evaluation process—specifically, that their evaluation should be completed within 30 days of referral. Without additional training to address these inefficiencies, a veteran's placement on the national organ donation waitlist could be delayed.

Recommendation for Executive Action

The Under Secretary for Health should establish a requirement that VHA's National Surgery Office provide additional training to staff at referring VHA medical centers on (a) submitting referral packets through TRACER that are complete, and (b) understanding and communicating the veteran's role in the evaluation process related to the timely completion of transplant evaluations. (Recommendation 1)

Agency Comments

We provided a draft of this product to VA for comment. In its comments, reproduced in appendix V, the department concurred in principle with our recommendation, reiterated the resources it currently makes available to staff at referring VHA medical centers, and described actions it plans to take to address the recommendation. Specifically, VHA's National Surgery Office plans to distribute a memorandum to all VHA facilities to reinforce the available training and resources to support the staff at referring VHA medical centers with submitting complete referrals, and to ensure adequate communication of the veteran's role in timely completion of transplant evaluations. VA also provided technical comments, which we incorporated as appropriate.
In addition, we provided a draft of this report to the Department of Health and Human Services for review, and the department did not have any comments.

We are sending copies of this report to the appropriate congressional committees, the Secretary of VA, the Secretary of the Department of Health and Human Services, and other interested parties. In addition, the report is available at no charge on GAO's website at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or silass@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix VI.

Appendix I: Department of Veterans Affairs Transplant Centers Providing Solid Organ Transplants

Appendix II: National Transplant Survival Rates by Organ Type for Fiscal Years 2014 through 2018

We analyzed national transplant survival rates by organ type using data from the Scientific Registry of Transplant Recipients. All transplant facilities across the United States, including Department of Veterans Affairs transplant centers, provide data to this database. Table 4 shows the national transplant survival rates by organ for fiscal years 2014 through 2018.

Appendix III: VA Resource Allocation and Veterans Health Administration Spending for Solid Organ Transplant Services

Annually, the Department of Veterans Affairs (VA) allocates most of its appropriations for health care services to the Veterans Integrated Service Networks within the Veterans Health Administration (VHA) through a model called the Veterans Equitable Resource Allocation (VERA). The VERA model is designed to fund patient care based on a methodology that develops set, or "capitated," rates for different groups or categories of veterans with similar resource needs based on the complexity of their medical conditions. Categories include oncology, visual impairment, chronic mental illness, and critical illness. VERA uses a national formula that considers the number of veterans, the complexity of care provided, and certain geographic factors, such as local labor costs, in determining how much each Veterans Integrated Service Network should receive. VERA determines this amount based on each network's activities and needs in the following areas: patient care, equipment, nonrecurring maintenance, education support, and research support. The networks, in turn, allocate resources to their respective VHA medical centers, including those with VA transplant centers (VATC). The networks distribute VERA funds to VHA medical centers based on the complexity of patients treated at the medical center in previous fiscal years.

This appendix provides VA's reported allocations and expenditures for solid organ transplant services through its VATCs and contracts with academic affiliates and community providers from fiscal year 2014 through fiscal year 2018. Table 5 shows VHA allocation of transplant specific purpose funds by VATC for transplant-related services. Table 6 shows VHA expenditures at each VATC for veterans who received solid organ transplants. Table 7 shows VHA expenditures for pre- and post-transplant services provided by VHA medical centers without a VATC for veterans who received transplants.
Table 8 shows the total number of transplants and contract payments to academic affiliates for solid organ transplant services. Table 9 shows the total number of, and spending for, solid organ transplants provided by community care providers.

Appendix IV: Timeliness of Veterans Health Administration Transplant Referrals and Evaluations

The Veterans Health Administration (VHA) has timeliness requirements for reviewing transplant referrals to determine whether veterans are potential candidates for organ transplant and should receive a full evaluation, and for completing timely evaluations for potential candidates.

Referral Reviews

Timeliness of referral reviews improved from fiscal year 2014 through fiscal year 2018. VHA requires that for veterans in stable condition, Department of Veterans Affairs transplant centers (VATC) review referrals and decide within 5 business days whether veterans are potential candidates for an organ transplant and should receive a full evaluation. For emergency cases, VATCs should perform this review and document the results within 48 hours. Table 10 shows the number of referrals reviewed and the percentage of timely referrals for each VATC and organ transplant program from fiscal year 2014 through fiscal year 2018.

Completed Evaluations

VHA requires that VATCs complete an evaluation of stable veterans within 30 calendar days of receiving a referral to determine whether they are a candidate for transplant and should be placed on the national organ donation waitlist. The percentage of evaluations completed within the required time frame increased from 55 percent in fiscal year 2014 to 87 percent in fiscal year 2018, although some variation can be seen by organ type and location within each fiscal year. Table 11 shows the number of completed evaluations and the percentage of timely evaluations for each VATC and organ transplant program from fiscal year 2014 through fiscal year 2018.

Appendix V: Comments from the Department of Veterans Affairs

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Marcia A. Mann (Assistant Director), Jill K. Center (Analyst-in-Charge), Colin Ashwood, Emily Binek, Emily Bippus, Shana Deitch, Keith Haddock, and Ebony Russ made key contributions to this report. Also contributing were Krister Friday, Jacquelyn Hamilton, Giselle Hicks, Drew Long, Vikki Porter, and Ethiene Salgado-Rodriguez.
Why GAO Did This Study

As of June 2019, over 113,000 people in the United States—including veterans—were waiting for an organ transplant. In 2018, more than 36,000 organ transplants were performed at transplant centers across the country, including those within the Department of Veterans Affairs' (VA) Organ Transplant Program.

GAO was asked to review how VA administers and oversees its organ transplant program. This report, among other things, examines the process and timeliness with which VA reviews referrals and completes evaluations for organ transplants.

GAO reviewed data from VHA's Transplant Referral and Cost Evaluation Reimbursement database, documents related to VA's transplant program, and federal internal control standards. GAO conducted site visits to three of the 12 VATCs, selected to obtain diversity in geography and types of organs transplanted. At the selected VATCs, GAO reviewed facility data and documents related to organ transplants and interviewed officials. GAO also collected and reviewed information from the remaining nine VATCs.

What GAO Found

The 12 Veterans Affairs' transplant centers (VATC), which are overseen by the Veterans Health Administration (VHA), almost always met the referral timeliness standard from fiscal years 2014 through 2018. When a veteran is determined to be a potential candidate for an organ transplant, he or she can receive a formal referral to a VATC. Depending on the type of referral, the VATC must meet specific timeliness standards for reviewing the referral and deciding if the veteran should receive a full evaluation. Likewise, VATCs have timeliness standards for conducting the full evaluation, and generally showed improvement in meeting that standard from fiscal years 2014 through 2018. For those delays in conducting full evaluations that did occur, GAO found they varied by organ type and VATC. Specifically, in fiscal year 2018, transplant evaluation timeliness ranged from 60 percent at two VATC kidney programs to 100 percent at kidney, liver, heart, or lung programs across seven different VATCs.

According to VATC officials, transplant evaluation delays are caused when patients or caregivers are not available or not aware that they are required to be evaluated within 30 days of being referred. Although veterans may prefer to be seen at a later date, untimely evaluations can delay veterans' placement on the national organ donation waitlist. According to VHA data, 192 of the 1,617 transplant evaluation appointments completed in fiscal year 2018 did not meet the 30-day requirement. VATC officials said this was because veterans were not available or not aware of the requirement.

GAO found that staff at referring VHA medical centers lacked a full understanding of the transplant referral and evaluation process. For example, VATC providers told GAO that transplant referrals are sometimes incomplete, requiring providers to spend extra time searching for information that should have been readily available. GAO found that additional training for medical center staff would help to improve the efficiency of the transplant referral process and the timeliness of transplant evaluations provided to veterans, a critical factor affecting veteran outcomes.

What GAO Recommends

VHA should provide additional training for staff at VHA medical centers that refer patients for organ transplants on (1) submitting complete referrals and (2) understanding and communicating the veteran's role related to timely completion of transplant evaluations.
VA concurred in principle with the recommendation and described actions the department will take to address it.
Background

NNSA's Missions and Organization

NNSA's nuclear stockpile missions are largely executed at eight sites that are managed by seven M&O contractors and that comprise the nuclear security enterprise. These eight sites include: three national security laboratories—Lawrence Livermore National Laboratory in California, Los Alamos National Laboratory in New Mexico, and Sandia National Laboratories in New Mexico and other locations; four nuclear weapons production plants—the Pantex Plant in Texas, the Y-12 National Security Complex in Tennessee, the Kansas City National Security Campus in Missouri, and tritium operations at DOE's Savannah River Site in South Carolina; and the Nevada National Security Site, formerly known as the Nevada Test Site. As shown in figure 1, each of NNSA's eight sites has specific responsibilities within the nuclear security enterprise.

NNSA's sites are owned by the federal government but managed and operated by M&O contractors. According to DOE, the use of M&O contracts is supported by an underlying principle: the federal government employs highly capable companies and educational institutions to manage and operate government-owned or government-controlled scientific, engineering, and production facilities because these companies and educational institutions have greater flexibility in bringing scientific and technical skills to bear than the government. As we previously found, an M&O contract is characterized by, among other things, a close relationship between the government and the contractor for conducting work of a long-term and continuing nature.

To support its missions, NNSA is organized into program offices that oversee the agency's numerous programs, such as the B61-12 Life Extension Program—overseen by the Office of Defense Programs—and the Nuclear Smuggling Detection and Deterrence Program—overseen by the Office of Defense Nuclear Nonproliferation. Mission-related activities are primarily overseen by these program offices, which are responsible for integrating the activities across the multiple sites performing work. NNSA's program offices are: Counterterrorism and Counterproliferation; Defense Nuclear Nonproliferation; Defense Nuclear Security; Defense Programs; Naval Reactors; and Safety, Infrastructure, and Operations.

NNSA receives four different appropriations, which it is responsible for allocating to programs that are managed by the program offices. The program offices obligate these funds to the M&O contracts to execute specific program functions. Obligated funds that are not "costed," or expended, by the contractor at the end of the fiscal year can carry over for expenditure in a subsequent fiscal year, or the program offices can deobligate the funds and obligate them to a different contract for work in that same program area. In order for funds to be reallocated to a different program, NNSA may need to reprogram funds; such reprogramming may be subject to congressional notice and approval requirements.

NNSA headquarters offices generally are to provide leadership, develop policy and budgets, or provide other functional support across NNSA. NNSA headquarters offices include the offices of: Acquisition and Project Management, Cost Estimating and Program Evaluation, Information Management and Chief Information Officer, Management and Budget, and Policy.

NNSA has seven field offices across the country. Field office managers report directly to the NNSA Administrator.
NNSA field offices, such as NPO, are collocated at the laboratory, plant, and testing sites and are responsible for overseeing NNSA's M&O contractors, including ensuring compliance with federal contracts. To provide oversight of the M&O contractors, each field office employs subject matter experts in areas such as emergency management, physical security, cybersecurity, safety, nuclear facility operations, environmental protection and stewardship, radioactive waste management, quality assurance, business and contract administration, public affairs, and project management. NNSA's field offices are: Kansas City Field Office in Missouri, Livermore Field Office in California, Los Alamos Field Office in New Mexico, Nevada Field Office, NPO in Tennessee and Texas, Sandia Field Office in New Mexico, and Savannah River Field Office in South Carolina.

Before awarding the consolidated contract at Y-12 and Pantex, NNSA took steps to consolidate its field offices that oversee the contractor at these two sites. Specifically, NNSA combined the former Y-12 Site Office and former Pantex Site Office into the NPO Field Office in 2012. One NPO manager oversees both the Y-12 and Pantex sites, and each site has a deputy manager. The deputy managers oversee their respective sites as well as certain programs at both sites. The NPO Cost Savings Program Manager provides overall administration of the Cost Savings Program. As of fiscal year 2018, NPO had about 130 federal full-time equivalent employees at both sites, according to an NPO official. According to CNS officials, the contractor employs over 9,000 employees at Y-12 and Pantex. According to an NPO official, NPO acts as a single office because the two sites are closely integrated.

Consolidated Contract History and Requirements

In December 2011, NNSA issued a request for proposals for a consolidated M&O contract for the Y-12 and Pantex sites. NNSA awarded the M&O contract to CNS in January 2013. However, the award was the subject of three protests to GAO under our bid protest authority. NNSA ultimately reaffirmed its award of the contract to CNS, and CNS began contract performance in July 2014. The consolidated contract covers a total of 10 years, including the base period and all option terms. The contract requires CNS to meet certain performance requirements, and NNSA is to evaluate CNS's accomplishment of these performance requirements before exercising each option term.

During the first 2 full fiscal years of the contract, CNS focused on merger and consolidation activities—that is, merging the two sites under one contractor—and on achieving savings from those activities, according to CNS's Merger Transformation Plan. Merger savings are associated with efficiencies and reductions in the workforce resulting from the consolidation of the contract. During the third and fourth fiscal years of the contract, CNS focused on transformation savings—or savings based on changing underlying processes to increase standardization and improve quality and efficiency within and across the organization. From the third full fiscal year of the contract onward, CNS focused on continuous improvement, which constitutes incremental efficiency within established processes. The original contract required CNS to achieve at least 80 percent of its proposed savings and score 80 percent or higher on its performance evaluations in order to have additional option terms exercised.
In September 2017, however, NNSA and CNS modified the contract so that delivery of cost savings is only taken into consideration in conjunction with CNS's performance, as documented in NNSA's annual Performance Evaluation Reports, when deciding whether to extend CNS additional option terms, also known as gateway decision points. NNSA officials told us they made this modification prior to the first gateway decision in September 2017 because CNS was very close to achieving 80 percent of its proposed cost savings, but it was unclear if CNS would achieve 80 percent. In addition, the initial contract requirements placed equal emphasis on cost savings and the contractor's performance in meeting the mission, but NNSA officials said they do not view those two goals as equal: cost savings in and of themselves are only helpful—and only creditable under the contract—if they do not negatively affect the mission. Following the contract modification in September 2017, NNSA exercised the first 2-year option term, ensuring the contractor will manage and operate Y-12 and Pantex through fiscal year 2021. The gateway decision for the second 2-year option term will occur by the end of June 2020, according to NNSA officials (see fig. 2).

Cost Savings Program's Structure

Implementation and oversight of the Cost Savings Program involves contractor representatives and NNSA officials at several levels. CNS manages the Cost Savings Program using a matrixed organization that includes several executives, such as the vice presidents of the Business Management and Transformation and Program Integration departments, according to CNS officials. Throughout each fiscal year, these officials lead various efforts associated with developing and implementing cost reduction initiatives as well as other key aspects of the Cost Savings Program. One CNS Cost Savings Director is responsible for overseeing much of the company's cost savings efforts, including coordinating between different program offices.

Within NNSA, NPO conducts much of the oversight of the Cost Savings Program, while NNSA's Office of Management and Budget and Office of Acquisition and Project Management also have some oversight functions. Within NPO, the Cost Savings Program Manager coordinates among different NPO program offices that help review and conduct oversight of the cost reduction initiatives throughout the year, as well as with NNSA headquarters offices. NNSA's Office of Management and Budget provides NNSA with administrative, human resources, and financial support. NNSA's Office of Acquisition and Project Management is responsible for acquisition support and contracting oversight for the agency throughout the acquisition lifecycle.

NNSA established an Executive Steering Committee, composed of high-ranking officials from different NNSA program areas, as well as the NNSA Associate Principal Deputy Administrator, the NPO Manager, and the NPO Cost Savings Program Manager (as a non-voting member), to provide leadership and guidance for the governance of the cost savings element of the CNS contract. The steering committee members are to set cost savings policy; resolve disputes; and recommend and approve the cost savings amounts to be shared between the government, the contractor (through a cost-savings incentive fee), and site reinvestment projects.
The Cost Savings Program is divided into six processes or phases that CNS and NNSA implement and oversee (see fig. 3): the Annual Controlled Baseline phase, the Cost Reduction Proposal phase, the Change Control phase, the Performance phase, the Verification phase, and the Disposition phase.

Annual Controlled Baseline phase. CNS develops and maintains the Annual Controlled Baseline, which is a document that describes the current scope of work and its cost and schedule. Among other things, NNSA uses the Annual Controlled Baseline to evaluate whether CNS achieved savings from implementation of prior years' cost reduction initiatives. CNS is expected to submit the Annual Controlled Baseline no later than August 15 prior to the upcoming fiscal year, and NNSA reviews and approves the document.

Cost Reduction Proposal phase. CNS develops cost reduction initiatives and updates the Cost Reduction Proposal, which describes CNS's proposed cost reduction initiatives for the upcoming fiscal year and the expected cost savings to be validated from activities within the current fiscal year. The Cost Reduction Proposal is to be updated annually, no later than September 1 prior to the upcoming fiscal year. Each cost reduction initiative has a defined lifecycle, from identification and development to validation and sustainment. NNSA reviews and approves the document; approval authorizes CNS to begin implementing the initiatives.

Change Control phase. The change control phase is continuous throughout the fiscal year and allows CNS and NNSA to document and trace changes to the scope, schedule, and cost that affect the Annual Controlled Baseline and the Cost Reduction Proposal. Changes made during this phase to the Annual Controlled Baseline and the Cost Reduction Proposal are generally limited to changes outside of the control of the contractor, including congressional direction or reprogramming, changes to the programmatic mission, additional contractual requirements, and any NNSA-directed or approved changes.

Performance phase. During the performance phase, which is also continuous throughout the year, the contractor is to report interim performance against the approved cost reduction initiatives for NNSA to evaluate accordingly, according to NNSA Cost Savings Program procedures. This interim reporting allows NNSA to monitor potential effects on the mission and offer feedback and course correction as needed. NNSA and CNS officials responsible for the Cost Savings Program collaborate regularly via biweekly meetings and tri-annual reviews to monitor CNS's progress on cost reduction initiatives throughout the fiscal year. CNS generates a year-end Validation Report, which is the final of three tri-annual reports provided throughout the fiscal year. These reports detail the performance of the M&O contractor and progress made against proposed cost savings targets, and list the amount of savings CNS is claiming to have achieved in that fiscal year, including both annual new savings and savings sustained from prior years. CNS is to submit the Validation Report for each previous fiscal year no later than November 15.

Verification phase. After the end of the fiscal year, between November and January, NNSA uses verification checklists to review and verify CNS's claimed savings for each cost reduction initiative.
NNSA can use these verification checklists to record, among other evidence, any observations, interviews, document reviews, analyses, and measurements that NNSA has undertaken to confirm the savings claimed by CNS in the Validation Report. For each cost reduction initiative, NNSA is to verify, among other things, that CNS implemented the initiative, that the initiative resulted in efficiencies that produced cost savings, and that the initiative did not negatively affect the mission. NNSA is also to verify that CNS set aside the claimed savings. Additionally, NNSA is to verify that CNS sustained savings claimed in prior years. NNSA documents its determination of verified annual new and sustained savings in a Verification Report.

Disposition phase. Upon completion of the verification phase, in January and February, the distribution, or disposition, of net savings occurs in accordance with the contract. Net savings are verified savings after accounting for execution costs. The contract allows those verified net savings to be shared among the government, the contractor, and site reinvestment projects to improve Y-12 and Pantex.

Under the contract provisions, NNSA is to verify and distribute only those savings that remain after deducting the execution costs required to administer, develop, or implement the cost reduction initiatives. For example, the cost of purchasing a machine to automate a process that will, in turn, save labor hours from the previous non-automated process would be an execution cost. Therefore, NNSA-verified savings for each cost reduction initiative should reflect net savings from having implemented the initiative—that is, the gross savings minus the execution costs associated with the initiative. Verified net savings are to be distributed to the contractor, to the government, and to site reinvestment projects.

Contractor. The contractor is generally to receive a cost-savings incentive fee of about 35 percent of the verified net savings. For new savings related to employee benefits, however, the contractor is not to receive a share, and the savings are to be split between the government (50 percent) and site reinvestment projects (50 percent). The contractor's cost-savings incentive fee is to be paid out of cost savings that NNSA has verified. The contract requires CNS to reimburse the government for the cost-savings incentive fee in the event that CNS does not sustain the savings for the remainder of the contract performance period. According to CNS's proposed savings estimates, CNS planned to earn approximately $222 million in cost-savings incentive fees over the potential 10-year contract. Per the contract, the contractor may also receive award fees annually based on NNSA's evaluation of its performance. The available award fee for each potential year of the contract ranges from approximately $20 million to approximately $40 million.

Government. The government generally is to receive 35 percent of the verified net savings. For new savings related to employee benefits, however, the government is to receive 50 percent of the verified net savings. The portion of verified savings that is available for the government allows NNSA to return those savings to the programs for which funds were originally obligated, and the funds can be spent within the same program at Y-12, Pantex, or another site within the nuclear security enterprise.

Site reinvestment. The remaining approximately 30 percent of the verified net savings is for site reinvestment projects.
As noted above, however, the site reinvestment share for savings related to employee benefits is 50 percent. Site reinvestment projects may include projects (such as a parking structure, an office building, or a cafeteria) that serve the M&O site as a whole rather than a discrete program, or implementation costs for future cost savings initiatives, among other things.

Types of potential savings associated with the Cost Savings Program include, for example:

Annual new savings. In each fiscal year, CNS validates and NNSA verifies annual new savings for the cost reduction initiatives implemented in that year. Examples of annual new savings include positions that were reduced in a certain program area in a given fiscal year. As discussed previously, cost savings are only creditable under the contract if they do not negatively affect the mission.

Sustained savings. In each fiscal year, CNS validates and NNSA verifies sustained savings resulting from cost reduction initiatives implemented in prior years. For example, CNS can claim sustained savings for each year it does not hire back employees into positions that were reduced in a prior year and for which CNS claimed savings.

Cumulative contract savings. Cumulative contract savings is the sum of all contract savings that have accumulated from annual new savings and the sustainment of savings produced in prior years. For example, annual new savings verified in fiscal year 2015 would be multiplied by 10 if they are sustained through the life of the potential 10-year contract. Likewise, annual new savings verified in fiscal year 2016 would be multiplied by 9 if they are sustained through the life of the potential 10-year contract, and so forth. These cumulative contract savings are also known as "gateway savings" because NNSA considers the verified cumulative contract savings when making gateway decisions on whether or not to extend the contract for possible option terms. Table 1 shows how CNS proposed it could achieve approximately $2.9 billion over the life of the 10-year contract using this method of calculating cumulative contract savings.

Hard savings—savings that directly reduce the overall cost of operations—are the only creditable type of savings under the contract. NNSA is only to verify savings if they do not negatively affect the mission. Examples of hard savings include a reduced number of personnel working to conduct the same scope of work or fewer labor hours required to complete a process due to operational efficiencies achieved, as well as savings in benefits packages (e.g., by requiring employees to contribute more to their benefits). NNSA and CNS classify hard savings into four categories: (1) labor, (2) benefits, (3) supply chain, and (4) non-labor (see sidebar).

Supply chain savings are generated by leveraging collective buying power agreements, utilizing competitive sourcing tools, and taking other actions to reduce the price of goods purchased. For example, in fiscal year 2016, CNS noted in its Validation Report that it used strategic sourcing to realize procurement savings.

Non-labor savings—also known as demand management savings—are savings generated through reductions in purchased materials quantities, subcontract costs, or licenses. For example, in fiscal year 2016, CNS assumed responsibility for some information technology work—including, among others, help desk support and network administration—that had been previously handled by subcontractors. Doing so reduced contract costs because CNS was able to perform the work at a lower cost than the subcontractor.
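A short sketch can make the savings arithmetic described above concrete: net savings are gross savings minus execution costs; net savings are shared roughly 35/35/30 among the contractor, the government, and site reinvestment (50/50 between the government and site reinvestment for benefits savings, with no contractor share); and annual new savings count once for each year they are sustained through the potential 10-year contract. The initiative values below are illustrative, and the sketch ignores the negotiated sharing periods discussed later in this report.

```python
# Illustrative sketch of the contract's savings arithmetic as described
# above. Initiative values are made up; only the sharing rules and the
# cumulative ("gateway") multiplication follow the report's description.

def net_savings(gross, execution_costs):
    # NNSA verifies only the savings remaining after execution costs.
    return gross - execution_costs

def disposition(net, benefits_related=False):
    # Benefits savings: no contractor share; 50/50 government/site.
    # Other savings: roughly 35% contractor, 35% government, 30% site.
    if benefits_related:
        return {"contractor": 0.0, "government": 0.5 * net, "site": 0.5 * net}
    return {"contractor": 0.35 * net, "government": 0.35 * net, "site": 0.30 * net}

def cumulative_contract_savings(annual_new_by_year, final_year=2024):
    # Annual new savings count once for each year they are sustained
    # through the end of the potential 10-year contract.
    return sum(amount * (final_year - year + 1)
               for year, amount in annual_new_by_year.items())

# Hypothetical annual new savings (in $ millions), fully sustained:
annual_new = {2015: 20, 2016: 30}
print(cumulative_contract_savings(annual_new))   # 20*10 + 30*9 = 470

print(disposition(net_savings(10.0, 2.0)))       # 8.0 split 2.8 / 2.8 / 2.4
```

The multiplication in `cumulative_contract_savings` is why early-year initiatives dominate the gateway total: a dollar of new savings in fiscal year 2015 is worth ten dollars of cumulative savings if sustained, while the same dollar first achieved in fiscal year 2023 is worth only two.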
Cost avoidances, by contrast, reflect actions that reduce potential future spending on products or services such as, for example, slowing the rate of a cost increase. NNSA officials said another example of a cost avoidance would be if the contractor has the option to buy more expensive airplane tickets for travel between the two sites but chooses to buy less expensive airplane tickets; the difference between the most expensive option and the actual tickets purchased is a cost avoidance and is not considered hard savings that would be creditable under the contract.

CNS Has Achieved Most of Its Proposed Savings, and Changes to Oversight and Methodologies Have Addressed Some Problems That May Affect Actual Savings

NNSA verified approximately $170 million in annual new savings and approximately $515 million in cumulative contract savings from fiscal year 2014 through fiscal year 2018. The $515 million in cumulative contract savings that NNSA verified from fiscal year 2014 through fiscal year 2018 is about 80 percent of the approximately $640 million CNS proposed it would save through that fiscal year. NNSA's oversight of the Cost Savings Program has improved, and methods for calculating and verifying cost savings have evolved to address some problems encountered in the early years of the contract that may affect actual contract savings.

NNSA Has Verified Hundreds of Millions of Dollars of CNS's Claimed Savings

NNSA verified between approximately $8 million and $63 million in annual new savings each year from fiscal year 2014 through fiscal year 2018, totaling approximately $170 million in annual new savings over this period. Of the $170 million in NNSA-verified annual new savings for fiscal years 2014 through 2018, roughly 10 percent (approximately $17 million) is attributed to the merging of the Y-12 and Pantex sites into a consolidated management structure, according to CNS and NNSA documentation. The remaining roughly 90 percent (approximately $153 million) is attributed to transforming site operations to create a more efficient and sustainable enterprise.

Under the contract, savings from the previous year that have been sustained, and for which sustainment has been verified by NNSA, are added to the current year's verified annual new savings amount, resulting in cumulative contract savings. As of the end of fiscal year 2018, NNSA verified approximately $515 million in cumulative contract savings (see table 2).

We found that this $515 million in cumulative contract savings represents a reasonable estimate of the cumulative savings achieved. As part of our review, we traced information from 22 of about 90 cost reduction initiatives for which CNS claimed savings to source documents and reconciled discrepancies with NNSA and CNS officials to understand how NNSA verified the cost savings. Further, we reviewed NNSA's documented procedures for verifying CNS's reported data and interviewed officials about that process. Additionally, other reviews provide support that NNSA's reported $515 million in cumulative contract savings is a reasonable estimate of savings achieved. Specifically, as part of the savings verification process, NNSA's federal cost accountants ensured that CNS had set aside the money associated with the cost savings and confirmed that the funds were available for distribution under the cost-savings sharing arrangement. The Defense Contract Audit Agency (DCAA) also reviewed CNS's claimed cost savings for fiscal years 2016 through 2018, and NNSA and DCAA officials said the two entities used similar methods and came to similar conclusions.
Labor savings, which include reductions in positions, comprised the largest portion of savings, at nearly two-thirds of the cumulative contract savings achieved from fiscal year 2014 through fiscal year 2018. Savings through changes to employee benefits comprised nearly a quarter of total cumulative contract savings over the period (see fig. 4).

NNSA documents we examined showed that CNS, the government, and site reinvestment projects received a certain share of the $515 million in cumulative contract savings that NNSA verified from fiscal year 2014 through fiscal year 2018 in accordance with the terms of the contract. According to NNSA, approximately $262 million of the $515 million was available for the three parties to share during this period. The amount available to the three parties is determined by sharing periods of no more than 2 years negotiated for different categories of savings under the contract. According to NNSA documents, CNS earned about $78 million in cost-savings incentive fees, the government received about $97 million in savings, and site reinvestment projects received about $88 million of the available savings from fiscal year 2014 through fiscal year 2018 (see fig. 5). According to NNSA, the remaining approximately $253 million in cumulative savings was not available for sharing between the three parties because it accumulated outside of the savings sharing period.

CNS Has Achieved about 80 Percent of Its Proposed Savings from Fiscal Year 2014 through Fiscal Year 2018

The $515 million in cumulative contract savings that NNSA verified from fiscal year 2014 through fiscal year 2018 is about 80 percent of the approximately $640 million in cumulative contract savings CNS proposed it would save through that fiscal year. CNS achieved more in cumulative contract savings than it proposed through fiscal year 2015. Specifically, CNS proposed approximately $67 million in cumulative contract savings through fiscal year 2015, and NNSA verified approximately $78 million. From fiscal years 2016 through 2018, however, CNS achieved less in cumulative contract savings than it proposed (see fig. 6). As described above, achieving approximately $2.9 billion in savings over the life of the contract assumed meeting all proposed annual new savings targets and fully sustaining those savings in each year of the contract. According to the terms of the contract, NNSA considers achievement of cost savings when evaluating overall contract performance, and therefore, achievement of proposed cost savings may factor into NNSA's decision of whether to exercise further contract option terms.

Two key issues—benefits savings and fiscal year 2016 labor savings—contributed to CNS not meeting its proposed cost savings targets through the end of fiscal year 2018 and may affect CNS's ability to achieve its proposed cumulative contract savings of approximately $2.9 billion over the life of the contract.

Benefits savings. CNS proposed it could save $594 million over the life of the contract through adjustments to employee benefits, but as of March 2020, CNS officials told us that CNS's projected benefits savings would total $399 million over the entire 10-year contract, a decrease of almost $200 million from its proposal. According to these officials, several factors have contributed to CNS's decreased benefits savings estimate, including delays in bargaining unit transition to benefit plans and rates and a decrease in employee contributions to pensions, among other reasons.

Fiscal year 2016 labor savings.
In fiscal year 2016, CNS claimed approximately $30 million in new labor savings based on a claimed reduction of 283 full-time equivalent employees, but NNSA rejected all of those savings. According to the fiscal year 2016 NNSA Verification Report, full-time equivalent growth in other areas offset CNS's claimed reductions, so the claimed efficiencies were not realized. Rejection of these fiscal year 2016 labor savings could result in a loss of approximately $270 million in cumulative savings through the end of the potential 10-year contract period when factoring in potential sustained savings.

NNSA officials emphasized that any amount of cost savings is beneficial to the government and that NNSA's priority for CNS is safe and secure performance of its mission. NNSA officials noted that if CNS does not implement any additional cost reduction initiatives and sustains the savings from all previously-implemented cost reduction initiatives, CNS will still save about $1.7 billion through fiscal year 2024. CNS officials told us that CNS will continue to work toward its cumulative proposed savings of approximately $2.9 billion and hopes to meet or exceed that estimate. According to these officials, doing so will allow CNS to realize its proposed savings and provide the maximum benefit to the government and taxpayers.

To achieve its proposed savings, CNS would need to sustain all previously implemented savings, achieve verified annual new savings of approximately $57 million each year, and sustain those additional savings through 2024. However, CNS's proposed annual new savings are substantially lower for fiscal year 2019 through the end of the contract (averaging about $30 million per year) than they were from fiscal year 2014 through fiscal year 2018. This decrease is, in part, because many cost reduction initiatives with high savings potential—such as labor streamlining and changes to employee benefits—have been implemented. For example, CNS eliminated 270 positions and provided voluntary separation severance packages to another 182 employees in fiscal year 2014. This accounted for more than 40 percent ($221 million) of the cumulative contract savings because CNS sustained those savings in fiscal years 2015 through 2018. Because these high-potential initiatives have already been implemented, it may be difficult for CNS to meet its proposed cumulative contract savings.

Methodologies for Calculating Cost Savings and NNSA's Oversight of the Program Have Evolved to Address Factors That May Affect Actual Contract Savings

CNS and NNSA initially encountered problems with calculating and verifying cost savings—problems that may affect actual contract savings—but methods for calculating and verifying savings have evolved, and NNSA's oversight of the Cost Savings Program has improved. These problems—which have largely been addressed—involved (1) calculating and verifying execution costs; (2) calculating and verifying labor savings; and (3) communicating and collaborating about the Cost Savings Program throughout the year.

Calculating and verifying execution costs. NNSA encountered early problems with verifying execution costs for CNS's cost savings initiatives, but CNS changed its methodology for calculating execution costs in successive years, ultimately addressing those problems. Since the contract's inception, CNS has relied on a subcontractor to operate much of the Cost Savings Program.
In fiscal year 2014, costs for this subcontractor totaled approximately $7 million. CNS believed that approximately $546,000 of the $7 million should be considered execution costs and counted against the cost savings for that year, but NNSA believed the entire $7 million should be considered execution costs. NNSA and CNS reached agreement that a proportional factor—19.3 percent—of the subcontractor's time was spent on activities that would qualify as execution activities under the contract for fiscal years 2014 and 2015. NNSA instructed CNS to capture and report the subcontractor's actual execution costs beginning in fiscal year 2016.

CNS began using the subcontract's actual execution costs in fiscal year 2016, according to NNSA officials. However, NNSA officials said CNS used a proportional factor of the subcontract's execution costs from previous years to estimate the execution costs of CNS employees for fiscal year 2016. NNSA noted in its fiscal year 2016 Verification Report that using the proportional factor approach for estimating execution costs may not reflect the actual execution costs. CNS officials said they believe this estimation was conservative because it resulted in higher CNS administrative and development costs than in subsequent years. Additionally, in fiscal years 2015 and 2016, CNS reported estimates for its total execution costs rather than tracking the actual execution costs for each individual cost reduction initiative, which NNSA officials said made it difficult to verify net savings.

In fiscal year 2017, CNS developed a methodology for allocating execution costs—administrative costs, implementation costs, and development costs—to individual cost reduction initiatives and began reporting execution costs at this level in the fiscal year 2017 Validation Report. According to NNSA officials, CNS also began reporting execution costs by individual cost reduction initiative for its subcontractor beginning in fiscal year 2017. In fiscal year 2018, CNS developed execution cost charge codes that allowed CNS to report actual hours spent on cost reduction initiative execution activities—including amounts for its subcontractor—for the first time since the contract began. NNSA officials told us that they are generally satisfied with the way CNS is now capturing execution costs and that the use of charge codes has improved their confidence in CNS's reporting of certain execution costs.

However, CNS's use of the proportional factor of 19.3 percent of the subcontractor's execution costs, lack of detail on execution costs for individual cost reduction initiatives, and use of estimated—rather than actual—execution costs could mean that the actual execution costs for fiscal years 2014 through 2017 are not fully captured in reported cumulative savings, and actual contract savings could be higher or lower than the reported amount. Even if the actual contract savings are higher or lower than the reported amount, we believe $515 million is a reasonable estimate of the savings achieved to date.

Calculating and verifying labor savings. In fiscal years 2014 and 2015, CNS used a headcount methodology to calculate labor savings and demonstrate sustainment of those savings. Using a headcount methodology, CNS could claim labor savings if it could demonstrate and maintain a reduced number of employees to conduct the same scope of work.
According to NNSA and CNS officials, one potential problem with using a headcount approach is that CNS could maintain a reduced number of staff but have those staff work overtime. If this occurred, it would result in overall increased contract costs, thereby reducing the net savings from the cost reduction initiative. In fiscal year 2016, CNS modified its methodology for calculating labor savings to use labor hours rather than employee headcounts. Under this modified approach, CNS could claim labor savings if it could demonstrate and maintain reduced labor hours regardless of the number of employees, a method that NNSA and CNS officials said is a better measure of labor savings. However, under this methodology, CNS calculated labor savings based on planned, rather than actual, reductions in labor hours. In fiscal year 2017, CNS modified its methodology again to begin using actual reduced labor hours rather than planned reduced labor hours. However, CNS's use of headcounts and planned, rather than actual, reductions in labor hours could mean that the labor savings for fiscal years 2014 through 2016 are not accurately reflected in the verified cumulative contract savings, and actual contract savings could be higher or lower than the reported amount. As noted above, even if the actual contract savings are higher or lower than the reported amount, we believe $515 million is a reasonable estimate of the savings achieved to date.

Communicating and collaborating about the Cost Savings Program. According to NNSA officials, the early years of the contract were marked by limited oversight and poor communication between NNSA and CNS. CNS delegated responsibility for the Cost Savings Program to a subcontractor, and according to NNSA and CNS officials, CNS had limited involvement in the Cost Savings Program and did not communicate with NNSA about cost savings matters. Similarly, NNSA officials told us that one or two individuals at NNSA managed the cost savings component of the contract for the federal government and that communication was poor between those individuals and the technical personnel responsible for evaluating the implementation of CNS's cost reduction initiatives. As a result of this limited oversight and communication, NNSA officials said CNS did not understand NNSA's expectations for cost savings data and had to submit five iterations of its first Validation Report.

In fiscal year 2017, NNSA established a collaborative working team—known as the Integrated Project Team and consisting of personnel from NNSA and CNS—which meets biweekly to discuss issues related to the Cost Savings Program. Also in fiscal year 2017, NNSA began conducting tri-annual reviews of active cost reduction initiatives. For these reviews, CNS submits performance reports and briefs knowledgeable NNSA officials on the status of individual cost reduction initiatives. NNSA uses this information to identify potential gaps in cost-savings reporting data and, among other things, informs CNS of any concerns with its methodology or NNSA's ability to verify the cost savings. NNSA officials stated that the increased collaboration and more frequent communication have resulted in improved Validation Reports and fewer revisions. For example, NNSA stated in its fiscal year 2017 Verification Report that the quality and completeness of CNS's fiscal year 2017 Validation Report "demonstrated substantial improvement" over the fiscal year 2016 report.
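A minimal sketch, with hypothetical numbers, of why the labor-savings methodology discussed above matters: a headcount measure credits every eliminated position even if remaining staff absorb the work through overtime, while a labor-hours measure credits only the hours actually saved.

```python
# Contrast the headcount and labor-hours methodologies described above.
# The rate, staffing levels, and overtime hours are all hypothetical.

RATE = 100.0  # assumed fully burdened cost per labor hour

def headcount_savings(baseline_staff, current_staff, hours_per_fte=2080):
    # Headcount method: credit savings for each eliminated position,
    # regardless of how many hours the remaining staff actually work.
    return (baseline_staff - current_staff) * hours_per_fte * RATE

def labor_hour_savings(baseline_hours, actual_hours):
    # Labor-hours method: credit only the actual reduction in hours worked.
    return (baseline_hours - actual_hours) * RATE

# 10 positions eliminated from a 100-person baseline...
print(headcount_savings(100, 90))                          # 2,080,000.0
# ...but remaining staff worked 12,000 overtime hours to cover the work.
print(labor_hour_savings(100 * 2080, 90 * 2080 + 12000))   # 880,000.0
```

In this example the headcount method would credit more than twice the savings the labor-hours method does, which is consistent with officials' view that labor hours are the better measure; using actual rather than planned hours (the fiscal year 2017 change) closes the remaining gap between claimed and realized savings.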
While CNS's and NNSA's methods for calculating and verifying savings and conducting oversight evolved in the early years of the contract to improve the accuracy of cost savings calculations, we believe the $515 million in reported cumulative savings represents a reasonable estimate of the contract savings achieved to date for reasons we described earlier.

NNSA Identified Benefits of the Cost Savings Program but Has Not Fully Used Them to Improve M&O Contracts

NNSA Identified Three Key Benefits of the Y-12 and Pantex Cost Savings Program but Has Not Planned on How Best to Use Site Reinvestment Funds

NNSA officials said three key benefits of the Cost Savings Program are (1) achieving savings, (2) increasing financial transparency, and (3) funding site reinvestment projects.

Achieving savings. As discussed previously, the Cost Savings Program resulted in total annual new savings of approximately $170 million and $515 million in cumulative contract savings from fiscal year 2014 through fiscal year 2018. According to NNSA officials, these cost savings would not have materialized without the Cost Savings Program. We have previously found that DOE could better assess M&O contractors' cost performance—i.e., their performance on spending, budgeting, strategic sourcing, and cost-effectiveness—to help strengthen contractor oversight and better inform acquisitions decisions. CNS's efforts to achieve cost savings and NNSA's associated efforts to evaluate the contractor's cost-effectiveness provide evidence that, for the CNS contract, NNSA is placing importance on cost performance even as overall resource needs increase. For example, NNSA has identified an increasing weapons program workload and a need to recapitalize or replace aging facilities and equipment to meet nuclear weapons modernization programs over the next decades. To help achieve these goals, NNSA's fiscal year 2021 budget request included a 25 percent increase for NNSA's weapons activities appropriation, which funds programs at NNSA sites including Pantex and Y-12. Identifying cost savings could help NNSA minimize budget increases in an era of increasing workload and assure congressional decision-makers that NNSA is working to effectively steward federal resources.

Increasing financial transparency. Because of the Cost Savings Program, which required the establishment of the Annual Controlled Baseline in order to measure potential savings, NNSA has better and more thorough information on the costs of running the two sites, NPO officials said. The Annual Controlled Baseline provides more information because, in order to demonstrate savings, CNS first had to establish a cost baseline, which required complete information on funding streams as well as on how certain rate structures are established, according to NPO officials. Officials from NNSA's Office of Acquisition and Project Management also said that, as a result of the Annual Controlled Baseline being established, this was the first time NNSA has been able to gain insight into the actual costs of certain activities at Y-12 and Pantex. None of the other M&O sites have an established site-wide baseline against which to measure costs or cost savings, according to NNSA and M&O officials we interviewed. Officials from the Office of Acquisition and Project Management said having an Annual Controlled Baseline at other sites would give them additional insight into the cost of certain activities, as opposed to the traditional budget-based view they have into M&O activities.
At other M&O sites, NNSA uses a budget-based model: the government obligates a certain amount of money and gets as much product or service for that amount as the sites will provide, NNSA officials said. At Y-12 and Pantex, by contrast, NNSA employs a cost-based model, which involves determining the cost to produce a certain amount of product, NNSA officials explained.

Funding site reinvestment projects. As part of the Cost Savings Program, a certain percentage of the achieved savings is reinvested back into the sites. According to NNSA officials, this process has allowed NNSA to allocate funds to site reinvestment projects to improve the Y-12 and Pantex sites' aging infrastructure. As of fiscal year 2019, NNSA reported a deferred maintenance backlog of about $4.8 billion throughout the nuclear security enterprise. We previously found that facilities considered not mission dependent—such as cafeterias, parking structures, and excess facilities—made up about 40 percent of the deferred maintenance backlog. NNSA officials said addressing deferred maintenance at these types of facilities is a low priority, beyond keeping facilities in a safe condition, because the agency targets scarce budgetary resources to mission-critical facilities. According to NNSA officials, NNSA would likely not have allocated funds for these site reinvestment projects at Y-12 and Pantex without the Cost Savings Program because they are often considered lower-priority projects. As a result, the nuclear security enterprise as a whole potentially benefits from these site reinvestment projects at Y-12 and Pantex, since they reduce overall deferred maintenance and potentially free up funds for projects that address aging infrastructure at other sites. Site reinvestment projects may lead to additional cost savings as well, NNSA officials said, if, for example, NNSA uses site reinvestment funds to purchase a machine that automates a process and saves labor hours as a result. In one instance, NNSA invested in a machine that replaced three different machines previously required to produce a screw; according to NNSA documentation, this improved throughput and turnaround time and saved labor hours.

NNSA had approved a total of 80 site reinvestment projects at Y-12 and Pantex as of April 2020, drawing on approximately $75 million that was available for reinvestment into the sites. For example, CNS used about $1.2 million in site reinvestment funds to replace analog cameras along Y-12's perimeter fencing with digital cameras (see fig. 7). According to NPO documentation, this project improved physical security, reduced camera maintenance costs, and enhanced the security team's ability to assess alarms and manage alarm response. Because the analog cameras were still functioning, they might otherwise have been a lower priority to replace without the site reinvestment funding, NPO officials said. In addition, the John C. Drummond Center, a new administrative support complex at Pantex, was partially built with savings from the Cost Savings Program (see fig. 8). According to NNSA documentation, the new facility helps eliminate approximately $20 million in deferred maintenance costs associated with the older administrative buildings it replaced.
Although NNSA identified site reinvestment projects as one of the key benefits of the Cost Savings Program, NNSA and CNS had not committed approximately $13 million of the site reinvestment funds available at Y-12 and Pantex as of April 2020. They had not yet committed these funds to specific project efforts in part because they have not evaluated how best to use the remaining available site reinvestment funds or developed a plan for doing so. The $13 million is currently distributed across several different layers of accounts, in some cases in amounts too small to execute a site reinvestment project. To aggregate the funds in amounts large enough for certain projects, NNSA may need to move funding from one account to another. The funds for site reinvestment projects are distributed in accordance with the terms of the contract and are spread across different programs, projects, or activities (PPA). Beneath the PPA is the DOE budget and reporting code level, which DOE also tracks in its official accounting system (see fig. 9). According to NNSA officials, as of April 2020 there were 68 PPAs, with 97 budget and reporting codes underneath them, that had funds available for site reinvestment. According to NNSA officials and CNS representatives, this distribution makes it difficult to use all of the site reinvestment funds. The difficulty arises because a given site reinvestment project may require funds to be aggregated across budget and reporting codes in order to have enough funds to execute the project, and while NNSA can move funds between budget and reporting codes that are within the same PPA, moving funds among PPAs (reprogramming) could require congressional approval (a simplified illustration of this constraint follows this discussion). As of April 2020, of the 68 PPAs with available funds for site reinvestment, 17 (about 25 percent) had multiple budget and reporting codes underneath them, according to NNSA officials. Those 17 PPAs had between 2 and 6 budget and reporting codes underneath them, according to those officials (see fig. 10).

We have previously found that comprehensive plans can help organizations identify potential problems before they occur and target limited resources. A comprehensive plan can also detail milestones and key goals, which provide meaningful guidance for planning and measuring progress. Such plans can establish deadlines for achieving objectives and assign responsibility for implementation. Most of NNSA's appropriations are "no-year funds" and are, therefore, available for obligation until expended. Without evaluating and developing a plan for how best to use funds for site reinvestment projects, including determining whether to reprogram funds, NNSA and CNS are not fully utilizing available site reinvestment funds, and the funds could be rescinded from NNSA's appropriations in later years if the unspent balances persist. NNSA has not sought congressional approval to combine site reinvestment money across different PPAs in order to aggregate these funds to execute larger site reinvestment projects, officials said. Also, while NNSA moves funds weekly between budget and reporting codes that are within the same PPA to execute its work, officials said NNSA has not moved any site reinvestment funds between budget and reporting codes within the same PPA to fund site reinvestment projects.
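The aggregation constraint can be made concrete with a small sketch. The account names and dollar amounts below are hypothetical, invented purely for illustration; they do not come from NNSA's or DOE's accounting systems.

```python
# Hypothetical illustration of the fund-aggregation constraint described above:
# funds may be pooled across budget and reporting (B&R) codes within the same
# PPA, but pooling across PPAs would require reprogramming (and possibly
# congressional approval).

# Hypothetical balances: {PPA: {B&R code: available dollars}}
balances = {
    "PPA-A": {"BR-1": 40_000, "BR-2": 60_000, "BR-3": 25_000},
    "PPA-B": {"BR-4": 90_000},
    "PPA-C": {"BR-5": 15_000, "BR-6": 10_000},
}

project_cost = 100_000  # assumed cost of a single reinvestment project

for ppa, codes in balances.items():
    poolable = sum(codes.values())  # total that can be aggregated within one PPA
    verdict = "can fund" if poolable >= project_cost else "cannot fund"
    print(f"{ppa}: ${poolable:,} poolable -> {verdict} a ${project_cost:,} project")
```

In this hypothetical, the three PPAs together hold $240,000, but only PPA-A can pool enough within itself to fund the $100,000 project; reaching the other balances would require reprogramming funds between PPAs.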
Once NNSA develops a plan for how best to aggregate or use the remaining and potential future site reinvestment funds, it would be better positioned to move some funds between budget and reporting codes within the same PPA and to reprogram funds between PPAs, including seeking congressional approval where it may be required.

NNSA Is Not Fully Using Information on the Benefits of the Cost Savings Program to Improve M&O Contracts

NNSA Has Not Analyzed Whether to Implement the Cost Savings Program in Other Existing or Future M&O Contracts

NNSA officials identified the achievement of cost savings as a benefit of the Cost Savings Program that could be useful at other sites and to the nuclear security enterprise generally; however, the officials said they are not planning to implement the Cost Savings Program as part of other future or existing M&O contracts. Most existing NNSA M&O contracts include a "Cost Reduction" clause, under which sites could implement a cost savings program with some attributes of the program at Y-12 and Pantex. According to GAO's Framework for Assessing the Acquisition Function at Federal Agencies, leading organizations gather and analyze data to identify opportunities to reduce costs, among other reasons. Further, the framework states that incomplete data can prevent an agency from maximizing information tools for strategic acquisition planning and analysis.

According to officials from the Office of Acquisition and Project Management, they do not plan to implement the Cost Savings Program, or anything similar to it, as part of future M&O contracts because of uncertainties regarding (1) the opportunities for similar savings at other sites and (2) the federal costs involved in implementing and overseeing the Cost Savings Program—including the time and effort needed to verify cost savings—and how these costs affect the overall net savings. NNSA site officials and contractor representatives we interviewed also raised questions about these issues. For example, according to NNSA officials and representatives at two sites, the Cost Savings Program may not be exportable to other sites, in part because other sites may not be able to identify cost savings initiatives that would yield the same level of savings as at Y-12 and Pantex. The officials believed that much of the savings identified at those sites resulted from merger savings—savings stemming from consolidating the two sites—that would not be possible without combining two sites under one contract. However, as mentioned previously, our analysis found that the majority—about 90 percent—of annual savings at Y-12 and Pantex resulted from transformation initiatives, that is, savings based on improving standardization, quality, and efficiency. Merger savings contributed only about 10 percent of the total new annual savings identified from fiscal year 2014 through fiscal year 2018.

NNSA officials and contractor representatives at other NNSA sites also raised questions about whether the cost of implementing and maintaining a formal cost savings program might outweigh the benefits at a site. According to NNSA officials, a large number of government employees are involved in implementing and overseeing the Cost Savings Program. According to an official from the Office of Acquisition and Project Management, NNSA has not analyzed the total costs of implementing the Cost Savings Program, including the costs associated with the government effort to oversee the program.
For the Cost Savings Program, NNSA verifies net savings after accounting for CNS's execution costs. However, the verified savings do not take into consideration federal costs for implementing, maintaining, and overseeing the Cost Savings Program. To provide a sense of the scope of the oversight effort, NPO officials said that about 100 of the approximately 130 employees at NPO at the end of fiscal year 2018 had some role in the Cost Savings Program, although only one full-time position is dedicated to it. Further, NNSA is likely to start acquisition planning for some M&O contracts in 2022 and 2023. However, NNSA officials, as well as site officials, were uncertain whether the Cost Savings Program, including whether it would be cost effective, could be exported to other existing or future contracts, because NNSA has not gathered information on, or documented analysis of, the costs and potential benefits of the Cost Savings Program. By gathering information on and documenting the analysis of data on the costs and benefits of the Cost Savings Program, NNSA officials and contractor representatives could make better-informed decisions about whether to implement aspects of the Cost Savings Program at other sites.

NNSA Has Not Evaluated or Shared Information on Specific Benefits of the Cost Savings Program That Could Be Applied Elsewhere

CNS achieved cost savings at Y-12 and Pantex by implementing a variety of cost savings initiatives. Even without a formal cost savings program in place, some of these efficiencies may be applicable at other sites as a way to save money across the enterprise, according to the NPO officials we interviewed. For example, at Pantex, the contractor discovered it could conduct fewer recurring injections of treatment wells and still achieve the same technical results and comply with standards, according to NNSA officials. This initiative saved over $500,000, according to NNSA's Verification Report. If other sites experience similar recurring costs, then sharing this initiative might lead to cost savings at those sites. According to DOE Order 210.2A on the DOE Corporate Operating Experience Program, each DOE organization is required to submit lessons learned to the DOE Corporate Lessons Learned Database when the operating experience has relevance to other DOE sites and the information has the potential for cost savings. Although NPO did not enter information about lessons learned from the Cost Savings Program into the database, NPO officials said they shared lessons learned with the Executive Steering Committee and presumed the Committee had passed the information along to other sites. Contractor representatives and NNSA officials from all five of the other NNSA sites we interviewed noted that NNSA has not shared any information about specific successful cost savings initiatives from Y-12 and Pantex that could be applicable to them. Almost half of the NNSA officials and contractor representatives from other sites we interviewed said they were not very familiar with the Cost Savings Program. However, officials at Y-12 and Pantex told us they believe certain initiatives could be useful at other sites and that other sites have asked for information about certain initiatives.
Officials from the Office of Acquisition and Project Management said they believe NNSA headquarters will request a lessons learned evaluation once the current Y-12 and Pantex contract expires; however, such an effort would not begin for several years, as late as 2024 if all option terms are exercised, even if NNSA began the evaluation immediately. According to NNSA officials, the Cost Savings Program was a new concept that needed time to mature and prove its concepts before lessons learned could be shared. However, by sharing information on potentially beneficial efficiencies and lessons learned from the Cost Savings Program at Y-12 and Pantex throughout the enterprise, NNSA could help achieve cost savings enterprise-wide even without implementing formal cost savings programs at other sites.

NNSA Has Not Evaluated Whether an Annual Controlled Baseline May Be Beneficial at Other Sites

The Annual Controlled Baseline is another specific aspect of the Cost Savings Program that could be beneficial to implement at other sites, or for programs at a site, NNSA officials said. Currently, none of the other NNSA sites have an established site-wide baseline that would allow NNSA to understand the costs involved in running those sites or implementing their programs, according to officials from NNSA's Office of Acquisition and Project Management. According to NPO officials, the Annual Controlled Baseline provides NNSA with better and more thorough information on the costs of running the two sites. As discussed previously, employing a cost-based model at Y-12 and Pantex, as opposed to the budget-based model at other sites, allows NNSA to understand the contractor's cost to produce a certain amount of product. Although officials from NNSA's Office of Acquisition and Project Management said it would be beneficial to have an Annual Controlled Baseline at other sites in order to gain additional insight into the cost of certain activities, they believed a drawback to requiring other sites to institute such a baseline would be the considerable effort and resources needed to establish it, similar to what was required at Y-12 and Pantex. NNSA has not evaluated whether to require the other sites to have an Annual Controlled Baseline, either for an entire site or for certain programs at different sites. The 2019 DOE Acquisition Guide states that, in the context of acquisition planning, good technical, schedule, and cost baselines are essential for developing realistic and measurable targets. By evaluating whether to require all sites to implement an Annual Controlled Baseline, either for an entire site or for certain programs at the different sites, NNSA may be in a better position to achieve greater financial transparency at sites across the nuclear security enterprise. This, in turn, could help identify opportunities for cost savings, help NNSA better understand its contractors' cost performance, and help the agency administer its sites more efficiently.

Conclusions

In recent years, the Cost Savings Program at Y-12 and Pantex has realized hundreds of millions of dollars in savings to the nuclear security enterprise, dozens of site reinvestment projects, and increased financial transparency.
Although NNSA has identified site reinvestment projects as one of the key benefits of the Cost Savings Program, NNSA and CNS have not committed approximately $13 million of the site reinvestment funds available at Y-12 and Pantex, in part because they have not evaluated and developed a plan for how best to aggregate and use the funds. If NNSA develops such a plan for the remaining and potential future available site reinvestment funds, it would be better positioned to aggregate funds for site reinvestment projects. Further, if funds for site reinvestment projects persist in PPAs for too long, NNSA risks their rescission in future years' appropriations.

NNSA officials were uncertain whether the Cost Savings Program could be exported to other existing or future contracts, including whether it would be cost effective, because NNSA has not gathered information on and documented its analysis of the costs and potential benefits of the Cost Savings Program. By gathering information on and documenting its analysis of the results of the Cost Savings Program, NNSA officials and contractor representatives could make a better-informed decision about whether to implement aspects of the Cost Savings Program under existing contracts or as part of future M&O contracts.

NNSA has not shared information on specific efficiencies that could be applicable to other sites because NNSA officials have not submitted such lessons learned to DOE's Corporate Lessons Learned Database. By sharing information on potentially beneficial efficiencies and lessons learned from the Cost Savings Program at Y-12 and Pantex throughout the enterprise, NNSA could help achieve cost savings enterprise-wide even without implementing formal cost savings programs at other sites. Additionally, none of the other NNSA sites have an established site-wide baseline, and NNSA has not evaluated whether it should require the other sites to have one. By evaluating whether to require other sites to institute a baseline, either for an entire site or for certain programs at the different sites, NNSA could increase financial transparency agency-wide.

Recommendations for Executive Action

We are making the following four recommendations to NNSA: The NPO Cost Savings Program Manager should work with CNS to evaluate the remaining site reinvestment funds and develop and implement a plan for how best to aggregate and use them. (Recommendation 1) The Associate Administrator for Acquisition and Project Management should gather data on and document an analysis of the Cost Savings Program, including its cost effectiveness, to determine whether it is exportable to existing or future contracts. (Recommendation 2) The NPO Cost Savings Program Manager should share relevant lessons learned with other NNSA sites so that those sites can determine whether the efficiencies CNS has achieved can be implemented at their sites. (Recommendation 3) The Associate Administrator for Acquisition and Project Management should evaluate whether to require all other sites to institute an Annual Controlled Baseline. (Recommendation 4)

Agency Comments

We provided a draft of this report to NNSA for review and comment. The agency provided written comments, which are reproduced in appendix I; the agency also provided technical comments that we incorporated into the report as appropriate. NNSA agreed with three of the recommendations and agreed in principle with the fourth.
Regarding our second recommendation that NNSA gather data on and document an analysis of the Cost Savings Program, including its cost effectiveness, to determine its exportability to existing or future contracts, NNSA agreed that the potential benefits of a cost savings program should be considered for future contracts, as applicable. However, in its written comments, NNSA stated that the Cost Savings Program was uniquely intertwined with the consolidation of the two sites, Y-12 and Pantex, under one contract. As we discussed in the report, roughly 90 percent of the savings from the Cost Savings Program were attributed to transforming site operations to create a more efficient and sustainable enterprise and were not associated with merging the two sites. We continue to believe that by gathering data and documenting an analysis of the Cost Savings Program's exportability, NNSA will be able to make better-informed decisions about whether to implement the program at other existing or future contracts.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix II.

Appendix I: Comments from the National Nuclear Security Administration

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact: Allison B. Bawden, (202) 512-3841, or bawdena@gao.gov.

Staff Acknowledgments: In addition to the individual named above, key contributors to this report included Hilary Benedict (Assistant Director), Jessica Lewis (Analyst in Charge), Antoinette Capaccio, Cindy Gilbert, Dan Royer, Holly Sasso, Sheryl Stein, Breanna Trexler, and Monique Williams.
Why GAO Did This Study

NNSA relies on M&O contracts to manage and operate its eight laboratory and production sites. In 2013, NNSA awarded a consolidated M&O contract to CNS for the Y-12 and Pantex sites to reduce costs. In the contract, NNSA required that CNS create a Cost Savings Program. CNS proposed that it would save about $2.9 billion over the contract's potential 10-year term. The Senate committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2019 includes a provision for GAO to review the cost savings achieved from the competition and award of the CNS contract. GAO's report examines the extent to which (1) CNS achieved proposed cost savings from fiscal year 2014 through fiscal year 2018 and (2) NNSA identified benefits associated with the Cost Savings Program and used that information to improve its M&O contracts. GAO reviewed documentation and data on the Cost Savings Program from NNSA and CNS, interviewed NNSA headquarters and field office officials as well as representatives from M&O contractors, and toured the Y-12 site to understand examples of cost savings initiatives.

What GAO Found

The National Nuclear Security Administration (NNSA) verified about $515 million in cumulative cost savings claimed by Consolidated Nuclear Security, LLC (CNS) from fiscal year 2014 through fiscal year 2018 (see figure). CNS was awarded the management and operating (M&O) contract for both the Y-12 National Security Complex (Y-12) in Tennessee and the Pantex Plant (Pantex) in Texas. Those savings represented about 80 percent of the approximately $640 million CNS proposed it would save through the end of fiscal year 2018. CNS achieved most of the savings through labor savings—for example, by reducing positions. While CNS's and NNSA's methods for calculating and verifying savings evolved in the early years of the contract, GAO concluded that the $515 million in reported cumulative savings represents a reasonable estimate. However, given the differences between proposed and achieved savings through fiscal year 2018, and annual savings projections that are lower for the remaining years of the contract, it may be difficult for the contractor to achieve its total proposed $2.9 billion in savings over the potential 10-year contract that would end in 2024. NNSA officials identified three key benefits of the Cost Savings Program—achieving savings, reinvesting in site infrastructure, and increasing financial transparency—but NNSA has not determined whether the program could be implemented at other sites to improve its M&O contracts. For example, NNSA officials said achieving cost savings at other sites could be useful, and most M&O contracts include a clause under which sites could implement a cost savings program with some attributes of the program at Y-12 and Pantex. However, NNSA is not planning to implement the Cost Savings Program, or a variation of it, at other sites. NNSA officials and contractor representatives were uncertain whether the Cost Savings Program could be exported to other existing or future contracts because NNSA has not gathered information on or documented its analysis of the Cost Savings Program. GAO has previously found that leading organizations gather and analyze data to identify opportunities to reduce costs, among other reasons.
By performing such an analysis, NNSA officials and contractor representatives could make better-informed decisions about whether to implement aspects of the Cost Savings Program under existing contracts or as part of future M&O contracts to achieve additional savings in the future.

What GAO Recommends

GAO is making four recommendations, including that NNSA document its analysis of the Cost Savings Program to determine whether it is exportable to other contracts. NNSA generally agreed with the four recommendations.
Background

Family Separations at the Southwest Border

According to DHS and HHS officials, DHS has historically separated some children from accompanying adults at the border and transferred them to HHS custody, but these separations occurred only in certain circumstances. For example, DHS might separate families if the parental relationship could not be confirmed, if there was reason to believe the adult was participating in human trafficking or otherwise posed a threat to the safety of the child, or if the child crossed the border with other family members, such as grandparents, without proof of legal guardianship. HHS has traditionally treated these children as unaccompanied alien children (UAC)—children who (1) have no lawful immigration status in the United States, (2) have not attained 18 years of age, and (3) have no parent or legal guardian in the United States or no parent or legal guardian in the United States available to provide care and physical custody.

The Attorney General's April 2018 memorandum, also referred to as the "zero tolerance" policy, directed Department of Justice (DOJ) prosecutors to accept from DHS all referrals of improper entry offenses for criminal prosecution, to the extent practicable. According to DHS officials, in implementing the April 2018 memo, DHS's U.S. Customs and Border Protection (CBP) began referring a greater number of individuals apprehended at the border to DOJ for criminal prosecution, including parents who were apprehended with children. In these cases, referred parents were placed into U.S. Marshals Service custody and separated from their children because minors cannot remain with a parent who is arrested on criminal charges and detained by the U.S. Marshals Service. In cases where parents were referred to DOJ for criminal proceedings and separated from their children, DHS and HHS officials stated that they treated those children as UAC. In such cases, DHS transferred these children to the custody of HHS's Office of Refugee Resettlement (ORR), and ORR placed them in one of its shelter facilities, as is the standard procedure for UAC.

The President's executive order issued on June 20, 2018, directed, among other things, that the Secretary of Homeland Security maintain custody of alien families during any criminal improper entry or immigration proceedings involving their family members, to the extent possible. This order stated that the policy of the administration is to maintain family unity, including by detaining alien families together where appropriate. In addition, on June 26, 2018, a federal judge ruled in the Ms. L. v. ICE case that certain separated parents must be reunited with their minor children (referred to in this testimony statement as the "June 2018 court order"). In this case, the American Civil Liberties Union filed a federal lawsuit on behalf of certain parents (referred to as class members) who had been separated from their children. As of September 10, 2018, the government had identified 2,654 children of potential class members in the Ms. L. v. ICE case, which we discuss in greater detail later in this statement. As of January 31, 2019, this litigation was ongoing.

Care and Custody of Unaccompanied Alien Children (UAC)

Under the Homeland Security Act of 2002, responsibility for the apprehension, temporary detention, transfer, and repatriation of UAC is delegated to DHS, and responsibility for coordinating and implementing the placement and care of UAC is delegated to HHS's ORR. CBP's U.S.
Border Patrol (Border Patrol) and Office of Field Operations (OFO), as well as DHS's ICE, apprehend, process, temporarily detain, and care for UAC who enter the United States with no lawful immigration status. ICE's Office of Enforcement and Removal Operations (ERO) is generally responsible for transferring UAC, as appropriate, to ORR, or repatriating them to their countries of nationality or last habitual residence. Under the William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008 (TVPRA), UAC in the custody of any federal department or agency, including DHS, must be transferred to ORR within 72 hours after determining that they are UAC, except in exceptional circumstances. In addition, the 1997 Flores v. Reno Settlement Agreement (Flores Agreement) sets standards of care for UAC while in DHS or ORR custody, including, among other things, providing drinking water, food, and proper physical care and shelter for children. In 2015 and 2016, we reported on DHS's and HHS's care and custody of UAC, including the standard procedures that DHS follows to transfer UAC to ORR. ORR's UAC policy guide states that the agency requests certain information from DHS when DHS refers children to ORR, including, for example, how DHS determined the child was unaccompanied. Depending on which DHS component or office is referring the child to ORR, DHS may provide information on the child in an automated manner directly into ORR's UAC Portal—the official system of record for children in ORR's care—or via email.

ORR has cooperative agreements with residential care providers to house and care for UAC while they are in ORR custody. The aim is to provide housing and care in the least restrictive environment commensurate with the children's safety and emotional and physical needs. In addition, these care providers are responsible for identifying and assessing the suitability of potential sponsors—generally a parent or other relative in the country—who can care for the child after the child leaves ORR custody. Release to a sponsor does not grant UAC legal immigration status. Children are scheduled for removal proceedings in immigration courts to determine whether they will be ordered removed from the United States or granted immigration relief. Once a child arrives at a shelter, shelter staff typically conduct an intake assessment within 24 hours and then are to provide services such as health care and education. According to ORR's UAC policy guide, shelter staff are responsible for meeting with the child to begin identifying potential sponsors, which can include parents. To assess the suitability of potential sponsors, including parents, ORR care providers collect information from potential sponsors to establish and verify their relationship to the child. For example, the screening of potential sponsors includes various background checks. In June 2018, ORR implemented increased background check requirements that were outlined in an April 2018 memorandum of agreement with DHS. These changes required ORR staff to collect fingerprints from all potential sponsors, including parents, and from all adults in the potential sponsor's household, and to transmit the fingerprints to ICE to perform criminal and immigration status checks on ORR's behalf. ICE was to submit the results to ORR, and ORR used this information, along with information provided by, and interviews with, the potential sponsors, to assess their suitability.
However, in December 2018, ORR revised its background check policy to limit the criminal and immigration status checks conducted by ICE to the potential sponsor, unless concerns about other adult household members are raised via a public records check, there is a documented risk to the safety of the child, the child is particularly vulnerable, or the case is referred for a home study.

HHS and DHS Planning for Family Separations

According to the HHS and DHS officials we interviewed, the departments did not take specific steps in advance of the April 2018 memo to plan for family separations or a potential increase in the number of children who would be referred to ORR, because they did not have advance notice of the memo. Specifically, ORR, CBP, and ICE officials we interviewed stated that they became aware of the April 2018 memo when it was announced publicly. Though they did not receive advance notice of the April 2018 memo, ORR officials stated that they were aware that increased separations of parents and children were occurring prior to the April memo. According to ORR officials, the percentage of children referred to ORR who were known to have been separated from their parents rose more than tenfold from November 2016 to August 2017 (from 0.3 to 3.6 percent). In addition, the ORR shelter and field staff we interviewed at four ORR facilities in Arizona and Texas told us they started noticing an increase in the number of children separated from their parents in late 2017 and early 2018, prior to the April 2018 memo. The DHS officials we interviewed stated that, in some locations across the southwest border, there was an increase in the number of aliens CBP referred to DOJ for prosecution of immigration-related offenses after an Attorney General memo issued in April 2017. This memo prioritized enforcement of a number of criminal immigration-related offenses, including misdemeanor improper entry. In addition, CBP officials stated that there may have been an increase in children separated from non-parent relatives or from other adults fraudulently posing as the children's parents.

ORR officials said that in November 2017 they asked DHS officials to provide information about the increase in separated children. In response, DHS officials stated that DHS did not have an official policy to separate families, according to ORR officials. A few months prior to the April 2018 memo, ORR officials said they saw a continued increase in separated children in their care. ORR officials noted that they considered planning for continued increases in separated children, but HHS leadership advised ORR not to engage in such planning since DHS officials had said that DHS did not have an official policy of separating families.

From July to November 2017, the Border Patrol sector in El Paso, Texas, conducted an initiative to address an increase in apprehensions of families that sector officials had noted in early fiscal year 2017. Specifically, Border Patrol officials in the sector reached an agreement with the District of New Mexico U.S. Attorney's Office to refer more individuals who had been apprehended, including parents who arrived with minor children, for criminal prosecution. Prior to this initiative, the U.S. Attorney's Office in this district had placed limits on the number of referrals it would accept from Border Patrol for prosecution of immigration offenses. According to Border Patrol officials, under this initiative, the U.S.
Attorney’s Office agreed to accept all referrals from Border Patrol in the El Paso sector for individuals with violations of 8 U.S.C. § 1325 (improper entry by alien) and § 1326 (reentry of removed aliens), consistent with the Attorney General’s 2017 memo directing federal prosecutors to prioritize such prosecutions. For those parents placed into criminal custody, Border Patrol referred their children to ORR’s care as UAC. According to a Border Patrol report on the initiative, the El Paso sector processed approximately 1,800 individuals in families and 281 individuals in families were separated under this initiative. Border Patrol headquarters directed the sector to end this initiative in November 2017, and Border Patrol officials stated that there were no other similar local initiatives that occurred prior to the Attorney General’s 2018 memo. DHS and HHS Systems for Indicating When Children Were Separated from Parents When the April 2018 memo was released, there was no single database with easily extractable, reliable information on family separations. DHS and HHS subsequently updated their data systems in the spring and summer of 2018, but it is too soon to know the extent to which these changes, if fully implemented, will consistently indicate when children have been separated from the parents or will help reunify families, if appropriate. Specifically, prior to April 2018, CBP’s and ORR’s data systems did not include a designated field to indicate that a child was unaccompanied as a result of being separated from his or her parent, and ORR officials stated that such information was not always provided when children were transferred from DHS to HHS custody. According to agency officials, between April and August 2018, the agencies made changes to their data systems to help notate in their records when children are separated from parents. Regarding DHS, CBP’s Border Patrol and OFO made changes to their data systems to allow them to better indicate cases in which children were separated from their parents; however, ORR officials told us in September 2018, that they had been unaware that DHS had made these systems changes. According to Border Patrol officials, Border Patrol modified its system on April 19, 2018, to include yes/no check boxes to allow agents to indicate that a child was separated from their parent(s). However, Border Patrol officials told us that information on whether a child had been separated is not automatically included in the referral form sent to ORR. Rather, agents may indicate a separation in the referral notes sent electronically to ORR, but they are not required to do so, according to Border Patrol officials. While the changes to the system may make it easier for Border Patrol to identify children separated from their parents, ORR officials stated ORR may not receive information through this mechanism to help it identify or track separated children. Prior to this system modification, Border Patrol agents typically categorized a separated child as an unaccompanied child in its system and did not include information to indicate the child had been separated from a parent. CBP’s OFO, which encounters families presenting themselves at ports of entry, also modified its data system and issued guidance to its officers on June 29, 2018, to track children separated from their parents. OFO officials have access to the UAC Portal but typically email this information to ORR as part of the referral request. 
According to OFO officials, prior to that time, OFO designated children separated from their parents as unaccompanied. ORR updated the UAC Portal to include a check box for indicating that a child was separated from his or her parents. According to ORR officials, ORR made these changes on July 6, 2018, after the June 20 executive order and the June 2018 court order to reunify families. According to ORR officials, prior to July 6, 2018, the UAC Portal did not have a systematic way to indicate whether a child was designated as unaccompanied as a result of being separated from a parent at the border. The updates allow those Border Patrol agents with direct access to the UAC Portal to check this box, and Border Patrol issued guidance on July 5, 2018, directing its agents, as of July 6, 2018, to use the new indicator for separated children in the UAC Portal and to provide the parent's alien number in the UAC Portal when making referrals to ORR. However, ORR officials also said that DHS components with access to the UAC Portal are not yet using the new check box consistently. Staff at three of the four shelters we visited in Arizona and Texas in July and August of 2018 said that in most, but not all, cases during the spring of 2018, DHS indicated in the custody transfer information that a child had been separated. Staff at one shelter estimated that for approximately 5 percent of the separated children in its care, there was no information from DHS indicating parental separation. In these cases, shelter staff said they learned about the separation from the child during the shelter's intake assessment. Staff at the same shelter, which cares for children ages 0 to 4, noted that intake assessments for younger children differ from those for older children, as younger children are unable to provide detailed information on such issues as parental separation.

While the updates that OFO and ORR have made to their data systems are a positive step, they do not fully address the broader coordination issues we identified in our previous work. Specifically, we identified weaknesses in DHS's and HHS's process for the referral of UAC. In 2015, we reported that the interagency process to refer and transfer UAC from DHS to HHS was inefficient and vulnerable to errors because it relied on emails and manual data entry and because documented standard procedures, including defined roles and responsibilities, did not exist. To increase the efficiency and improve the accuracy of the interagency UAC referral and placement process, we recommended that the Secretaries of DHS and HHS jointly develop and implement a documented interagency process with clearly defined roles and responsibilities, as well as procedures to disseminate placement decisions, for all agencies involved in the referral and placement of UAC in HHS shelters. In response, DHS officials told us DHS delivered a Joint Concept of Operations between DHS and HHS to Congress on July 31, 2018, which provides field guidance on interagency policies, procedures, and guidelines related to the processing of UAC transferred from DHS to HHS. DHS submitted the Joint Concept of Operations to us on September 26, 2018, in response to our recommendation.
We are reviewing the extent to which the Joint Concept of Operations includes a documented interagency process with clearly defined roles and responsibilities, as well as procedures to disseminate placement decisions, for all agencies involved in the referral and placement of unaccompanied children, including those separated from parents at the border, in HHS shelters. Moreover, to fully address our recommendation, DHS and HHS should implement such interagency processes.

DHS and HHS Actions to Reunify Families in Response to the June 2018 Court Order

DHS and HHS took various actions in response to the June 26, 2018, court order to identify and reunify children separated from their parents. The June 2018 court order required the government to reunite class member parents with their children under 5 years of age within 14 days of the order and with their children age 5 and over within 30 days of the order. HHS officials told us that, prior to the June 2018 court order, there were no specific procedures to reunite children with parents from whom they were separated at the border. Rather, the agency used its standard procedures, developed to comply with the William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008 (TVPRA), to consider potential sponsors for unaccompanied children in its custody; if a parent was available to become a sponsor, reunification with that parent was a possible outcome.

DHS and HHS Efforts to Identify Potential Class Members. DHS and HHS officials told us that they generated the list of potential class members (that is, those parents of a separated child covered under the lawsuit) eligible for reunification per the June 2018 court order based on children who were in DHS or HHS custody on the date of the order. As a result, DHS and HHS officials told us, a parent of a separated child would only be a class member if his or her child was detained in DHS or HHS custody on June 26, 2018. After developing the class list, DHS and HHS officials told us, they next determined whether class members were eligible for reunification; a class member could be found ineligible if the parent was determined to be unfit or to present a danger to the child. Parents whose children were separated at the border but released by ORR to sponsors prior to the June 2018 court order were not considered class members, and according to HHS officials, the department was not obligated to reunite those children with the parent or parents from whom they were separated. Further, HHS officials told us that they do not know how many such children separated from parents at the border were released to sponsors prior to the order and that the court order does not require the department to know this information. Because there was no single database with information on family separations, HHS officials reported using three methods to determine which children in ORR's custody as of June 26, 2018, had been separated from parents at the border:

1. Data Reviewed by an Interagency Data Team. An interagency team of data scientists and analysts—led by HHS's Office of the Assistant Secretary for Preparedness and Response with participation from CBP, ICE, and ORR—used data and information provided by DHS and HHS to identify the locations of separated children and parents, according to HHS officials.

2. Case File Review.
HHS reported that more than 100 HHS staff reviewed about 12,000 electronic case files of all children in its care as of June 26, 2018, for indications of separation in specific sections of each child's case file, such as the phrases "zero tolerance," "separated from," and "family separation."

3. Review of Information Provided by Shelters. According to HHS officials, shelter staff were asked to provide lists of children in their care who were known to be separated from parents based on the shelter's records.

On the basis of its reviews, as of September 10, 2018, the government had identified 2,654 children of potential class members in the Ms. L. v. ICE case. Of the 2,654 children, 103 were age 0 to 4 and 2,551 were age 5 to 17. As previously discussed, the number of children of potential class members does not include those who were separated from parents but released to sponsors prior to the June 2018 court order or the more than 500 children who were reunified with parents by CBP in late June 2018, because these children were never transferred to ORR custody. As of September 10, 2018, 2,217 of the 2,654 identified children had been released from ORR custody, according to a joint status report filed in the Ms. L. v. ICE case. About 90 percent of the released children were reunited with the parent from whom they were separated, and the remaining children were released under other circumstances. Children released under other circumstances could include those released to another sponsor, such as a parent already in the United States, another relative, or an unrelated adult, or children who turned 18. Staff at one ORR facility we visited told us they planned to release some children under these circumstances. As of December 11, 2018, the government had identified additional possible separated children of potential class members, for a total of 2,816. It had released 2,657, and 159 remained in ORR custody. However, the government has also reported that 79 of the children it initially identified as separated had not been separated from a parent. Excluding those 79 children from the 2,816 total would bring the total number of separated children to 2,737.

Plan for Reunifying Children with Class Member Parents Within and Outside ICE's Custody. The process used to reunify separated children with their class member parents in the Ms. L. v. ICE case evolved over time based on multiple court hearings and orders, according to HHS officials. After the June 2018 court order, HHS officials said the agency planned to reunify children using a process similar to its standard procedures for placing unaccompanied children with sponsors. However, according to agency officials, the agency realized that it would be difficult to meet the court's reunification deadlines using its standard procedures and began developing a process for court approval that would expedite reunification for class members. As a result, from June 26, 2018, to July 10, 2018, the reunification process was refined and evolved iteratively based on court status conferences, according to HHS officials. ORR field and shelter staff we interviewed noted the impact of the continually changing reunification process; for example, staff at one shelter told us there were times when they would be following one process in the morning but a different one in the afternoon. On July 10, 2018, the court approved reunification procedures for the class members covered by the June 2018 court order.
In the July 10, 2018, order that outlined these procedures, the court noted that the standard procedures developed by ORR pursuant to the TVPRA were meant to address "a different situation, namely, what to do with alien children who were apprehended without their parents at the border or otherwise" and that the agency's standard procedures were not meant to apply to the situation presented in the Ms. L. v. ICE case, which involves parents and children who were apprehended together and then separated by government officials. The reunification procedures approved in the Ms. L. v. ICE case applied only to reunification of class members with their children and included determining (1) parentage and (2) whether the parent was fit to take care of the child or presented any danger to the child. Specifically:

1. Determining Parentage. Before July 10, 2018, to determine parentage for children ages 0 to 4, HHS officials said they initially used DNA swab testing instead of requiring documentation, such as birth certificates, stating that DNA swab testing was a prompt and efficient method for determining biological parentage in a significant number of cases. On July 10, 2018, the court approved the use of DNA testing "only when necessary to verify a legitimate, good-faith concern about parentage or to meet a reunification deadline." HHS officials told us that, at that point, to determine parentage, ORR relied on the determinations made by DHS when the family was separated and on information ORR shelter staff had already collected through assessments of the children in their care. Unless there were specific doubts about the relationship, ORR did not collect additional information to confirm parentage, according to HHS officials.

2. Determining Fitness and Danger. To reunify class members, HHS also followed the procedures approved by the court on July 10, 2018, for determining whether a parent is fit and whether a parent presents a danger to the child. Rather than requiring the parent and other adults living in the household to submit fingerprints to ORR, as potential sponsors of unaccompanied children were typically required to do, HHS used the fingerprints and criminal background check of the parent that DHS conducted when the individual was first taken into DHS custody. According to HHS officials, ORR personnel also reviewed each child's case file for any indication of a safety concern, such as allegations of abuse by the child. HHS did not require fingerprints of other adults living in the household where the parent and child would live. HHS also did not require parents to complete an ORR family reunification application, as potential sponsors of unaccompanied children were typically required to do.

The specific procedures for physical reunification varied depending on whether the parents were inside or outside of ICE custody. DHS and HHS took steps to coordinate their efforts to reunify children with parents who were in ICE custody but experienced challenges. Generally, for parents in ICE custody, DHS transported parents to a detention facility close to their child, and HHS transported the child to the same facility. At the facility, HHS transferred custody of the child to ICE for final reunification. HHS officials said that in some instances children had to wait for parents for unreasonably long amounts of time and parents were transported to the wrong facilities. Staff at one shelter told us that in one case they had to stay two nights in a hotel with the child before reunification could occur.
According to HHS officials, for families in which the parent was released into the interior of the United States, the reunification process involves ORR officials and shelter staff attempting to establish contact with the parent and determining whether the parent has "red flags" for parentage or child safety. These determinations are based on DHS-provided criminal background check summary information and a review of the child's UAC Portal records. In cases where no red flags are found, HHS transports the child to the parent or the parent picks the child up at the ORR shelter. For more information on DHS and HHS reunification procedures for class member parents inside and outside ICE custody, see GAO-19-163.

Chair DeGette, Ranking Member Guthrie, and Members of the Subcommittee, this concludes our prepared remarks. We would be happy to answer any questions that you may have.

GAO Contacts and Staff Acknowledgments

For further information regarding this testimony, please contact Kathryn A. Larin at (202) 512-7215 or larink@gao.gov or Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Kathryn Bernet (Assistant Director), Elizabeth Morrison (Assistant Director), David Barish (Analyst-in-Charge), Andrea Dawson, Jason Palmer, and Leslie Sarapu. In addition, key support was provided by Susan Aschoff, James Bennett, Sarah Cornetto, Michael Kniss, Sheila R. McCoy, Jean McSween, Jan Montgomery, Heidi Nielson, and Almeta Spencer.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

On April 6, 2018, the Attorney General issued a memorandum on criminal prosecutions of immigration offenses. According to HHS officials, this resulted in a considerable increase in the number of minor children whom DHS separated from their parents after the families attempted to cross the U.S. border illegally. On June 20, 2018, the President issued an executive order directing that alien families generally be detained together, and on June 26, 2018, a federal judge ordered the government to reunify separated families. DHS is responsible for the apprehension and transfer of UAC to HHS. HHS is responsible for coordinating UAC placement and care. This testimony discusses DHS and HHS (1) planning efforts related to the Attorney General's April 2018 memo, (2) systems for indicating that children were separated from parents, and (3) actions to reunify families in response to the June 2018 court order. It is based on a report GAO issued in October 2018. This testimony also includes updated data reported by the government on the number of children separated from their parents subject to the court's reunification order and the number of those children in ORR custody as of December 11, 2018.

What GAO Found

Department of Homeland Security (DHS) and Department of Health and Human Services (HHS) officials GAO interviewed said the agencies did not plan for the potential increase in the number of children separated from a parent or legal guardian as a result of the Attorney General's April 2018 "zero tolerance" memo because they were unaware of the memo in advance of its public release. The memo directed Department of Justice prosecutors to accept for criminal prosecution all referrals from DHS of offenses related to improper entry into the United States, to the extent practicable. As a result, parents were placed in criminal detention, and their children were placed in the custody of HHS's Office of Refugee Resettlement (ORR). DHS and ORR treated separated children as unaccompanied alien children (UAC)—those under 18 years old with no lawful immigration status and no parent or legal guardian in the United States available to provide care and physical custody. Prior to April 2018, DHS and HHS did not have a consistent way to indicate in their data systems which children and parents were separated at the border. In April and July 2018, U.S. Customs and Border Protection's Border Patrol and ORR updated their data systems to allow them to indicate whether a child was separated. However, it is too soon to know the extent to which these changes, if fully implemented, will consistently indicate when children have been separated from their parents or will help reunify families, if appropriate. In response to a June 26, 2018, court order to quickly reunify children separated from their parents, HHS determined how many children in its care were subject to the order and developed procedures for reunifying these families. As of September 2018, the government had identified 2,654 children in ORR custody who potentially met reunification criteria; this number does not include separated children released to sponsors prior to the June 2018 court order. On July 10, 2018, the court approved reunification procedures for the parents covered by the June 2018 court order. The July 10, 2018, order noted that ORR's standard procedures used to release UAC from its care to sponsors were not meant to apply in this circumstance, in which parents and children who were apprehended together were separated by government officials.
Since GAO's October 2018 report, the government identified additional children separated from parents subject to the court's reunification order and released additional children from its custody (see figure).
What GAO Recommends
GAO recommended in 2015 that DHS and HHS improve their process for transferring UAC from DHS to HHS custody. DHS and HHS concurred and have taken action, but have not fully implemented the recommendation.
The Federal Government Is on an Unsustainable Fiscal Path
By the end of fiscal year 2019, the federal debt held by the public had climbed to 79 percent of GDP. By comparison, such debt has averaged 46 percent of GDP annually since 1946. If current trends continue, debt as a share of GDP will exceed its 1946 historic high of 106 percent of GDP within 11 to 14 years. In 2050, it will be nearly twice that level and about four times its post-World War II average. Figure 1 shows that in GAO, CBO, and 2019 Financial Report projections, debt held by the public as a share of GDP grows substantially over time.
Spending Outlook Is Driven by Health Care and Net Interest on the Debt
Under GAO, CBO, and the 2019 Financial Report projections, spending for the major health and retirement programs grows more rapidly than GDP in coming decades. This is a consequence of both an aging population and projected continued increases in health care costs per beneficiary. Medicare spending is expected to exceed $1 trillion per year by fiscal year 2026, and Social Security spending already exceeds $1 trillion per year. However, according to the projections, these spending categories will eventually be overtaken by spending on net interest, which primarily consists of interest costs on the federal government’s debt held by the public. In recent years, persistently low interest rates have resulted in lower interest costs for the government than previously projected. Despite these low interest rates, spending on net interest grew from $263 billion in 2017 to $376 billion in 2019. That $376 billion, 8.4 percent of total federal spending, exceeded combined spending on agriculture, transportation, and veterans’ benefits and services (the sketch at the end of this discussion provides a back-of-envelope check of these figures). Going forward, both interest rates and the debt are projected to grow, which means spending on net interest is projected to grow faster than any other component of the budget. In 2032, spending on net interest is projected to exceed $1 trillion annually. Over the past 50 years, net interest costs have averaged 2 percent of GDP, but these costs are projected to increase to 7.2 percent of GDP by 2049. As figure 2 shows, we project that as a share of GDP, net interest spending will exceed Medicare spending in 2041, Social Security spending in 2044, and total discretionary spending in 2049. Interest costs will also depend in part on the outstanding mix of Treasury securities. The Department of the Treasury issues securities in a wide range of maturities to appeal to a broad range of investors and to support its goal of borrowing at the lowest cost over time. Treasury refinances maturing debt by issuing new debt in its place at the prevailing interest rate. At the end of fiscal year 2019, 61 percent of the outstanding amount of marketable Treasury securities held by the public (about $9.9 trillion) was scheduled to mature in the next 4 years. If interest rates rise, Treasury will have to refinance these securities at the higher rates, adding to the interest costs of the growing federal debt.
Action Is Needed to Address an Unsustainable Fiscal Path
Impending financial challenges for major programs and fiscal risks are both straining the federal budget and contributing to the growing debt. Sustaining key programs will require changes (see fig. 3). The President’s Budget, CBO, and the Chair of the Board of Governors of the Federal Reserve System all make it clear that rising federal debt could have long-term consequences for the economy.
For example, it could: constrain Congress’s ability to support the economy or address other national priorities, restrain private investment and thereby reduce productivity and overall growth, and erode confidence in the U.S. dollar. In addition, it may increase the risk of a fiscal crisis, in which investors would lose confidence in the U.S. government’s financial position, and interest rates on Treasury securities would increase abruptly. To change the long-term fiscal path, policymakers will need to consider policy changes to the entire range of federal activities, both revenue (including tax expenditures) and spending (entitlement programs, other mandatory spending, and discretionary spending). As Congress considers changes in revenue and spending policies to improve the federal government’s long-term fiscal path, it will also need to consider other approaches for managing the level of debt. As currently structured, the debt limit is a legal limit on the total amount of federal debt that can be outstanding at one time. The debt limit does not restrict Congress’s ability to pass spending and revenue legislation that affects the level of debt, nor does it otherwise constrain fiscal policy. Without legislation to suspend or raise the debt limit, Treasury cannot continue issuing debt to finance the decisions already enacted by Congress and the President. We have reported on the negative impacts of uncertainty around the debt limit, which include (1) increased Treasury borrowing costs, (2) decreased demand for Treasury securities, and (3) constrained Treasury cash management. We have reported numerous times that the full faith and credit of the United States must be preserved. We have also recommended that Congress consider other approaches to the current debt limit to avoid seriously disrupting the Treasury market and increasing borrowing costs and to allow it to better manage the federal government’s level of debt. A number of bills have been introduced in this Congress to address this issue. The Senate Budget Committee’s proposal to reform the Congressional budget process would automatically adjust the debt limit to conform to levels established in the budget resolution. In contrast to the debt limit, fiscal rules can support efforts to achieve fiscal sustainability by imposing numerical limits or targets on the budget to guide fiscal policy. Fiscal rules are intended to influence decisions about spending and revenue as they are made. The Senate Budget Committee’s proposal to reform the Congressional budget process is an example of one such approach. This legislation would specify target ratios for debt as a share of GDP and track legislation against that target. As Congress continues to consider options, two key points should be emphasized. An agreed-upon goal can help policymakers justify and frame their choices. With that in mind, a fiscal target that establishes a common goal for controlling the size of the federal debt relative to the economy—as well as well-designed rules that put the federal government on a path to achieve that target—could form part of a long-term fiscal plan to put the government on a sustainable fiscal path. The longer action is delayed, the greater and more drastic the changes will have to be, placing an additional burden on future generations.
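As a back-of-envelope check of the net interest figures cited earlier in this statement, the sketch below recovers the implied total fiscal year 2019 spending from the 8.4 percent share and the average annual growth of net interest between 2017 and 2019. This is our illustrative arithmetic only, written in Python for clarity; it is not part of GAO's, CBO's, or Treasury's projection models.

    # Back-of-envelope checks on the net interest figures cited earlier
    # (dollar amounts in billions; the 8.4 percent share is for fiscal year 2019).
    net_interest_2017 = 263
    net_interest_2019 = 376
    share_of_spending = 0.084

    # Implied total federal spending in fiscal year 2019 (~$4.5 trillion)
    total_spending = net_interest_2019 / share_of_spending
    print(round(total_spending))   # prints 4476

    # Implied average annual growth of net interest, 2017 to 2019
    growth = (net_interest_2019 / net_interest_2017) ** 0.5 - 1
    print(round(growth * 100, 1))  # prints 19.6

Even under this simple compounding assumption, net interest grew by nearly a fifth per year over the period, consistent with the projection that it becomes the fastest-growing component of the budget.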
While changes in spending and revenue to ensure long-term fiscal sustainability require legislative actions to alter fiscal policies, executive agencies can also take actions to contribute toward a sustainable fiscal future. Although executive actions alone cannot put the U.S. government on a sustainable fiscal path, it is important for agencies to act as stewards of federal resources. These actions include reducing improper payments, which agencies estimate totaled $175 billion in fiscal year 2019; addressing the $381 billion annual net tax gap; better managing fragmentation, overlap, and duplication across the federal government; and improving information on federal programs and fiscal operations to aid agency decision-making. Chairman Enzi, Ranking Member Sanders, and Members of the Committee, this completes our prepared statement. We would be pleased to respond to any questions that you may have.
GAO Contacts
For further information on this testimony, please contact Susan J. Irving, Senior Advisor to the Comptroller General, Debt and Fiscal Issues, who may be reached at (202) 512-6806 or IrvingS@gao.gov; Robert F. Dacey, Chief Accountant, who may be reached at (202) 512-3406 or daceyr@gao.gov; or Dawn B. Simpson, Director, Financial Management and Assurance, who may be reached at (202) 512-3406 or simpsondb@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement.
What GAO Found
This testimony summarizes information contained in GAO's March 2020 report, entitled The Nation’s Fiscal Health: Action Is Needed to Address the Federal Government’s Fiscal Future (GAO-20-403SP). Long-term fiscal projections by GAO and the Congressional Budget Office (CBO), as well as those in the 2019 Financial Report of the U.S. Government (2019 Financial Report), all show that, absent policy changes, the federal government continues to face an unsustainable long-term fiscal path. Although the assumptions in each of these projections vary somewhat, all result in the same conclusion: over the long term, the imbalance between spending and revenue that is built into current law and policy will lead to (1) deficits exceeding $1 trillion each year beginning in fiscal year 2020 and (2) both the annual deficit and the cumulative total debt held by the public continuing to grow as shares of gross domestic product (GDP). This situation—in which debt grows faster than GDP—means the current federal fiscal path is unsustainable. To change the long-term fiscal path, policymakers will need to consider policy changes to the entire range of federal activities, both revenue (including tax expenditures) and spending (entitlement programs, other mandatory spending, and discretionary spending). As Congress considers changes in revenue and spending policies to improve the federal government’s long-term fiscal path, it will also need to consider other approaches for managing the level of debt.
Some Self-Insured Operator Bankruptcies Shifted $865 Million in Estimated Liability to the Trust Fund, but Commercial Insurance Coverage Can Help Limit Trust Fund Exposure
Of the eight coal mine operator bankruptcies we identified, three resulted in a transfer of estimated benefit liability from the coal operator to the Trust Fund and five did not, according to DOL. Figure 1 shows how many operators were self-insured or commercially insured at the time of bankruptcy, and whether responsibility for benefits was shifted from the bankrupt operator to the Trust Fund. Federal law generally requires coal mine operators to secure their black lung benefit liability. A self-insured coal mine operator assumes the financial responsibility for providing black lung benefits to its eligible employees by paying claims as they are incurred. Operators are allowed to self-insure if they meet certain DOL conditions. For instance, operators applying to self-insure must obtain collateral in the form of an indemnity bond, deposit or trust, or letter of credit in an amount deemed necessary and sufficient by DOL to secure their liability. Operators that do not self-insure are generally required to obtain coverage from commercial insurance companies, state workers’ compensation insurance funds, or other entities authorized under state law to insure workers’ compensation. From 2014 through 2016, three self-insured coal mine operator bankruptcies resulted in a transfer of $865 million of benefit liabilities from the coal operators to the Trust Fund, according to DOL estimates (see table 1). DOL estimates for how these bankruptcies will affect the Trust Fund have increased considerably from what DOL had previously reported. In June 2019, we reported that DOL estimated that between $313 million and $325 million in benefit liabilities would transfer to the Trust Fund as a result of these bankruptcies. In January 2020, however, DOL provided updated estimates stating that $865 million in benefit liabilities would transfer to the Trust Fund as a result of these bankruptcies. According to DOL, its estimates increased, among other reasons, to account for higher black lung benefit award rates that occurred from fiscal years 2016 through 2019, and higher medical treatment cost inflation in recent years. Additionally, DOL’s prior estimate for the Patriot Coal (Patriot) bankruptcy did not account for future claims and their effect on the Trust Fund. The amount of collateral DOL required from these operators to self-insure did not fully cover their estimated benefit liabilities. When this occurs, benefit liabilities in excess of the collateral can be transferred to the Trust Fund. For example, the collateral DOL required from Alpha Natural Resources (Alpha) was about $12 million, and approximately $494 million of estimated benefit liability transferred to the Trust Fund, according to DOL. The three other self-insured coal mine operator bankruptcies we identified did not affect the Trust Fund. Specifically, Arch Coal, Peabody Energy, and Walter Energy were also self-insured operators, but DOL officials said that their federal black lung benefit liabilities were assumed by a reorganized company or by a purchaser, and therefore did not transfer to the Trust Fund. Insurance contracts or policies to secure operators’ benefit liabilities are required by law to include a provision that insolvency or bankruptcy of an operator does not release the insurer from the obligation to make benefit payments.
Additionally, state insurance regulation, insurer underwriting, risk management practices, and state guaranty funds help to protect the Trust Fund from having to assume responsibility for paying black lung benefits on behalf of bankrupt coal operators. Thus, by being commercially insured, the two operators we identified that filed for bankruptcy between 2014 and 2016—Energy Future Holdings and Xinergy Ltd—did not affect the Trust Fund, according to DOL (see fig. 1). Since 2016, several other self-insured operators have also filed for bankruptcy, according to DOL officials, including Cambrian Coal, Cloud Peak Energy, Murray Energy, and Westmoreland Coal. DOL officials said that about $17.4 million in estimated black lung benefit liability will transfer to the Trust Fund as a result of Westmoreland Coal’s bankruptcy. Given the uncertainty of the bankruptcy process, however, DOL officials said that they could not speculate on whether or how these other bankruptcies may affect the Trust Fund.
DOL’s Limited Oversight Has Exposed the Trust Fund to Financial Risk, and Its New Self-Insurance Process Lacks Enforcement Procedures
In overseeing coal mine operator self-insurance in the past, DOL did not estimate future benefit liability when setting collateral or regularly review operators to monitor their changing financial conditions. DOL regulations require that collateral be obtained from operators in an amount deemed necessary and sufficient to secure the payment of the operators’ liability. To determine collateral amounts under the former process, agency procedures stated that an operator’s net worth be assessed by reviewing, among other factors, the operator’s audited financial statements and black lung claims information. The amount of collateral was to be equal to 3, 5, or 10 years of the operator’s annual black lung benefit payments at the time of the operator’s self-insurance application, depending on its net worth. Specifically, if net worth was $1 billion or greater, agency procedures set collateral equal to 3 years of benefit payments. If net worth ranged from $500 million to $1 billion, collateral was equal to 5 years of benefit payments. If net worth ranged from $10 million to $500 million, DOL set collateral equal to 10 years of benefit payments. Agency procedures did not permit operators with net worth less than $10 million to self-insure. (A short sketch at the end of this section illustrates this formula.) DOL’s former process for determining collateral did not routinely consider potential future claims for which an operator could be responsible. The agency periodically reauthorized coal operators to self-insure by reviewing an operator’s most recent audited financial statement and claims information, among other things. DOL prepared memos documenting these reviews, and communicated with coal operators about whether their financial circumstances warranted increasing or decreasing their collateral. Regulations state that DOL may adjust the amount of collateral required from self-insured operators when experience or changed conditions so warrant, but DOL did not conduct regular monitoring of self-insured operators. In reviewing the most recent reauthorization memos for each of the self-insured operators, we found that while some of these operators had been reauthorized more recently, others had not been reauthorized in decades. One operator in particular had not been reauthorized since 1988.
There were no written procedures that specified how often reauthorizations should occur after an operator’s initial 18-month reauthorization. DOL has other tools available to mitigate financial losses to the Trust Fund. These include revoking an operator’s ability to self-insure; fining mine operators for operating without insurance; and placing liens on operator assets. Based on our review of agency documentation, however, we found instances when officials did not use these tools to protect the Trust Fund, or were hindered from doing so because of an operator’s ongoing appeal or bankruptcy. James River. In September 2001, DOL required $5 million in additional collateral from James River Coal (James River), which would have increased its collateral from $0.4 million to $5.4 million. Although DOL did not receive the additional collateral, it did not revoke the operator’s authority to self-insure, which is a potential option under agency regulations. Further, DOL had not reauthorized James River at any point from August 2001 until it filed for bankruptcy in April 2014. If James River’s ability to self-insure had been revoked, DOL could potentially have prevented the Trust Fund from being responsible for claims based on a miner’s employment from 2001 through 2016, when James River liquidated. Additionally, if the operator had been unable to obtain commercial insurance, the agency could potentially have fined the operator for each day it operated without insurance. Instead, no action was taken during these years, and estimated benefit liability of $141 million was shifted to the Trust Fund, according to DOL. DOL officials stated that they do not have records explaining why James River did not provide the additional collateral or why they did not revoke its authority to self-insure. Patriot. In August 2014, DOL required an additional $65 million in collateral from Patriot, increasing its collateral from $15 million to $80 million. Patriot appealed this decision and, in the 8 months that followed before Patriot filed for bankruptcy in May 2015, DOL did not obtain the additional collateral or revoke Patriot’s ability to self-insure because the appeal was still pending. DOL officials said they would not typically revoke an operator’s authority to self-insure during an ongoing appeal. As a result, DOL was hindered from using this enforcement tool. Liens on operator assets can be an effective tool to protect the Trust Fund if an operator defaults on its benefit liability, but DOL officials said they are hindered from using this tool if an operator files for bankruptcy. DOL can place a lien on a coal operator’s assets under federal law if the operator refuses the demand to pay the black lung benefits for which it is liable. In the event of bankruptcy or insolvency, federal law states that the lien imposed shall be treated in the same manner as a lien for taxes due and owing to the United States under certain laws. However, DOL officials said that operators rarely stop paying benefits until after they file for bankruptcy. Once a bankruptcy occurs, DOL officials said that they are generally prevented by the court from placing a lien and taking an operator’s assets in lieu of payment of current and future benefit liability. Under bankruptcy law, DOL officials said that they have no special status over other creditors with outstanding financial claims. DOL officials said that obtaining sufficient collateral is a better way to protect the Trust Fund.
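To make the former collateral formula concrete, the short sketch below encodes the net-worth tiers described above. This is our illustration only, written in Python for clarity; the operator figures are hypothetical, and the code is not DOL's actual system.

    def former_collateral(net_worth_millions, annual_benefit_payments_millions):
        # Illustrative sketch of DOL's former collateral formula (dollar
        # amounts in millions). Collateral equaled 3, 5, or 10 years of an
        # operator's annual black lung benefit payments, depending on net
        # worth; operators with net worth under $10 million could not
        # self-insure.
        if net_worth_millions >= 1000:    # $1 billion or greater
            years = 3
        elif net_worth_millions >= 500:   # $500 million to $1 billion
            years = 5
        elif net_worth_millions >= 10:    # $10 million to $500 million
            years = 10
        else:
            raise ValueError("Net worth under $10 million: not permitted to self-insure")
        return years * annual_benefit_payments_millions

    # Hypothetical operator: $600 million net worth, $4 million per year in benefits
    print(former_collateral(600, 4))  # prints 20, i.e., $20 million in collateral

Note that this formula keys collateral to recent benefit payments rather than to an actuarial estimate of current and future liability, which is the gap the new process described below is meant to close.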
DOL’s New Self-Insurance Process May Help Address Problems, but Key Enforcement Procedures Are Needed
In July 2019, DOL began implementing a new process for coal mine operator self-insurance that may help to address some past deficiencies, if implemented effectively. Among other things, DOL will require operators to periodically submit financial and claims information, including an actuarial estimate of the operator’s current and future benefit liability. DOL plans to use this information to assess the insolvency risk of each operator. Depending on the results of this analysis, DOL plans to categorize the risk level of each applicant as low, medium, or high. DOL will then set the amount of collateral required to self-insure by linking the operator’s risk category to a corresponding percentage of the operator’s actuarial estimated benefit liability. DOL policies state that it would require a high-risk operator to secure with collateral 90 percent of estimated benefit liability, a medium-risk operator to secure 45 percent, and a low-risk operator to secure 15 percent. However, in February 2020, DOL officials said they plan to revise these percentages to 100 percent, 85 percent, and 70 percent for high-risk, medium-risk, and low-risk operators, respectively (the sketch below illustrates this calculation). Coal mine operators that are already authorized to self-insure will be required to submit, among other things, an annual renewal application. DOL plans to use this information to update its insolvency risk analysis. If an operator’s risk category changes (e.g., from low- to medium-risk), DOL plans to send a form to the operator requiring an additional amount or type of collateral. Upon receiving the completed form, and proof that the collateral has been obtained, DOL stated that it will notify the operator that its authority to self-insure has been reauthorized. DOL’s new self-insurance process made important changes but overlooked other key internal control improvements that are needed to protect the financial interests of the Trust Fund. DOL’s new requirements for setting collateral and for the more frequent review of self-insured operators are key components of internal controls, which call for agency management to implement control activities through policy. However, DOL’s new self-insurance procedures do not specify (1) the duration of an operator’s self-insurance authority, (2) the time frames for submitting renewal applications and supporting documentation, and (3) the conditions under which an operator’s self-insurance authority would not be renewed. Our report recommends that DOL implement procedures for coal mine operator self-insurance renewal that clarify how long an operator is authorized to self-insure; when an operator must submit its renewal application and supporting documentation; and the conditions under which an operator’s self-insurance authority would not be renewed. DOL agreed with this recommendation and stated that it will ensure that letters granting or renewing self-insurance authority inform operators that their authorization expires in one year and that they must submit renewal information three months in advance of the expiration date. DOL staff are hindered from taking enforcement action during an operator’s ongoing appeal, as previously mentioned. DOL policies state that an operator may request reconsideration if its self-insurance application has been denied or if it believes the collateral required by DOL is too high to secure its benefit liability.
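Such reconsideration disputes turn on the collateral calculation itself. As a companion to the earlier sketch, the fragment below encodes the risk-based shares under the new process, using the revised percentages DOL described in February 2020 (the July 2019 shares appear in a comment). Again, this is a hypothetical illustration of the rule as reported, not DOL's actual system.

    # Revised shares described by DOL in February 2020; the July 2019
    # policy used 0.90 / 0.45 / 0.15 for high / medium / low risk.
    COLLATERAL_SHARE = {"high": 1.00, "medium": 0.85, "low": 0.70}

    def new_collateral(risk_category, estimated_liability_millions):
        # Collateral = risk-based share of the operator's actuarial
        # estimate of current and future benefit liability ($ millions).
        return COLLATERAL_SHARE[risk_category] * estimated_liability_millions

    # Hypothetical medium-risk operator with a $200 million actuarial estimate
    print(new_collateral("medium", 200))  # prints 170.0, i.e., $170 million

Unlike the former formula, the collateral base here is the operator's full estimated current and future liability, so even a low-risk operator secures a substantial share of what the Trust Fund would otherwise absorb.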
However, DOL lacks appeal procedures that specify, among other things, the length of time that operators have to submit supporting information. Further, DOL does not specify a goal for how long its appeals decisions should take. For example, in October 2015, DOL recommended revoking Murray Energy’s (Murray) authority to self-insure due to deteriorating financial conditions. Murray appealed this decision, and DOL officials said they postponed responding to the appeal until their new self-insurance process was implemented. However, Murray filed for bankruptcy in October 2019, and DOL had not revoked its authority to self-insure or requested additional collateral because Murray’s appeal was still pending and DOL was still evaluating how much collateral it would require from the operator under its new self-insurance process. Our report recommends that DOL develop and implement procedures for self-insured coal mine operator appeals that identify time frames for self-insured operators to submit documentation supporting their appeals and that identify a goal for how much time DOL should take to make appeals decisions. DOL agreed with this recommendation and stated that it will ensure that letters denying self-insurance inform operators that they have a 30-day appeal period (limited to one extension), and that DOL has set a goal of resolving all appeals within 90 days of the denial letter.
Commercial Insurance Oversight Improvements Are Needed
We found that DOL does not monitor the coal mine operators that do not self-insure, and thus must commercially insure their federal black lung liabilities, to make certain that they maintain adequate and continuous coverage as required by law. In the absence of effective DOL monitoring, we evaluated the potential risk that uninsured operators could pose to the Trust Fund. Specifically, in examining the 13 largest coal mine operators that do not self-insure, we found that some insurers erred in reporting black lung policies and, in one instance, an operator did not have adequate coverage. Our review of DOL data identified six operators (parent or subsidiary) that appeared not to be insured for the entire 3-year period from 2016 through 2018. When we discussed our findings with DOL, agency officials had to research each operator individually and in some cases contact the operator or its insurer to find out whether it had been covered. DOL concluded that these entities were insured. However, the insurers had not properly reported the federal black lung endorsement on new policies or subsequent renewals, in addition to other reporting issues. One of these six operators had also inadvertently failed to maintain adequate commercial coverage for its mining operations in Texas and had not self-insured those operations. In this instance, the operator obtained an excess loss policy that pays claims only once they exceed a high threshold and, therefore, is not sufficient by itself to secure the payment of the operator’s benefit liability. Designing processes to achieve agency objectives and respond to risks is a principle of effective internal controls. Without a process to monitor operator compliance with program insurance requirements, DOL risks not identifying a lapse or cancellation of operator coverage. This could result in the Trust Fund having to assume responsibility for paying benefits that would otherwise have been paid by an insurer.
Our report recommends that DOL develop and implement a process to monitor operator compliance with commercial insurance requirements and periodically evaluate the effectiveness of this process. DOL agreed with this recommendation and stated that it will modify existing computer systems to identify lapses or cancellations of commercial insurance coverage, and require operators identified as having lapsed or cancelled coverage to obtain or provide proof of coverage within 30 days. Chairwoman Adams, Ranking Member Byrne, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff have any questions concerning this testimony, please contact Cindy Brown Barnes at (202) 512-7215 or brownbarnesc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Alicia Puente Cackley (Director), Blake Ainsworth (Assistant Director), Patrick Ward (Assistant Director), Justin Dunleavy (Analyst-in-Charge), Alex Galuten, Rosemary Torres Lerma, Olivia Lopez, Scott McNulty, and Almeta Spencer.
Why GAO Did This Study
In May 2018, GAO reported that the Trust Fund, which pays disability benefits to certain coal miners, faced financial challenges. The Trust Fund has borrowed from the U.S. Treasury's general fund almost every year since 1979 to make needed expenditures. GAO's June 2019 testimony included preliminary observations that coal operator bankruptcies were further straining Trust Fund finances because, in some cases, benefit responsibility was transferred to the Trust Fund. This testimony is based on GAO's report being released today and describes (1) how coal mine operator bankruptcies have affected the Trust Fund, and (2) how DOL managed coal mine operator insurance to limit financial risk to the Trust Fund. In producing this report, GAO identified coal operators that filed for bankruptcy from 2014 through 2016. GAO analyzed information on commercially insured and self-insured coal operators, and examined workers' compensation insurance practices in four of the nation's top five coal-producing states. GAO also interviewed DOL officials, coal mine operators, and insurance company representatives, among others.
What GAO Found
Coal mine operator bankruptcies have led to the transfer of about $865 million in estimated benefit responsibility to the federal government's Black Lung Disability Trust Fund (Trust Fund), according to DOL estimates. The Trust Fund pays benefits when no responsible operator is identified, or when the liable operator does not pay. GAO previously testified in June 2019 that it had identified three bankrupt, self-insured operators for which benefit responsibility was transferred to the Trust Fund. Since that time, DOL's estimate of the transferred benefit responsibility has grown—from a prior range of $313 million to $325 million to the more recent $865 million estimate provided to GAO in January 2020. According to DOL, this escalation was due, in part, to recent increases in black lung benefit award rates and higher medical treatment costs, and to an underestimate of one company's (Patriot Coal) future benefit claims. DOL's limited oversight of coal mine operator insurance has exposed the Trust Fund to financial risk, though recent changes, if implemented effectively, can help address these risks. In overseeing self-insurance in the past, DOL did not: estimate future benefit liability when setting the amount of collateral required to self-insure; regularly review operators to assess whether the required amount of collateral should change; or always take action to protect the Trust Fund by revoking an operator's ability to self-insure as appropriate. In July 2019, DOL began implementing a new self-insurance process that could help address past deficiencies in estimating collateral and regularly reviewing self-insured operators. However, DOL's new process still lacks procedures for its planned annual renewal of self-insured operators and for resolving coal operator appeals should operators dispute DOL collateral requirements. This could hinder DOL from revoking operators' ability to self-insure should they not comply with DOL requirements. Further, DOL does not monitor operators that do not self-insure to ensure that they maintain adequate and continuous commercial coverage, as appropriate. As a result, the Trust Fund may in some instances assume responsibility for paying benefits that otherwise would have been paid by insurers.
What GAO Recommends
GAO made three recommendations to DOL to establish procedures for self-insurance renewals and coal operator appeals, and to develop a process to monitor whether commercially insured operators maintain adequate and continuous coverage. DOL agreed with these recommendations.
Background
Since 1990, generally every 2 years at the start of a new Congress, we call attention to agencies and program areas that are high risk due to their vulnerability to mismanagement or that are most in need of transformation. Our high-risk program is intended to help inform the congressional oversight agenda and to improve government performance. Since 1990, a total of 62 different areas have appeared on the High-Risk List. Of these, 26 areas have been removed, and 2 areas have been consolidated. On average, the high-risk areas that were removed from the list had been on it for 9 years after they were initially added. Our experience with the High-Risk List over the past 29 years has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency leaders grounded in the five criteria for removing high-risk designations, which we reported on in November 2000. When legislative and agency actions, including those in response to our recommendations, result in our finding significant progress toward resolving a high-risk problem, we will remove the high-risk designation. However, implementing our recommendations alone will not result in the removal of the designation, because the condition that led to the recommendations is symptomatic of systemic management weaknesses. In cases in which we remove the high-risk designation, we continue to closely monitor the areas. If significant problems again arise, we will consider reapplying the high-risk designation. The five criteria for removing high-risk designations are as follows: Leadership commitment. Demonstrated strong commitment and top leadership support to address the risks. Capacity. Agency has the capacity (i.e., people and other resources) to resolve the risk(s). Action plan. A corrective action plan that defines the root causes, identifies solutions, and provides for substantially completing corrective measures in the near term, including steps necessary to implement solutions we recommended. Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures. Demonstrated progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area. These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, and satisfying all of the criteria is central to removal from the list. Figure 1 shows the five criteria for removal for a designated high-risk area and examples of agency actions leading to progress toward removal. Importantly, the actions listed are not “stand-alone” efforts taken in isolation from other actions to address high-risk issues. That is, actions taken under one criterion may be important to meeting other criteria as well. For example, top leadership can demonstrate its commitment by establishing a corrective action plan, including long-term priorities and goals to address the high-risk issue and by using data to gauge progress—actions that are also vital to addressing the action plan and monitoring criteria. When an agency meets all five of these criteria, we can remove the agency from the High-Risk List. We rate agency progress on the criteria using the following definitions: Met. Actions have been taken that meet the criterion. There are no significant actions that need to be taken to further address this criterion. Partially met.
Some, but not all, actions necessary to meet the criterion have been taken. Not met. Few, if any, actions toward meeting the criterion have been taken.
Agencies Made Some Progress Addressing the Management Weaknesses That Led to the 2017 High-Risk Designation
Officials from Indian Affairs, BIE, BIA, and IHS expressed their commitment to addressing the issues that led to the high-risk designation for federal management of programs that serve tribes and their members. Since we last testified before this committee on June 13, 2018, we met with agency leaders and worked with each agency to identify actions the agencies took or plan to take to address the concerns that contributed to the designation. We determined that Indian Affairs, BIE, BIA, and IHS demonstrated some progress to partially meet each of the criteria for removing a high-risk designation. However, additional progress is needed for the agencies to fully address the criteria and related management weaknesses.
Overall Rating for Improving Federal Management of Programs That Serve Tribes and Their Members
As we reported in the March 2019 high-risk report, when we applied the five criteria for High-Risk List removal to each of the three segments—education, energy, and health care—we determined that Indian Affairs, BIE, BIA, and IHS have each demonstrated some progress. Overall, the agencies have partially met the leadership commitment, capacity, action plan, monitoring, and demonstrated progress criteria for the education, health care, and energy areas. However, the agencies continue to face challenges, particularly in retaining permanent leadership and a sufficient workforce. The following is a summary of the progress that Indian Affairs, BIE, BIA, and IHS have made in addressing the five criteria for removal from the High-Risk List.
Leadership Commitment
To meet the leadership commitment criterion for removal of a high-risk designation, an agency needs to have demonstrated strong commitment and top leadership support to address management weaknesses. The following examples show actions Indian Affairs, BIE, BIA, and IHS took to partially meet the leadership commitment criterion. Education. Indian Affairs’ leaders have demonstrated commitment to addressing key weaknesses in the management of BIE schools in several ways. For example, the BIE Director formed an internal working group, convened meetings with other senior leaders within Indian Affairs, and publicly stated that his agency is committed to ensuring implementation of our recommendations on Indian education. In addition, the BIE Director and other Indian Affairs leaders and senior managers have met with us frequently to discuss outstanding recommendations, actions they have taken to address these recommendations, and additional actions they could take. We also met with the new Assistant Secretary-Indian Affairs, who expressed her commitment to supporting the agency’s efforts to address weaknesses in the management of BIE schools. However, it is important that Indian Affairs leaders be able to sustain this level of commitment to solving problems in Indian education. Since 2012, there have been seven Assistant Secretaries-Indian Affairs and five BIE Directors. There has also been leadership turnover in other key offices responsible for implementing our recommendations on Indian education. We have previously reported that leadership turnover hampered Indian Affairs’ efforts to make improvements to Indian education.
We believe that ensuring stable leadership and a sustained focus on needed changes is vital to the successful management of BIE schools. Energy. BIA officials demonstrated leadership commitment by, among other things, meeting with us to discuss the agency’s progress in addressing our recommendations. In June 2018, a permanent Assistant Secretary for Indian Affairs was confirmed. This action provided an opportunity to improve Indian Affairs’ oversight of federal actions associated with energy development. According to the BIA Acting Director and the Acting Director for Trust Services, BIA held a number of meetings with the Assistant Secretary to discuss agency action plans for our recommendations. However, BIA does not have a permanent Director, and BIA’s Office of Trust Services—which has significant responsibility over Indian energy activities—does not have a permanent Director or Deputy Director. We have seen turnover in these leadership positions as officials have been brought in to temporarily fill these roles. As officials are brought in temporarily, previously identified plans and time frames for completing some activities have changed, and BIA has found itself starting over on the process to identify or implement corrective actions. Health Care. IHS officials demonstrated leadership commitment by regularly meeting with us to discuss the agency’s progress in addressing our recommendations. In addition, IHS has chartered a policy advisory council that will focus on issues related to strategic direction, recommended policy, and organizational adjustments. According to IHS, this advisory council will, among other things, serve as a liaison among IHS leadership for issues involving strategic direction and policy, as well as monitor and facilitate related policy workgroups. However, IHS still does not have permanent leadership—including a Director of IHS—which is necessary for the agency to demonstrate its commitment to improvement. Additionally, since 2012, there have been five IHS Acting Directors, and there has been leadership turnover in other key positions, such as area directors. To fully meet the leadership commitment criterion, all agencies will need, among other things, stable, permanent leadership that has assigned the tasks needed to address weaknesses and that holds those assigned accountable for progress. For a timeline of senior leadership turnover in Indian Affairs, BIE, BIA, and IHS from 2012 through March 2019, see Figure 3.
Capacity
To meet the capacity criterion, an agency needs to demonstrate that it has the capacity (i.e., people and other resources) to resolve its management weaknesses. Indian Affairs, BIE, BIA, and IHS each made some progress in identifying capacity and resources to implement some of our recommendations, but BIE and IHS continue to face significant workforce challenges. The following examples show actions Indian Affairs, BIE, BIA, and IHS took to partially meet the capacity criterion. Education. BIE and other Indian Affairs offices that support BIE schools have made some progress in demonstrating capacity to address risks to Indian education. For example, BIE hired a full-time program analyst to coordinate its working group and help oversee the implementation of our recommendations on Indian education. This official has played a key role in coordinating the agency’s implementation efforts and has provided us with regular updates on the status of these efforts.
BIE has also conducted hiring in various offices in recent years as part of a 2014 Secretarial Order to reorganize the bureau. For example, it has hired school safety officers and personnel in offices supporting the oversight of school spending. However, about 50 percent of all BIE positions have not been filled, including new positions that have been added as a result of the agency’s restructuring, according to recent BIE documentation. Moreover, the agency reported that it has not filled the position of Chief Academic Officer, a top-level BIE manager responsible for providing leadership and direction to BIE’s academic programs. Furthermore, BIE has not completed a strategic workforce plan to address staffing and training gaps with key staff, which we previously recommended. Such a plan is important to allow BIE and other Indian Affairs offices to better understand workforce needs and leverage resources to meet them. In February 2019, BIE drafted a strategic workforce plan and reported it is currently gathering feedback on the plan from internal stakeholders. BIE officials indicated they are planning to finalize and implement the plan in 2019. Energy. In November 2016, we recommended that BIA establish a documented process for assessing the workforce at its agency offices. BIA has taken a number of actions, such as conducting an internal survey to identify general workforce needs related to oil and gas development. This survey information supported staffing decisions for the recently created Indian Energy Service Center. In February 2019, BIA officials told us they have drafted a long-range workforce plan to ensure BIA has staff in place to meet its organizational needs. We will review the plan to determine if the planned actions will help BIA identify critical skills and competencies related to energy development and identify potential gaps. Health Care. IHS has made some progress in demonstrating it has the capacity and resources necessary to address the program risks we identified in our reports. For example, among other actions, IHS officials stated that the agency is expanding the role of internal audit staff within its enterprise risk management program to augment internal audits and complement audits by the HHS Inspector General and GAO. In addition, IHS has developed a new Office of Quality, which is expected to develop and monitor agency-wide quality of care standards. However, IHS officials told us there are still vacancies in several key positions, including the Directors of the Office of Resource Access and Partnerships and the Office of Finance and Accounting. Additionally, our August 2018 report found that IHS’s overall vacancy rate for clinical care providers was 25 percent. To fully meet the capacity criterion, all of the agencies need to assess tradeoffs between these and other administration priorities in terms of people and resources, and the agencies should provide decision makers with key information on resources needed to address management weaknesses.
Action Plan
To meet the action plan criterion, an agency needs to have a corrective action plan that defines the root causes, identifies solutions, and provides for substantially completing corrective measures in the near term, including steps necessary to implement the solutions we recommended. The following examples show actions Indian Affairs, BIE, BIA, and IHS took to partially meet the action plan criterion. Education.
Among other actions, BIE implemented a new action plan for overseeing BIE school spending, including written procedures and risk criteria, which fully addressed two priority recommendations. Also, BIE completed a strategic plan in August 2018, which we recommended in September 2013. The plan provides the agency with goals and strategies for improving its management and oversight of Indian education, and establishes detailed actions and milestones for implementation. However, Indian Affairs has not provided documentation that it has completed action plans on other important issues, such as a comprehensive, long-term capital asset plan to inform its allocation of school facility funds, which we recommended in May 2017. Energy. In meetings, BIA officials identified actions they have taken toward implementing our recommendations. For instance, BIA officials told us they have recently completed modifications to BIA’s database for recording and maintaining historical and current data on ownership and leasing of Indian land and mineral resources—the Trust Asset and Accounting Management System (TAAMS). The officials said that the modifications incorporate the key identifiers and data fields needed to track and monitor review and response times for oil and gas leases and agreements. BIA officials we met with have demonstrated an understanding that addressing long-standing management weaknesses is not accomplished through a single action but through comprehensive planning and continued movement toward a goal. However, the agency does not have a comprehensive action plan to identify the root causes of all identified management weaknesses and address the problems. Health Care. In February 2019, IHS finalized its strategic plan for fiscal years 2019 through 2023, and is developing a related work plan to address certain root causes of management challenges and define solutions and corrective measures for the agency. The strategic plan divides these challenges into three categories: (1) access to care, (2) quality of care, and (3) program management and operations. We will examine the strategic plan and IHS’s work plan, once issued, to determine whether they contain the needed elements of an action plan. To fully meet the action plan criterion, a comprehensive plan that identifies actions to address the root causes of management shortcomings would have to come from top leadership, with a commitment to provide sufficient capacity and resources to take the necessary actions to address those shortcomings and risks.
Monitoring
To meet the monitoring criterion, an agency needs to demonstrate that a program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures. We have been working with the agencies to help clarify the need to establish a framework for monitoring progress that includes goals and performance measures to track and ultimately verify the effectiveness of their efforts. The following examples show actions Indian Affairs, BIE, BIA, and IHS took to partially meet the monitoring criterion. Education. Indian Affairs, in consultation with the Department of the Interior’s Office of Occupational Safety and Health, has taken actions to monitor corrective measures that address weaknesses with the agency’s safety program—which covers safety at BIE schools.
However, the agency has not yet demonstrated that it is monitoring several other areas, such as whether relevant employees are being held to the agency’s required performance standards for safety inspections. Energy. BIA has taken steps to improve monitoring by holding frequent meetings to assess its progress in implementing our recommendations. However, BIA has not yet taken needed steps to monitor its progress in addressing the root causes of management weaknesses. Health Care. IHS has taken some steps toward monitoring the agency’s progress in addressing the root causes of its management weaknesses. In addition to developing its new Office of Quality, IHS has taken steps to develop a patient experience of care survey, as well as standards for tracking patient wait times. These efforts should be reflected in the agency’s corrective action plan, as part of an overall framework for monitoring progress that includes goals and performance measures to track and ultimately verify the effectiveness of its efforts. To fully meet the monitoring criterion, the agencies need to establish goals and performance measures as they develop action plans and take further actions to monitor the effectiveness of actions to address root causes of identified management shortcomings.
Demonstrated Progress
To meet the demonstrated progress criterion, an agency needs to demonstrate progress in implementing corrective measures and in resolving the high-risk area. The following examples show actions Indian Affairs, BIA, and IHS took to partially meet the demonstrated progress criterion. Education. As of February 2019, Indian Affairs had addressed 11 of the 23 outstanding education recommendations we identified in our September 2017 testimony. Three of these recommendations were closed after the June 2018 hearing, including a recommendation from our 2013 report for BIE to develop a strategic plan and two recommendations from our 2017 report on improving the oversight and accountability for BIE school safety inspections. Overall, Indian Affairs’ efforts since we issued our High-Risk List update in February 2017 represent a significant increase in activity to implement our recommendations. Substantial work, however, remains to address our outstanding recommendations in several key areas, such as in accountability for BIE school safety and school construction projects. For example, Indian Affairs has not provided documentation that the inspection information its personnel collect on the safety of BIE schools is complete and accurate. As of late February 2019, 12 recommendations related to this high-risk area remain open, and Indian Affairs concurred with all 12 recommendations. For a full description of the status of these open recommendations, see table 1 in appendix I. Energy. BIA has shown significant progress in developing data collection instruments and processes needed to track and review response times for a number of different actions associated with energy development. For example, in our June 2015 report, we recommended that BIA take steps to improve its geographic information system (GIS) capabilities to ensure it can verify ownership in a timely manner. We closed this recommendation as BIA has made significant progress in enhancing its GIS capabilities by integrating map-viewing technology and capabilities into its land management data system. In addition, we recommended that BIA take steps to identify cadastral survey needs.
BIA’s enhanced map-viewing technology allows the bureau to identify land boundary discrepancies, which can then be researched and corrected. To address the recommendation, BIA identified unmet survey needs that were contained within the defunct cadastral request system. BIA developed a new mechanism for its regions and agency offices to make survey requests and a new database to maintain survey requests. In fall 2018, BIA completed enhancements to TAAMS that will allow the agency to track the time frames and status of oil and gas revenue-sharing agreements, called communitization agreements (CAs), through the review process. BIA held training on the enhancements in November 2018 and requested that staff input information on any newly submitted CAs into the system. In a meeting on February 25, 2019, the Acting Director of BIA said that BIA had also completed efforts to modify TAAMS, incorporating the key identifiers and data fields needed to track and monitor review and response times for oil and gas leases and agreements. We believe these actions show significant progress in addressing management weaknesses associated with outdated technology and data limitations for tracking and monitoring the review and approval of energy-related documents. However, BIA needs to collect data from its updated system, develop time frames, and monitor agency performance to close open recommendations. For a full description of the status of the agency’s open recommendations, see table 2 in appendix II. Health Care. IHS has made progress in implementing corrective actions related to the management of health care programs. Specifically, since our 2017 High-Risk Report, IHS implemented four of our 13 open recommendations. For example, in response to our April 2013 recommendation to ensure that IHS’s payment rates for contracted services do not impede patient access to physician and other nonhospital care, IHS developed an online tool that enables the agency to track providers that do not accept IHS’s payment rates. As of March 2019, six of the 13 recommendations in our 2017 High-Risk Report remain open, and we have added one additional recommendation—for a total of seven open recommendations related to this high-risk area. IHS officials told us that they plan to complete the implementation of additional recommendations in 2019. For a full description of the status of the agency’s open recommendations, see table 3 in appendix III. To fully meet the demonstrated progress criterion, agencies need to continue taking actions to ensure sustained progress and show that management shortcomings are being effectively managed and root causes are being addressed. In conclusion, we see some progress at all of the agencies in meeting all of the criteria, especially related to education programs. However, permanent leadership that provides continuing oversight and accountability is needed. We also see varying levels of progress at all of the agencies in understanding what they need to do to be removed from the High-Risk List, and identifying steps that can be incorporated into corrective action plans. We look forward to working with the agencies to track their progress in implementing a framework for monitoring and validating the effectiveness of planned corrective actions. Among the greatest continuing challenges for the agencies is developing sufficient capacity, including demonstrating that they have the people and other resources required to address the deficiencies in their programs and activities.
This challenge cannot be overcome by the agencies without a commitment from their leadership and the administration to prioritize fixing management weaknesses in programs and activities that serve tribes and their members. Sustained congressional attention to these issues will help ensure that the agencies continue to achieve progress in these areas.

Chairman Hoeven, Vice Chairman Udall, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about health care issues in this testimony or the related reports, please contact Jessica Farb at (202) 512-7114 or farbj@gao.gov. For questions about education, please contact Melissa Emrey-Arras at (617) 788-0534 or emreyarrasm@gao.gov. For questions about energy resource development, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement include Kelly DeMots (Assistant Director), Christina Ritchie (Analyst-in-Charge), Edward Bodine, Christine Kehr, Elizabeth Sirois, and Leigh White.

Appendix I: Status of Open Recommendations to the Department of the Interior on Indian Education

As of late February 2019, 12 of the 23 recommendations to the Department of the Interior on Indian education we identified in our September 13, 2017, testimony remain open.

Appendix II: Status of Open Recommendations to the Department of the Interior on Indian Energy

As of February 2019, 12 of the 14 recommendations to the Department of the Interior’s Bureau of Indian Affairs cited in our 2017 High-Risk Report remain open.

Appendix III: Status of Open Recommendations to HHS on the Indian Health Service

As of March 2019, six of the 13 recommendations in our 2017 High-Risk Report remain open, and we have added one additional recommendation—for a total of seven open recommendations related to this high-risk area.
Why GAO Did This Study

GAO’s High-Risk List identifies federal program areas that are high risk due to their vulnerability to mismanagement, among other things. GAO added the federal management of programs that serve Indian tribes and their members to its February 2017 biennial update of high-risk areas in response to management weaknesses at Interior and HHS. GAO’s recommendations identified in this high-risk area are neither reflective of the performance of programs administered by tribes nor directed at any tribally operated programs and activities. This testimony, which is based on GAO’s March 2019 High-Risk Report, provides examples of actions taken and progress made by these agencies to address the five criteria GAO uses for determining whether to remove a high-risk designation. For this statement, GAO also drew on findings from its reports issued from September 2011 through August 2018 and updated that work by reviewing agency documentation and interviewing agency officials.

What GAO Found

GAO designated the federal management of programs that serve tribes and their members as high risk in 2017. Officials from the Department of the Interior’s Office of the Assistant Secretary-Indian Affairs (Indian Affairs), the Bureau of Indian Education (BIE), the Bureau of Indian Affairs (BIA), and the Department of Health and Human Services’ (HHS) Indian Health Service (IHS) have expressed their commitment to addressing the issues that led to the designation. Since GAO last testified before this committee on June 13, 2018, Indian Affairs, BIE, BIA, and IHS have demonstrated progress to partially meet each of the five criteria for removing a high-risk designation (leadership commitment, capacity, action plan, monitoring, and demonstrated progress). However, additional progress is needed to fully address management weaknesses—particularly in the areas of retaining permanent leadership and a sufficient workforce. For example, to meet the capacity criterion, an agency needs to demonstrate that it has the capacity (i.e., people and other resources) to resolve its management weaknesses. While Indian Affairs, BIE, BIA, and IHS each made progress identifying capacity and resources to partially meet this criterion, BIE and IHS continue to face significant workforce challenges. Specifically, although BIE has conducted hiring in recent years as part of an effort to reorganize the bureau, about 50 percent of all BIE positions have not been filled, according to recent BIE documentation. IHS also faces workforce challenges—GAO’s August 2018 report found that IHS’s overall vacancy rate for clinical care providers was 25 percent. GAO has identified varying levels of progress at the agencies in understanding what they need to do to be removed from the list and will continue to closely monitor their progress.

What GAO Recommends

GAO has made more than 50 recommendations related to this high-risk area to improve management weaknesses at some Interior and HHS agencies—specifically BIE, BIA, and IHS—of which 31 recommendations are still open. Sustained focus by Interior and HHS in fully implementing these recommendations and continued oversight by Congress are essential to achieving progress in these areas.
Background

VA, through VHA, operates the nation’s largest integrated health care system. At the local level, VA has 172 VA medical centers that are organized into 18 Veterans Integrated Service Networks (VISN). At the national level, VHA’s central office includes approximately 75 national program offices as of October 2019, which perform a range of clinical or administrative functions. For example, some program offices are responsible for specific clinical areas, such as spinal cord injury or mental health care, and may develop policy for those areas.

To support VA’s health care delivery system, VA’s intramural research program aims to improve veterans’ health by funding research on issues that affect veterans, developing effective treatments for veterans, and recruiting and retaining VA researchers. VA’s medical and prosthetic research appropriation—$722 million in fiscal year 2018—funds VA’s intramural research program. VA also uses funding from its other appropriation accounts—$544 million in fiscal year 2018—to support VA’s intramural research by paying some costs associated with this research, such as equipment maintenance. According to VA, more than 60 percent of VA researchers are also clinicians who provide direct patient care, which helps translate VA research into clinical practice.

ORD manages VA’s intramural research program. Within ORD, there are four research and development services that are responsible for administering and supporting research; each research service has a specific focus, such as biomedical research or rehabilitation research. Each of the four research services is led by a director and has scientific program managers who are responsible for specific research portfolios (or topic areas) within their service. In addition to the four research services, ORD has a Cooperative Studies Program that is responsible for large-scale clinical trials and epidemiological studies within VA. (See table 1.) All five of these ORD components support research by funding VA research projects. See appendix II for details on research funding and awards. Organizationally, ORD falls within the Office of Discovery, Education and Affiliate Networks, which was created in November 2018 to foster collaboration in addressing veterans’ health concerns.

ORD funds VA intramural research in a number of ways, such as the following:

Merit Review Program. This program supports VA research projects that are typically led by one VA researcher at one VA facility. ORD’s four research services administer this program and are responsible for soliciting, reviewing, selecting, and funding research proposals submitted by VA researchers. Researchers may submit proposals either in response to a request for applications on a specific topic (sometimes called targeted or focused requests) or to a general request for applications, for which researchers can propose projects on a wider range of topics. To be considered for funding, research proposals must be veteran-centric and meet other requirements. Each ORD research service typically evaluates Merit Review Program research proposals in two review cycles per year. Selected projects are funded for a set number of years and have a maximum budget—typically four years with a maximum amount of $1.2 million. The Merit Review Program accounts for the majority of the research studies that ORD funds.

Cooperative Studies Program.
This program funds larger-scale, multi-site clinical trials and epidemiological research studies on diseases that affect veterans. VA researchers can submit proposals at any time during the year. ORD’s Chief Research and Development Officer and Cooperative Studies Program leadership evaluate proposals in two review cycles per year. The time frame and budget for selected studies vary, depending on the features of specific studies.

Career Development Program awards. The research services’ Career Development Program provides funding to support, train, and mentor individuals early in their careers as VA researchers, which can include funding for specific research projects.

Funding to support VA research centers and entities. In addition to funding individual research projects and researchers, ORD funds research centers and entities that focus on specific research areas. For example, the rehabilitation research service provides “core funding” to support 12 research centers focusing on areas such as limb loss, spinal cord injury, vision loss, and auditory disorders. This type of funding is competitive, and VA researchers must recompete in 5-year cycles.

Once research has been completed, the findings can inform additional research. For example, research findings from a study on tissue could be used to inform a study done on humans in a controlled clinical setting, such as a clinical trial, which could in turn inform research that tests the effectiveness of a particular intervention in a less-controlled community setting. This concept is referred to as the “research pipeline.” Research findings that lead to broad changes in clinical practice affecting public health are considered the end of the pipeline.

Within VA, research findings can be translated into clinical practice in a number of ways, such as by implementing a new diagnostic tool, changing the treatment protocol for a particular disease, adding a new prompt for providers in the electronic health record, or developing a new clinical policy or a clinical practice guideline. The specific outcome might vary depending on the study or type of research. For example, one body of VA research has confirmed the utility of an intimate partner violence screening tool for female veterans in primary care settings. These findings will be used in developing national guidelines for screening for, and responding to, intimate partner violence. In addition, VA research contributed to the development of a clinical practice guideline for the management of upper extremity amputation rehabilitation. This guideline is a tool to assist clinicians and health care professionals with their decision-making when caring for individuals with upper extremity amputation.

VA Uses Stakeholder Input and Other Factors to Set Priorities for Funding Research Projects

ORD leadership sets VA’s national research priorities based on input from internal and external stakeholders and other factors. The directors of ORD’s four research services, in turn, set their own service-level research priorities based on the national priorities, veterans’ specific needs, and other considerations. Once these research priorities are set, ORD officials use a range of approaches to incorporate them when funding research, such as by funding collaborative research efforts focused on specific priorities.
ORD officials said that, in funding research, they also consider other clinical and research needs that are not identified as priorities but are still important to veterans’ health, such as encouraging researchers to test new ideas.

VA’s ORD Sets National Research Priorities Based on External and Internal Input

At the national level, ORD leadership sets VA’s overall research priorities based on input from internal and external stakeholders and other factors. ORD sets these priorities annually, and its priority-setting process involves discussions with stakeholders and reviews of relevant VA data, according to ORD officials. For example, ORD sets priorities using the following methods:

Input from internal VA stakeholders. The directors of ORD’s research services provide input to the Chief Research and Development Officer on issues they see as priorities. Officials outside of ORD, such as the leadership of VA’s new Office of Discovery, Education and Affiliate Networks, of which ORD is a part, and other VA leaders also provide input, according to ORD officials.

Input from external stakeholders. ORD officials said they also consider input from the National Research Advisory Council, a 12-member federal advisory committee that provides advice to VA on its research and development efforts, such as recommending which topics to include among the agency’s research priorities. According to ORD officials, the office also obtains input on research priorities by meeting with veterans service organizations and by hearing from veterans through veterans’ engagement opportunities. In addition, Congress provides direction and input on topics that VA should study, such as through legislation or committee reports.

Other factors. ORD officials said, for example, they set research priorities using VHA data on the prevalence of health conditions among veterans and Veterans Benefits Administration data on military deployment-related conditions.

Based on stakeholder input and other factors, ORD established three types of national research priorities: strategic, cross-cutting clinical, and other priorities. (Figure 1 shows these research priorities for fiscal years 2019 and 2020.) As of October 2019, ORD officials told us they were determining what these priorities will be for fiscal year 2021.

Strategic priorities are broad, long-term priorities that focus on VA’s research capability, resources, and operations, rather than on specific clinical conditions, according to ORD officials. For example, one of VA’s strategic priorities is to “put VA data to work for veterans.” As part of this priority, VA aims to improve its ability to leverage the agency’s medical data to improve veterans’ care. ORD officials said they plan to revisit these priorities about every 5 years, though the specific initiatives that fall within each priority can change annually. As an example of how the initiatives within the strategic priorities can change, ORD officials said that given VA’s plans to implement a new electronic health records system, the National Research Advisory Council advised ORD in 2019 to focus on mitigating any unintended consequences of this transition on research, as part of the strategic priority on data. ORD officials said that as a result of this input, they are increasing the intensity and scope of their efforts pertaining to this transition.

Cross-cutting clinical priorities, in contrast, focus on predominant clinical conditions seen in veterans and can change yearly, according to ORD officials.
For example, one current cross-cutting clinical priority is post-traumatic stress disorder (PTSD). As part of this priority, VA supports research to better understand the underlying biology of PTSD, refine approaches for diagnosing this condition, and develop and test new treatments. ORD officials said they plan to add precision oncology as a new cross-cutting clinical research priority for fiscal year 2021, based on input from VA leadership. ORD officials also said they plan to broaden the Gulf War illness cross-cutting clinical priority to include the effects of military service-related toxic exposures more generally. This planned change is based in part on input from veterans service organizations and Gulf War veterans.

Other priorities are those that VA will focus on in the near term, based primarily on input from Congress, veterans service organizations, and other non-ORD stakeholders, according to ORD officials. For example, several of these priorities for fiscal year 2020—such as addressing the prosthetic needs of women veterans and exploring ways to use an “exoskeleton” for veterans who have experienced strokes or traumatic brain injury (TBI)—were identified by Congress as research needs.

ORD’s Four Research Services Set Their Own Priorities Based on National Priorities and Veterans’ Specific Needs and Conditions

In addition to national priorities for research funding, ORD permits its four research services to set their own service-level research priorities, which are based in part on the national priorities. According to ORD officials, the directors of the four ORD research services—the biomedical laboratory, clinical science, health services, and rehabilitation research and development services—have latitude to set their own priorities, given their expertise in, and the particular focus of, their respective research areas. These directors told us they may consider a range of internal and external factors when setting priorities, including:

Internal factors. The four research service directors told us they take into account VA’s national research priorities when determining their priorities. For example, the director of the biomedical laboratory service identified VA’s five cross-cutting clinical research priorities as priorities for this research service. VA stakeholders, such as the Secretary of VA and scientific program managers—the ORD staff who are responsible for specific topic areas within their services—also help shape research priorities. In addition, service directors told us they use VA data on veterans’ health conditions when setting research priorities. The leadership of ORD’s rehabilitation research service, for example, told us that when setting priorities, they review VA data on the top service-connected conditions for which veterans are receiving disability benefits and also consider less prevalent conditions, such as spinal cord injuries, that have a significant impact on veterans’ function and independence. (See text box for examples of research projects on spinal cord injury, which is one of the rehabilitation research service’s priorities.)

Exoskeleton-assisted walking: Exoskeletons are motorized prostheses that are worn outside a person’s clothes and provide powered hip and knee motion to help veterans with spinal cord injuries stand and walk. In this photograph, a research participant with a spinal cord injury uses an exoskeleton at the Bronx VA medical center.
Department of Veterans Affairs (VA) Research on Spinal Cord Injuries

VA provides care for about 27,000 veterans with spinal cord injuries. Veterans with spinal cord injuries may have secondary bone loss, muscle atrophy, and other conditions. They also have an increased prevalence of diabetes, heart disease, stroke, bowel and bladder incontinence, chronic pain, and reduced quality of life, according to VA. VA’s National Center for the Medical Consequences of Spinal Cord Injury, located at the Bronx VA medical center, is one of the VA rehabilitation research service’s research centers. The Center’s mission is to improve quality of life and increase longevity in individuals with spinal cord injuries by identifying and intervening to reduce and prevent the secondary consequences of spinal cord injuries. Examples of the Center’s VA-funded research include:

Studying the safety and efficacy of exoskeleton-assisted walking in rehabilitation settings and in home and community environments.

Developing and testing innovative approaches to improve bowel function.

Studying the impact of low blood pressure and developing approaches to help individuals maintain normal blood pressure.

Studying individuals’ difficulties regulating their body temperature, and developing interventions to address this problem.

Using magnetic and electrical stimulation to enhance arm and leg function.

External factors. Congress can play a role in shaping the research services’ priorities. For example, the health services research service has identified research on policies and programs included in recently enacted legislation, including the VA MISSION Act of 2018 and the Comprehensive Addiction and Recovery Act of 2016, as a priority. Input from other federal partners, such as the National Institutes of Health (NIH) or the Department of Defense (DOD), also can influence the priorities of ORD’s research services. Officials with the rehabilitation research service, for example, said they meet with DOD officials about research efforts and that input from DOD on the health issues seen among active-duty service members can help them anticipate what health issues those service members might face when they transition to veteran status. They can then use that information when deciding which clinical areas to prioritize. In addition, one director said that input from veterans service organizations can shape research priorities, while another director obtains input from veterans through a VA veteran engagement group that provides information on the needs of veterans.

ORD’s service-level research priorities cover a wide range of areas, such as service-connected conditions and conditions that veterans may experience as they age. As of October 2019, the services had each identified between 10 and 20 research priorities. (See the box below for examples.)

Examples of the Office of Research and Development’s Service-Level Research Priorities, as of October 2019

Traumatic brain injury (TBI) is a research priority for all the services. For example, the effect of prolonged opioid use on TBI outcomes is a priority for the rehabilitation research service.

Post-traumatic stress disorder (PTSD) is a research priority for several services. For example, research on PTSD and the conditions that commonly co-occur with it is a priority for the clinical science research service.

Pain is a research priority for all the services.
For example, research on pain mechanisms and treatments, including alternatives to opioids, is a priority for the clinical science research service.

Spinal cord injuries are a research priority for some services. For example, disability—including spinal cord injury and TBI—is a priority for the health services research service.

Suicide prevention is a research priority for multiple services. The biomedical laboratory service, for example, has an emphasis on biological markers of suicide.

Aging-related issues are a research priority for several services. For example, “long-term care, aging, and caregiver support” is a priority for the health services research service.

ORD Uses a Range of Approaches to Incorporate Priorities When Funding VA Research

Once priorities are set, ORD officials told us they use a range of approaches to incorporate those priorities when funding research projects, such as: encouraging researchers to study priority topics, considering priorities when deciding which projects to fund, and funding collaborative research efforts that are focused on specific priorities. In addition to the research priorities, ORD officials said they also consider other clinical and research needs when funding VA research, such as encouraging researchers to test new ideas in clinical areas that are not identified as priorities but are still important to veterans’ health.

ORD officials’ approaches to incorporating priorities when funding research included the following examples:

Encouraging researchers to study priority topics. ORD’s research services highlight their research priorities in their requests for research proposals. In some cases, they use targeted requests for research proposals solely on priority topics. In fiscal year 2019, ORD issued targeted requests for proposals linked to priorities such as suicide prevention, TBI, and the VA MISSION Act. In other cases, the research services highlight their research priorities in general requests for applications, which permit VA researchers to submit proposals on both priority and non-priority topics. For example, in 2019, the rehabilitation research service issued a general request for research proposals stating that four research priorities—the prosthetic needs of women veterans, exoskeleton research related to patients with stroke and TBI, non-pharmacological interventions for chronic pain, and the effects of prolonged opioid use on long-term outcomes from TBI—were of particular interest for that funding cycle. (See text box for examples of VA research projects on priority topics.)

Department of Veterans Affairs (VA) Research on Traumatic Brain Injury (TBI) and Stress Disorders

Traumatic brain injury (TBI), a common injury among veterans of conflicts in Iraq and Afghanistan, can lead to a number of physical, cognitive, and emotional problems, such as memory and attention issues. These veterans may also experience post-traumatic stress disorder (PTSD), which can lead to anger, irritability, depression, substance abuse, and other symptoms, according to VA. VA’s Translational Research Center for TBI and Stress Disorders, located at the Jamaica Plain (Boston, Mass.) VA medical center, is one of VA’s rehabilitation research service’s research centers. The Center seeks to better understand the complex cognitive and emotional problems faced by these returning veterans, with the goal of developing better treatment options. The Center runs a longitudinal cohort study that collects imaging, genetic, and other data on returning veterans.
Examples of the Center’s VA-funded research projects include:

A study to assess the efficacy of the STEP-Home program, a 12-week workshop to help veterans who have served in Iraq or Afghanistan. The program aims to strengthen behavioral and emotional skills so that veterans are better equipped to rejoin their families and civilian communities.

Studies to identify sub-types of PTSD, and to assess the long-term effects of PTSD and mild TBI.

Research on the use of non-invasive brain stimulation to help patients with PTSD.

Development of the Boston Assessment of Traumatic Brain Injury-Lifetime tool, a clinical interview to characterize head injuries and diagnose TBI throughout a patient’s lifespan.

Considering priorities when deciding which projects to fund. Directors from all four of ORD’s research services stated that the scientific merit of research proposals—based on the proposals’ significance to veterans’ health, feasibility, and other criteria—is a key factor in funding decisions. Several directors said they may decide to fund a meritorious project that addresses one of their research priorities in lieu of another project that was ranked similarly or higher but does not address a priority. Some of the directors noted that this applies to only a small share of funded projects, but is part of how they align research projects with priorities.

Funding collaborative research efforts. The biomedical laboratory service has funded field-based meetings to plan collaborative multi-site research programs to speed the development of treatments for service-related illnesses and injuries. The director of this research service said that in 2019, these research-planning meetings focused on ORD’s national research priorities, such as TBI, PTSD, and pain and opioids, among other topics. Also, starting in 2019, the health services research service is providing funding for its research centers to collaborate with other VA researchers on three of its priority areas: suicide prevention, opioid reduction and pain, and access to care. Officials from this research service said they also hold “State of the Art” conferences that can help VA make progress on priority areas. For instance, VA officials held a September 2019 conference on managing pain and addiction, specifically focusing on strategies to improve opioid safety. VA officials said this conference involved a wide range of VA staff and resulted in recommendations about research priorities, including areas where more research is needed.

In addition to the research priorities, ORD officials said they consider other clinical and research needs when determining which health care research efforts to fund. Rehabilitation research service officials specifically noted that if they did not fund research in non-priority clinical areas, it would hinder their goal of encouraging researchers to test new ideas in other areas that are important to veterans’ health, which the officials say can lead to discoveries. The importance of innovation was echoed by other ORD officials as well. Some ORD research service directors said that while new research needs emerge over time—as stakeholders highlight particular clinical needs, or VA leadership changes—it is important for VA research efforts to anticipate veterans’ longer-term needs and focus on more enduring issues, too.
Officials from one research service said, for example, that they encourage researchers to focus on issues that will still be important in several years, such as women veterans’ care, because research can take years to yield results. In addition, some ORD research service directors said that although service-connected conditions are key parts of their research portfolios, they also work to address other conditions. Clinical science service officials said, for example, that they have a broad charge to support research into any disease or condition that affects veterans’ health. One of their priorities is researching diseases with a high health care burden among veterans, which may or may not be related to veterans’ military service. Rehabilitation service officials noted that their work focuses on veterans’ disabilities and impairments incurred through military service but is not limited to service-connected conditions. The officials said that because VA provides lifetime care to veterans, their research portfolio addresses events that cause impairment and disability throughout a veteran’s lifespan, including the aging process. For example, their research portfolio includes research on medical conditions that veterans may experience as they age, such as stroke, and chronic conditions like diabetes and kidney disease.

As part of their efforts to consider multiple clinical and research needs, officials from the health services research service told us they are analyzing their overall research portfolio to determine where more or less research funding may be warranted. The officials explained that as part of their strategic planning efforts, they are identifying any areas they have “under-invested” in, and any areas that have received significant funding in the past but might no longer need that degree of investment. Among other things, they are considering the extent to which research on health conditions is already being done by other research organizations, such as NIH. They noted that while addressing chronic diseases is important to VA and its veteran population, it is possible that research on certain diseases is being covered by other research partners. In contrast, they said, there are areas where VA may have a unique ability to contribute to research because of its nationwide health care system or because it is ahead of the curve in health care trends, such as in telehealth and in integrating mental health care into primary care settings. Officials said their portfolio analysis could result in some “resetting” of research priorities and funding after the analysis is completed in 2020.

Looking forward, ORD officials shared examples of approaches they are taking to boost the agency’s ability to address its research priorities. For example, ORD officials said there are a limited number of VA researchers working in certain priority areas, such as suicide prevention, a shortage that the officials said can hinder their efforts to fund new research projects. Among the efforts to boost the number of researchers working on priority areas, officials from one research service said they recently began incorporating their research priorities into the service’s Career Development Program funding awards. In addition, in 2019, ORD implemented a new method to spur and track its progress in addressing priorities.
According to ORD leadership, as part of this method, ORD staff will identify the actions and resources needed to address specific priorities and meet quarterly with the Chief Research and Development Officer to review their progress and identify next steps.

ORD’s QUERI Program and Other VA Entities Facilitate Translating Research into Clinical Practice

VA has a variety of efforts to facilitate translating research findings into clinical practice to improve the care veterans receive. These efforts include those undertaken by ORD’s QUERI program, its health services research service, and VA’s Diffusion of Excellence Initiative, as discussed below.

ORD’s QUERI provides a link between the research program, VA program offices, and VA providers. According to the QUERI director, QUERI serves as the center of VA’s efforts to translate research into clinical practice. QUERI’s overall mission is to improve veteran health by rapidly implementing research findings and interventions into clinical practice. QUERI is housed within ORD but funded separately with non-research dollars. QUERI facilitates research implementation through activities such as the National Network of QUERI programs. According to the director of QUERI, these programs are partnered with VA national program offices, and they take various practices—often identified or developed through VA studies—and implement them at the regional or national level. For example, through its “Bridging the Care Continuum” program, QUERI investigators focus on improving the health of vulnerable veteran populations, such as homeless veterans, by implementing treatment for co-occurring mental health and substance use conditions within multiple VA medical centers. (See text box for an example of implementation through a QUERI National Program.) In addition, QUERI funds resource centers with technical experts who can help promote and review best practices for implementation. Specifically, one resource center—the Center for Evaluation and Implementation Resources in Ann Arbor, Mich.—is available to VA researchers for consulting on strategies to translate research.

Example of Quality Enhancement Research Initiative (QUERI) Research Translated into Clinical Practice: Telemedicine Outreach for Post-Traumatic Stress Disorder (PTSD) in Small Rural Community-Based Outpatient Clinics

The goal of the Virtual Specialty Care QUERI National Program is to implement and evaluate promising clinical practices that incorporate technologies to improve access to specialty care for veterans in rural settings. One example of its efforts is the telemedicine program, based on VA-funded research demonstrating the effectiveness of using telemedicine outreach for veterans with PTSD. The Office of Rural Health and the Virtual Specialty Care QUERI partnered to implement this telemedicine program, which provides evidence-based psychotherapy for veterans with PTSD via interactive video either from their homes or at community-based outpatient clinics, and connects veterans with care managers to coordinate their treatment. According to VA, as of June 2019 the telemedicine program is being implemented in six states, and 1,073 “hard to reach” veterans have been engaged via the program.

In 2019, QUERI published the “Implementation Roadmap,” a new resource—intended for a variety of users, including researchers, clinicians, and leadership—to advance research translation at VA and provide information on how to identify, implement, and sustain evidence-based practices to improve the quality of care for veterans.
The Roadmap outlines the different stages of research implementation, specifically delineating when research is ready to be implemented into clinical practice. The QUERI director told us staff created the Roadmap as a teaching tool to provide guidance on how to implement research at VA and when to collaborate with QUERI. In addition, the director of QUERI told us the Roadmap demonstrates the cyclical nature of research and how implementation is part of a continuous scientific process, not an “end game.” QUERI officials said that throughout the process of implementation, new research questions might be generated, which QUERI can use to inform further investigation or follow-up studies.

ORD’s health services research service funds studies that focus on direct application of research in clinical practice. ORD’s health services research service supports research translation by funding studies focused on how interventions work in “real world” settings and on implementing VA research findings into clinical practice. For example, little is known about the quality of non-VA care for sex-specific services such as mammography, according to VA, despite increasing numbers of women veterans relying on such care due to limited availability within VA. One study funded by this research service looked at strategies for provision, coordination, and quality of oversight of non-VA care for women, and assessed perceptions and experiences with non-VA care among women veterans. Among other things, the study found that VA sites providing mammography were more likely than non-VA sites to notify women quickly of abnormal results, but non-VA sites were more likely to meet guidelines for timely follow-up.

In addition to funding individual research studies through merit review, the health services research service funds 18 Centers of Innovation (COIN), each of which focuses on one or more areas of research that address questions significant to clinical and operational partners. For example, officials from the dual-site COIN in Seattle and Denver, which focuses on veteran-centered and value-driven care, told us they are participating in a study co-funded by VA, NIH, and DOD evaluating non-pharmacological options to treat pain and co-occurring mental health conditions in veterans with chronic pain; the study will be overseen by VA’s Office of Patient-Centered Care and Cultural Transformation. According to the research service officials, the COINs are designed to bring researchers from multi-disciplinary research teams together to engage in research and establish partnerships that can affect VA policies, practices, and health care outcomes. (See text box for an example of research funded by the health services research service that has been translated into clinical practice.)

Example of Research Translated into Clinical Practice across VA: Reducing Catheter-Associated Urinary Tract Infections

The Center of Innovation (COIN) in Ann Arbor, Mich., partners with VA clinical, policy, and operations leaders to implement and evaluate ways to make health care safer, more effective, and affordable for veterans. For example, an investigator from this COIN, funded in part through a Career Development Award, conducted research on enhancing patient safety by reducing catheter-related infections. Then, in partnership with another VA researcher, this investigator conducted a study funded by the health services research service and created a “bundle” of activities to implement in VA hospitals throughout Michigan.
This included removing catheters as soon as possible and increasing the use of recommended infection control practices. VA researchers assisted the National Center for Patient Safety in implementing the practices, and the success of the “bundle” resulted in its national implementation in more than 1,000 hospitals. VA reported that catheter-associated urinary tract infection rates decreased by 32 percent in participating general medical and surgical units.

According to officials from the health services research service, in addition to funding studies and COINs, the service maintains four resource centers, which provide support to VA researchers in several areas, including data, health economics, and dissemination. For example, the Center for Information Dissemination and Education Resources circulates research findings through VA newsletters, cyberseminars, and publications and educates clinicians and researchers on sharing findings. In addition, the center coordinates meetings and conferences—such as the service’s joint national conference with QUERI—which provide an opportunity for VA researchers to present scientific findings and discuss the implementation of findings into practice. In 2017, the conference focused on accelerating the adoption and spread of practices and improving VA’s ability to use health care data to enhance care for veterans.

Officials from the health services research service told us that, starting in 2019, they began implementing two new strategies to increase the impact of VA research on veterans’ health care. First, the service began a new effort to bring together and fund consortiums of researchers from multiple COINs, each with a particular focus on implementing evidence-based practices in a given priority area. VA officials told us that as of October 2019, the health services research service had established two consortiums of researchers to focus on suicide prevention and on opioids and pain management. The service is planning to add two additional consortiums in 2020 to focus on access to care and on telehealth and connected care. Second, in 2019 the research service provided additional funding opportunities for COINs to submit research proposals that include five-year goals for the impact of their research, such as VA policy changes or spreading the research to additional sites, and yearly milestones for achieving those goals. Per the request for applications, applicants’ proposals must provide information on how the COINs plan to apply health services research methods, including implementation research. According to VA, as of October 2019, 20 proposals had been selected to receive funding through this new strategy.

The Diffusion of Excellence Initiative aims to encourage practitioner implementation of research-based practices outside of ORD. VA’s Diffusion of Excellence Initiative, created in 2015, established an annual competition—known as VA’s “Shark Tank”—to engage employees in implementing innovative practices that will positively impact veterans. According to officials from the Diffusion of Excellence Initiative, many of these practices are based on evidence-based research. Under the competition, “investors” (directors from VA medical centers and VISNs) make offers on practices that have been successfully implemented in at least one VA medical center, and the winning investor receives facilitated implementation support so that the practice can be implemented at the investor’s medical center.
The officials told us that several of these practices have been identified as exemplary practices and are now being used nationally across multiple VA health care settings. For example, they described one such practice: a tooth-brushing routine implemented for hospitalized veterans to decrease the risk of oral bacteria getting into the veterans’ lungs, which research had shown could increase their risk of pneumonia. According to officials from the Diffusion of Excellence Initiative, the practice decreased hospital-acquired pneumonia by 90 percent at the pilot site and is being implemented in other VA health care settings.

Other VA efforts to facilitate research translation into clinical practice. In addition to changes to the existing efforts for facilitating research translation, VA has recently taken other actions to help ensure findings from VA research are integrated into practice. In response to ORD’s current strategic priority to “increase the substantial real-world impact of VA research,” the director of ORD established a workgroup to create “The Research Lifecycle,” which was published in October 2019. The lifecycle is a resource that specifies processes to help move research to direct application in routine clinical care. It describes the research and implementation process, from identifying innovations that align with clinical priorities to ensuring practices are sustained in clinical care beyond research and implementation. For example, one phase of the process involves evaluating interventions to determine if they are ready to be implemented into clinical practice. The director of QUERI told us that the information in the publication is broadly applicable across all ORD research and that, like the QUERI Implementation Roadmap, the publication reiterates that research is a continuous process rather than a straight line with an endpoint.

In addition to the research lifecycle, an official from the agency’s Cooperative Studies Program—which funds large, multi-center clinical trials—told us the program established a new requirement in 2019 that research proposals include an implementation plan. The goal of this change is to encourage researchers to think about research translation—and how their work might be translated into clinical practice—from the beginning, according to the program official. Researchers planning to conduct these types of clinical trials will have the opportunity to consult with internal implementation experts to develop plans to translate the research into clinical practice, according to ORD officials.

VA Officials Described Efforts to Coordinate on VA Research Priority Setting and Translation

VA officials from both ORD and the national program offices we spoke with described their experiences coordinating on research. Coordination can help both to inform research priorities to make them most useful and applicable, and to encourage the translation of research into clinical practice, which can help VA meet its broader goal of ensuring its research is benefiting veterans’ health. National program offices—such as those for clinical specialties including mental health or spinal cord injury care—provide input to ORD both on research priorities and on efforts to translate research findings into clinical practice within their respective issue areas.
For example, officials from the Office of Mental Health and Suicide Prevention told us that their lead staff for suicide prevention participated in strategic planning efforts with ORD to determine a “road map” for current and future research in this area. A program office official described working with ORD to provide clinical perspective on gaps in research and clinical care related to suicide prevention, among other things. Given the disproportionately high rate of suicide among veterans compared with the civilian population, such coordination can help maximize VA’s efforts both in research and in clinical care. Among other things, the road map identifies remaining questions related to suicide prevention to be addressed by VA and other researchers, categorized by type of research (e.g., epidemiological or intervention).

Coordination between the research program and national program offices also can facilitate the conduct of the research itself, helping ensure that viable and relevant research is conducted and translated into practice. For example, ORD leadership told us that program office buy-in on VA research priorities and efforts can lead to VA clinicians being more willing and able to participate in VA research. ORD leadership also told us that ORD has recently begun requesting that researchers engage and collaborate with relevant program offices during the planning process for large multi-site clinical trials, including seeking input from program offices on research proposals. Potential questions for researchers to ask include: Does the relevant program office think the proposal’s topic is clinically important? Is the research proposal feasible? Will it answer a question that is important from a clinical perspective? According to ORD officials, because VA funds a small number of these types of trials—which are intended to provide a definitive answer to a clinical question—researchers want to be sure the studies are relevant to the needs of the program offices.

One specific example both ORD and program office officials provided was related to their coordination on research on osseointegration—a medical procedure through which a metal rod is inserted into the bone at the site of an amputation, allowing a prosthetic limb to be attached through the skin directly to the remaining bone of the amputated limb. Officials from the rehabilitation research service told us that they have been working with program office officials to consider aspects of implementation prior to beginning a clinical trial, including the availability of the surgical procedure throughout VA and the types of post-operative care patients would need. These officials told us that their goal is to ensure the clinical trial is designed for translation.

In addition, because national program offices establish policies that affect the provision of care across VA, program office officials told us that collaboration with ORD can help them incorporate evidence-based practices in developing and rolling out these policies. For example, an official from the Spinal Cord Injury and Disorders System of Care program office told us that it incorporated research findings when it revised its national policy—including a new requirement for all spinal cord injury centers to have vocational rehabilitation counselors on staff.
A program office official told us that the addition of this requirement resulted from VA research—led by a clinician researcher—that found that veterans with spinal cord injuries who received specialized vocational support services had the best chance of success for job placement and continued employment. In another example, ORD officials told us that VA researchers were serving as subject matter experts to the national program office developing the protocol and clinical guidelines for implementing intranasal ketamine as a new treatment for certain veterans with treatment-resistant depression.

ORD and program office officials described using both informal and formal approaches to coordinate on research priorities and translation. For example, program office officials told us about occasional participation of ORD staff in their regular meetings and calls, as well as relationships between program office staff and individual researchers. ORD officials from one service told us that their scientific program managers serve as liaisons between researchers and clinical program office partners. These officials told us that because VA’s research program is intramural, there is an ongoing discussion with researchers and others within VA in setting research priorities.

Although VA officials were mostly positive in describing coordination between the research program and program offices, some officials noted opportunities for improvement, as well. Specifically, officials from the three national program offices we spoke with said it would be beneficial to have a more formal or systematic approach for coordination with ORD. An official from one national program office said a more systematic process would be helpful, so that collaboration is not so dependent on individual relationships or personalities. Officials from another national program office noted that they considered having one staff person as a dedicated resource to liaise with ORD, but lacked the resources to do so.

Given limited time and competing priorities for researchers and program office officials, ORD officials told us that it would be best to focus on strategic coordination, and they noted that some such efforts are underway. Specifically, ORD leadership acknowledged that it would be helpful for ORD and program offices to engage more in general—particularly related to ORD’s research priorities. However, ORD leadership said that because it would not be efficient to go “door to door” to each individual program office or VISN to have those discussions, it would be more helpful to find more strategic ways to engage. For example, ORD leadership said that ORD’s inclusion in larger annual VHA strategic planning sessions could be a way to facilitate strategic coordination. Similarly, health services research service officials told us that the service’s new effort to build consortiums of researchers focused together on a particular priority area may also facilitate coordination between researchers and clinical program offices, particularly on key topics for VA, such as suicide prevention and opioids. In addition, ORD leadership told us that ORD is focusing its efforts on “big ticket items”—such as larger studies or clinical trials through the Cooperative Studies Program—where there can be a big impact through collaboration with program offices, because a single study generally does not lead to changes in clinical practice.
Another mechanism available to facilitate strategic coordination between the research program and national program offices is ORD’s QUERI—particularly its Partnered Evaluation Initiatives—through which researchers partner with national program offices to evaluate specific initiatives with potentially high impact on VA national policy. For example, QUERI investigators have partnered with the Office of Mental Health and Suicide Prevention to evaluate an upcoming initiative to send “caring letters”—letters noting that the veteran is cared about and matters—to veterans who have called the Veterans Crisis Line and have recently been engaged in VA care. Caring letters are an intervention shown to be effective in reducing suicides in various at-risk populations. In addition, VA program office officials told us that while most VA program offices have their own internal program evaluation services, they do not have sufficient resources to evaluate the effectiveness of all of their programs and policies, motivating them instead to work with QUERI and ORD. For example, the Office of Mental Health and Suicide Prevention has also partnered with QUERI on the STAR-VA program, to examine non-pharmacological approaches to treating agitation and other issues in veterans with dementia. Program office officials told us that they can use the results of these evaluations to influence policy, standard operating procedures, and treatment in the field.

In summary, VA is uniquely positioned to implement research into clinical practice because its research program sits alongside such a large, integrated health care system. As we have noted, coordination between the research program and partner entities could help ensure VA-funded research results in the spread and adoption of evidence-based practices. VA recognizes the importance of this coordination and continues to actively pursue effective coordination strategies.

Agency Comments

We provided a draft of this report to VA for review and comment. VA provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the Secretary of the Department of Veterans Affairs and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Department of Veterans Affairs (VA) Locations Selected for Site Visits

Appendix II: Information on Department of Veterans Affairs’ (VA) Intramural Research Program Funding Levels for Awards

In fiscal year 2018, VA’s appropriation for its intramural research program totaled $722 million. Of this amount, $558 million was for awards made by the Office of Research and Development’s (ORD) four research services and the Cooperative Studies Program. Table 3 below presents data on VA’s intramural research program funding and awards for fiscal year 2018.

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Raymond Sendejas (Assistant Director), Julie T. Stewart (Analyst-in-Charge), Lauren Anderson, Jennie F. Apter, Robin Burke, and Taylor German made key contributions to this report. Also contributing were Jacquelyn Hamilton and Vikki Porter.
Why GAO Did This Study In addition to providing health care services, VA funds research on veterans' health conditions, including chronic conditions (such as diabetes) as well as illnesses and injuries resulting from military service (such as TBI). VA's ORD manages the agency's research program, including its intramural research. In fiscal year 2018, VA resources for its intramural research program included an appropriation of $722 million. GAO was asked to review aspects of VA's research program. In this report, which focuses on VA's intramural research, GAO describes 1) how VA sets priorities for funding research, 2) VA efforts to facilitate translation of research into clinical practice, and 3) coordination between VA's research program and other VA entities. To perform this work, GAO reviewed VA policies, reports, and other documents about VA research efforts. GAO also interviewed officials from ORD, three VA national clinical program offices, and two VA offices that focus on implementing evidence-based practices. In addition, GAO conducted site visits with four VA medical centers. GAO selected those locations because they house VA-funded research centers that focus on a range of topics and ORD programs that focus on disseminating and translating research. At each location, GAO interviewed medical center officials and VA researchers. GAO also reviewed VA summary data on research projects and funding for fiscal year 2018. VA provided technical comments on a draft of this report, which GAO incorporated as appropriate. What GAO Found The Department of Veterans Affairs (VA) uses stakeholder input and other information to set priorities for funding research projects. VA's Office of Research and Development (ORD) manages VA's intramural research program—that is, research funded by and conducted within VA, by VA researchers. To set priorities, ORD considers input from VA and non-VA stakeholders (such as agency leaders and a federal research advisory council, respectively) and data on veterans' health conditions. ORD encourages VA researchers to study—and collaborate with other VA researchers on—priority topics, such as post-traumatic stress disorder (PTSD) and traumatic brain injury (TBI). ORD's Quality Enhancement Research Initiative (QUERI) and other VA entities facilitate translating research findings into clinical practice to improve care for veterans. QUERI is VA's central point of focus for research translation and provides a link between ORD, VA program offices, and providers. For example, one QUERI program is studying delivery of an evidence-based treatment for PTSD using telemedicine, specifically, by providing psychotherapy via video to veterans in rural areas. Another program recently adopted a new research translation strategy by establishing a requirement that research proposals for large, multi-center clinical trials include an implementation plan. VA officials said the goal of the new requirement is to encourage researchers to think about research translation from the beginning of a study—and how their work might be translated into practice. VA officials from both ORD and the national program offices GAO spoke with described a variety of efforts coordinating on research. Such coordination can help inform research priorities and help program offices incorporate evidence-based practices in developing and rolling out national policies. 
For example, ORD officials said that VA researchers were serving as subject matter experts to the national program office developing a protocol and clinical guidelines for a new treatment for certain veterans with depression that is resistant to existing treatments.
Background Enacted on January 23, 1995, the CAA, as amended, applies 13 federal civil rights, workplace, and labor laws to legislative branch employees who were previously exempted from such coverage. Table 1 lists the 13 laws included under the CAA. The CAA contained a series of specific requirements for the Office of Compliance to meet as it carried out its responsibility to administer and enforce the act. Toward this end, the Office of Compliance took a number of actions, such as administering a dispute resolution process; conducting investigations and inspections to ensure compliance with safety, health, and disability access standards; investigating and managing matters concerning labor management relations; and educating both employees and employing offices about their rights and responsibilities under the CAA. The Reform Act expanded the office's duties and responsibilities, as well as the number of employees covered by the CAA. These new duties and responsibilities include, among other things: changing the name of the office to OCWR; substantially modifying the administrative dispute resolution process under the CAA, including creating additional procedures for preliminary hearing officer review of claims; appointing one or more advisers to provide confidential information to legislative branch employees about their rights under the CAA; extending CAA protections to unpaid staff, including interns, detailees, and fellows, as well as previously unprotected legislative branch employees; conducting a workplace climate survey; significantly expanding OCWR reporting obligations; creating a program to permanently retain records of investigations, mediations, hearings, and other proceedings; and establishing an electronic system to receive and keep track of claims. The act mandated that OCWR immediately institute some of these requirements, such as changing the name of the office. Other requirements, such as establishing an electronic system to receive and keep track of claims, were to be met no later than 180 days after the enactment of the act, or by June 19, 2019. To implement its statutory requirements, OCWR currently has 28 full-time equivalent positions, which include five part-time members of OCWR's Board of Directors (counted as one full-time equivalent) appointed by congressional leadership. This represents an increase of five full-time equivalents since April 2018. OCWR Relies on External Entities to Provide IT Services and Systems, Including the Upgrade to Its Claims Management System OCWR relies extensively on IT services and systems provided by external parties to support its mission-related operations and protect claims data. For example, the Library provides network and end-user computing services for OCWR, including email; network services such as Internet access and file sharing; and end-user services and support, such as desktop support and software management. OCWR also relied on an external contractor to develop and maintain its legacy claims management system, known as the Case Management System (CMS). Beginning in 2014, the office used CMS to manage claims submitted by covered legislative branch employees in one of four ways: in person at OCWR's office, or by mail, email, or fax. After a claim was received, an OCWR employee would manually enter the claim information into CMS and update the information as it progressed through the dispute resolution process.
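To make this record keeping concrete, the following is a minimal sketch, in Python, of how a claims system such as CMS might represent a claim and its movement through the dispute resolution process. The field names, status values, and advance method are illustrative assumptions, not OCWR's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class IntakeChannel(Enum):
    IN_PERSON = "in person"
    MAIL = "mail"
    EMAIL = "email"
    FAX = "fax"

class ClaimStatus(Enum):
    # Hypothetical workflow states; OCWR's actual dispute
    # resolution stages may differ.
    RECEIVED = "received"
    UNDER_REVIEW = "under review"
    MEDIATION = "mediation"
    HEARING = "hearing"
    CLOSED = "closed"

@dataclass
class Claim:
    claim_id: str
    channel: IntakeChannel
    received_on: date
    status: ClaimStatus = ClaimStatus.RECEIVED
    history: list = field(default_factory=list)

    def advance(self, new_status: ClaimStatus, on: date) -> None:
        """Record a status change as the claim moves through
        the dispute resolution process."""
        self.history.append((self.status, new_status, on))
        self.status = new_status

# Example: a claim received by fax and later moved to mediation.
claim = Claim("2014-0001", IntakeChannel.FAX, date(2014, 3, 1))
claim.advance(ClaimStatus.UNDER_REVIEW, date(2014, 3, 5))
claim.advance(ClaimStatus.MEDIATION, date(2014, 4, 2))
print(claim.status.value, len(claim.history))
```

Even in a manual process, a structure like this gives each status update a traceable history, which is the kind of record keeping an electronic claims system formalizes.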
In response to the Reform Act enacted in December 2018, OCWR initiated the SOCRATES project to meet the requirement of implementing an electronic system for claims. SOCRATES is intended to enable covered legislative branch employees to file a claim via a web-based form, and an OCWR employee to electronically manage the workflow of claims as they progress through the dispute resolution process. Specifically, the system is expected to maintain and track claim deadlines, generate correspondence, and update and store claim information. OCWR relied on both the Library and an external contractor to upgrade CMS to SOCRATES. As part of its SOCRATES implementation efforts, OCWR first moved the CMS application and claim data from its office to the Library, which began hosting the system in April 2019. Between April 2019 and June 2019, OCWR's external contractor continued work to develop and implement new and updated components for CMS to facilitate the electronic filing and management of claims. In addition, the external contractor worked to develop and implement the web-based form to electronically capture claims. According to OCWR, SOCRATES comprises three components that are hosted by the Library: SOCRATES web-based form: This form is intended to be used by covered legislative branch employees to submit a claim alleging a violation of civil rights, workplace, or labor laws during their employment. Secure information sharing platform: This platform is intended to be a web-based, secure workflow file collaboration application. The platform allows for the sharing of claim-related information between OCWR, the covered employee, the employee's office, and any other relevant parties (e.g., employee representatives). SOCRATES internal CMS console: Based on updated functionality from OCWR's CMS, this console is intended to provide secure, detailed workflow management of each claim that is submitted. Specifically, the console introduces new workflows based on the Reform Act's updated requirements for a claim and allows OCWR employees to internally manage a claim. Figure 1 shows the updated claim filing process using SOCRATES. According to OCWR, testing of SOCRATES the week prior to its June 19, 2019, due date revealed numerous problems with the system. For example, if a user did not submit his or her claim within a certain amount of time, the system refreshed the webpage without saving the user's data, forcing the user to restart the claim. As a result, OCWR delayed the deployment by 7 days to allow time to resolve this issue and others. On June 26, 2019, OCWR deployed SOCRATES and began accepting claims via the web-based form. In addition to SOCRATES, OCWR relies on the external contractor to provide hosting and application support for FMA. FMA is used by OCWR to document reported violations of the Occupational Safety and Health Act. The CAA requires OCWR to conduct biennial inspections of the legislative branch to ascertain compliance with the act and to report its findings to Congress. The office also reports its findings, in a Hazard Summary Report, to the legislative branch agency that is reportedly in violation of the act. The agency is responsible for responding to OCWR and providing verification that the violations and hazards documented in the findings have been abated. Federal Information and Systems Are Increasingly Targeted by Cybersecurity Threats IT systems supporting federal agencies are inherently at risk.
These systems are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the numerous operating systems, applications, and devices that make up the systems and networks. Compounding the risk, federal systems and networks are also often interconnected with other internal and external systems and networks, including the internet. This increases the number of avenues of attack. Information and systems are subject to serious threats that can have adverse impacts on organizational operations and assets, individuals, other organizations, and the nation. These threats can include purposeful attacks, environmental disruptions, and human/machine errors, and may result in harm to the national and economic security interests of the United States. In recognition of the growing threat, we have designated information security as a government-wide high-risk area since 1997. In 2003, we expanded the information security high-risk area to include the protection of critical cyber infrastructure. We further expanded the information security high-risk area in 2015 to include protecting the privacy of personally identifiable information. Cybersecurity incidents continue to impact federal agencies, including those entities in the federal executive and legislative branch. For example, in fiscal year 2017, federal executive branch civilian agencies reported 35,277 incidents to the U.S. Computer Emergency Readiness Team. These incidents included web-based attacks, phishing, and the loss or theft of computing equipment. These incidents and others like them can pose a serious challenge to economic and national security and personal privacy. The following examples highlight the impact of incidents from legislative and executive branch entities: In January 2019, the Department of Justice announced that it had indicted two Ukrainian nationals for their roles in a large-scale, international conspiracy to hack into the Securities and Exchange Commission's computer systems and profit by trading on critical information they stole. The indictment alleges that the two hacked into the commission's Electronic Data Gathering, Analysis, and Retrieval system and stole thousands of files, including annual and quarterly earnings reports containing confidential, nonpublic, financial information, which publicly traded companies are required to disclose to the commission. In July 2016, the Library announced that it had experienced a significant distributed denial-of-service attack that affected multiple internal and external Library systems and services. Specifically, the attack successfully disrupted services to multiple Library systems and services, including email, databases, and public web domains, such as Congress.gov. According to the Library, the attack was sophisticated in both its size and the methods it employed. In June 2015, the Office of Personnel Management reported that an intrusion into its systems had affected the personnel records of about 4.2 million current and former federal employees. Then, in July 2015, the agency reported that a separate, but related, incident had compromised its systems and the files related to background investigations for 21.5 million individuals. In total, the Office of Personnel Management estimated that 22.1 million individuals had some form of personally identifiable information stolen, with 3.6 million being victims of both breaches.
Key Cybersecurity Management Activities Relevant to OCWR Have Been Established in Law and Guidance Recognizing the importance of information security and privacy, Congress enacted the Federal Information Security Modernization Act of 2014 (FISMA), which requires federal agencies in the executive branch to develop, document, and implement an information security program and to evaluate the program for effectiveness. The act retains many of the requirements for federal agencies' information security programs previously set by the Federal Information Security Management Act of 2002. As legislative branch entities, OCWR and the Library are not subject to FISMA. However, OCWR's Executive Director and the Library's Chief Information Officer have chosen to follow aspects of the law's requirements. For example, an interagency agreement between OCWR and the Library describes plans to protect OCWR's CMS application and claim data using NIST guidance that is intended to satisfy FISMA requirements and relates to managing risks to the information system. The 2002 act also assigns certain responsibilities to NIST, which is tasked with developing standards and guidelines for systems other than national security systems. These standards and guidelines must include, at a minimum, (1) standards to be used by all agencies to categorize all of their information and information systems based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. Accordingly, NIST developed a risk management framework of standards and guidelines for agencies to follow in developing information security programs. The framework addresses broad information security and risk management activities to be followed in developing information systems, including categorizing the system's impact level; selecting, implementing, and assessing security controls; authorizing the system to operate (based on progress in remediating control weaknesses and an assessment of residual risk); and monitoring the efficacy of controls on an ongoing basis. GAO Has Previously Reported on OCWR Project Management Challenges and Information Security Weaknesses within the Library's IT Environment In December 2019, we reported that OCWR faced management challenges in implementing its new requirements under the Reform Act, such as establishing a program to permanently retain records of investigations, mediations, hearings, and other proceedings. Specifically, we determined that OCWR did not always use project schedules to manage the implementation of the requirements of the Reform Act. For example, we noted that the office used a project schedule for developing the workplace climate survey, but did not use a project schedule to manage the SOCRATES project. We also determined that OCWR did not address risks associated with its records retention program. For example, we noted that the office had not yet developed policies and procedures to address the risks associated with permanently retaining sensitive records, such as ensuring they remain confidential when stored in multiple locations.
Our report also identified weaknesses in OCWR's IT planning, including that the office did not develop long-term strategies for recruiting and retaining staff with critical skills and competencies needed to achieve current and future agency goals. Accordingly, our report included six recommendations for the office related to incorporating key management practices into project planning and ensuring that it has the necessary skills and capacity to meet its mission. OCWR agreed with our recommendations and described plans to address them. We have also previously reported on weaknesses with the Library's information security program, as well as specific security controls that support OCWR's systems and services. In March 2015, we issued a report that identified weaknesses in the Library's information security program. We made 10 recommendations to the Library aimed at better protecting IT systems and reducing the risk that the information they contain will be compromised. These recommendations included, among other things, developing contingency plans for all systems and conducting comprehensive and effective security testing for all systems within the time frames called for by Library policy. The Library generally agreed with our recommendations and described planned and ongoing actions to address them. As of January 2020, the Library had fully implemented nine of the 10 recommendations and had taken steps to implement the remaining recommendation. We have work underway to determine whether the steps taken by the Library fully address the remaining recommendation. In a related June 2015 limited official use only report, we made 74 detailed security recommendations aimed at addressing specific weaknesses in the Library's security controls. The Library generally agreed with our security recommendations and described planned and ongoing actions to address them as well. As of January 2020, the Library had fully implemented 72 of 74 detailed security control recommendations from this report and had plans to implement the remaining two recommendations by February 2020. OCWR Did Not Incorporate Key Cybersecurity Management Activities into Project Planning for Its Claim Management System Upgrade Effectively managing a project entails, among other things, developing a project schedule, defining and managing requirements, and effectively managing project risks. Project scheduling. The success of a program depends, in part, on having an integrated and reliable master schedule that defines, among other things, when work activities will occur, how long they will take, and how they relate to each other. A reliable schedule provides a road map for systematic execution of a program and a means by which to gauge progress, identify and address potential problems, and promote accountability. GAO's Scheduling Assessment Guide lists 10 best practices associated with a high-quality and reliable schedule, including capturing and sequencing all activities, as well as establishing the duration of all activities. Requirements management. Requirements establish what the system is to do, how well it is to do it, and how it is to interact with other systems. The Software Engineering Institute's Capability Maturity Model Integration® for Acquisition (CMMI-ACQ) and Capability Maturity Model Integration® for Development (CMMI-DEV) note that requirements management processes are important for enabling programs to ensure that their set of approved requirements is managed to support planning and execution needs.
This should include steps to obtain stakeholders' review and commitment to the requirements and to manage changes to requirements as customer needs evolve. Project risk management. The discipline of risk management is important to help ensure that projects are delivered on time, within budget, and with the promised functionality. According to leading practices for acquisition, the purpose of risk management is to identify potential issues that could endanger achievement of critical objectives before they occur. A continuous risk management approach effectively anticipates and mitigates risks that can have a critical impact on a project. Organizations that plan to acquire IT products and services for a project should also identify and assess risks associated with the acquisition process. Incorporating cybersecurity management activities (such as the selection and implementation of security controls) into each of these project planning areas can help to reduce cybersecurity risks and better protect critical assets. For example, according to NIST's risk management framework, integrating system security requirements into a project's planning activities, such as scheduling, can help to ensure that resources are available when needed and that project milestones are met. In addition, the framework notes that defining the system security requirements early and integrating them with other system requirements can result in a system having fewer deficiencies, and therefore, fewer security vulnerabilities that can be exploited in the future. The framework also describes the importance of identifying security risks early in a system project and addressing such risks on an ongoing basis. However, OCWR did not effectively manage the SOCRATES project because it did not establish a schedule, develop and manage requirements, or manage risks. Consequently, the office did not incorporate key cybersecurity management activities into each of these project planning areas. Specifically: OCWR did not manage the SOCRATES project using an established, approved project schedule that identified when cybersecurity activities would be completed. As discussed earlier, we previously reported that OCWR did not establish a project schedule to manage the SOCRATES project. Although the office drafted a project schedule in January 2019, this schedule was not finalized and used during the project. According to OCWR's Director of the IT Governance, Risk Management, and InfoSec Compliance Program, the schedule was not used due to, among other things, challenges encountered in managing the interdependencies of SOCRATES development with the implementation of other Reform Act requirements (e.g., modifying the administrative dispute resolution process). Consequently, OCWR did not use a project schedule to manage key SOCRATES cybersecurity activities, including those to be completed by OCWR, the Library, and the contractor. To its credit, the Library provided an early project schedule that included certain cybersecurity activities it performed related to CMS. For example, the Library's project schedule documented initial activities the Library was to perform that related to procurement of equipment, installation of software, security testing, and vulnerability remediation in order to move CMS from OCWR to the Library.
However, OCWR did not use a project schedule for the upgrade of CMS to SOCRATES that included the time frames for key cybersecurity management activities, such as selecting and documenting security controls, implementing controls, and assessing controls. The lack of a project schedule likely hindered OCWR's ability to respond to changes during the project and execute key cybersecurity management activities in a timely manner. For example, in May 2019, OCWR made a decision to use a locally hosted platform at the Library for its secure information sharing platform instead of a cloud-based solution. Without a project schedule, OCWR was unable to assess the impact of this late change on the time available for completing remaining cybersecurity management activities. OCWR did not establish a requirements management process or develop a set of detailed system requirements, including cybersecurity requirements. OCWR did not establish a requirements management process that included steps to obtain stakeholders' review and commitment to the requirements and to manage changes to the requirements. Instead, the office established a set of business flow diagrams, which identified how claim information would move within OCWR and SOCRATES. Further, OCWR did not establish a set of detailed system requirements, including the cybersecurity requirements (e.g., what cybersecurity controls were to be implemented). OCWR did not document and manage risks to the SOCRATES project, including those related to cybersecurity. OCWR did not document and manage risks for the SOCRATES project. Specifically, the office did not document and manage risks related to cybersecurity and did not mitigate those risks that could have had a critical impact on the project. For example, OCWR was not able to ensure that the Library tested all moderate-level security controls for the SOCRATES web-based form and secure information sharing platform before the system was deployed. However, this was not documented or managed by OCWR as a risk. In addition, as discussed later in this report, there were also risks associated with OCWR's reliance on the Library and its external contractor that were implementing cybersecurity responsibilities on its behalf. For example, we identified shortfalls in OCWR's oversight of the planning and conducting of system security assessments. However, no risks related to the office's reliance on external parties were documented or managed throughout the project. According to the Director of the IT Governance, Risk Management, and InfoSec Compliance Program, the office did not complete key project planning activities and documentation, in part, because of the compressed time frame associated with the project and the need to complete it by its mandated June 19, 2019, completion date. In aiming to meet this date, the OCWR official added that the office held frequent discussions with the contractor and made changes "on the fly" to ensure that OCWR met the mandate. However, frequent discussions with the contractor do not negate the need to document and manage cybersecurity activities using leading project planning practices, including a project schedule, a requirements management process, and a risk management process. OCWR's project management weaknesses also occurred, in part, because the office lacked policies and procedures for IT project scheduling, requirements management, and risk management. Such policies and procedures are critical to have in place as OCWR plans future IT projects.
For example, as of October 2019, the office was planning to move its other key system, FMA, to the Library in 2020. Until OCWR develops and implements policies and procedures for incorporating cybersecurity management activities into its IT project planning using a project schedule, a requirements management process, and a risk management process, it will continue to have a limited ability to effectively manage and monitor the completion of cybersecurity activities and will face increased cybersecurity risks. OCWR Did Not Fully Implement Oversight Activities for Selected IT Systems Operated by External Parties on Its Behalf The responsibility for adequately mitigating risks arising from the use of externally-operated systems remains with the agency itself. NIST Special Publications 800-53 and 800-53A guide agencies in selecting security and privacy controls for systems and assessing them to ensure that the selected controls are in place and functioning as expected. Additional NIST special publications on IT security services and risk management (Special Publications 800-35 and 800-37) identify several key activities important for assessing the security and privacy controls of information systems operated by external entities. The key activities and the steps included in NIST Special Publications 800-35 and 800-37 are shown in table 2. For the two selected systems—SOCRATES and FMA—OCWR either partially implemented, or did not implement, system oversight activities. Table 3 details the extent to which OCWR implemented system oversight activities and is followed by a discussion of each activity. Establish security and privacy requirements. OCWR partially implemented this oversight activity for both SOCRATES and FMA. Communicate requirements to external entities. OCWR communicated certain security and privacy requirements to its external partners for these two systems. For example, the office's agreements with the Library for SOCRATES stated that the system will be secured in accordance with NIST security guidelines, including Special Publication 800-37, and the Library's security policy guidelines. However, OCWR did not always include sufficiently detailed language in agreements to ensure that requirements were communicated effectively. For example, the office did not always provide sufficient language to communicate privacy requirements related to the protection of personally identifiable information within its SOCRATES or FMA agreements. Further, OCWR's agreements related to FMA expired during our review and contained references to retired Library guidelines that are no longer applicable or enforceable with regard to OCWR's external contractor. Select and document security and privacy controls. OCWR worked with the Library to select and document about 300 security and privacy controls and control enhancements for SOCRATES within a system security plan. Further, the office worked with the Library to support the selection of controls by documenting privacy risks and impacts to SOCRATES within a privacy impact assessment, which NIST calls for to assess the privacy risks associated with collecting and using personal information, and which was referred to in the system security plan. However, OCWR did not adequately oversee the selection and documentation of security and privacy controls in the system security plan that was used to plan and conduct initial control assessments for SOCRATES.
In particular, the office did not always ensure that the system security plan for SOCRATES provided an appropriate description of controls to be implemented to meet the security and privacy requirements. For example, in certain instances, the system security plan described SOCRATES as a low-impact system when describing the security controls used to protect the system. However, these descriptions differed from the system's actual classification as a moderate-impact system, as documented within an interagency agreement between OCWR and the Library. As another example, the system security plan for SOCRATES incorrectly described a security control related to the maintenance of SOCRATES as not applicable to moderate-impact systems. However, NIST's classification of this control describes it as applicable to moderate-impact systems. For the FMA system, OCWR relied on its external contractor to document a system security plan that generally described security requirements for the system. However, the plan did not document the privacy requirements or the specific security and privacy controls that were expected to be implemented for FMA as a low-impact system. For example, the plan did not specify an authority to which information should be reported in the event of a security incident. Further, the plan did not include or refer to other necessary security and privacy documentation, such as a privacy impact assessment. As a result, OCWR did not adequately oversee the completion of this key step for its FMA system. Plan assessment of security controls. OCWR partially implemented this oversight activity for SOCRATES and did not implement it for FMA. Select an independent assessor. OCWR relied on the Library to select an assessor for SOCRATES who was independent. For example, for SOCRATES, the Library used an external contractor to initially assess the system and reported taking steps to verify that the assessor was independent from the Library. However, the office did not adequately oversee the completion of this key step for SOCRATES and did not ensure that the assessor used for the system was independent from the office. Specifically, OCWR allowed the Library to select the assessor for SOCRATES and did not take steps to verify the assessor's independence. Further, for FMA, OCWR did not select an assessor to review the system. Develop a test plan. Although OCWR relied on the Library to develop a test plan for SOCRATES, the test plan used to conduct initial control testing was not approved by the office and did not specify the procedures that were to be followed to test each control from the SOCRATES system security plan. For example, the SOCRATES test plan specified a high-level procedure for collecting relevant artifacts but did not specify what particular documentation would be collected or reviewed for each control identified in the system security plan. Regarding FMA, OCWR and its external contractor did not develop a test plan. Conduct assessment. OCWR partially implemented this oversight activity, which includes executing the test plan, for SOCRATES and did not implement it for FMA. Specifically, OCWR worked with the Library to perform initial control testing for SOCRATES and document the results in an online tracking system; however, as previously mentioned, the office did not ensure that a test plan with detailed procedures to test each control was developed and approved prior to the initial testing of SOCRATES.
As a result, the office did not adequately oversee the execution of the test plan by the Library to ensure that controls that were assessed as implemented were effectively operating as intended. For FMA, OCWR and its external contractor did not execute a test plan or document the results of any tests for the system. Review assessment. OCWR partially implemented this oversight activity, which includes developing POA&Ms for remediation of weaknesses, for SOCRATES and did not implement it for FMA. Specifically, OCWR worked with the Library to develop POA&M data for SOCRATES that included many of the recommended NIST elements, such as estimated completion dates and issue identification. For example, following initial control testing in March 2019, OCWR and the Library worked to develop POA&M data for 62 security control weaknesses, including 24 high-risk and 38 moderate-risk weaknesses. As of November 2019, there were seven POA&Ms, including six categorized as high-risk and one as moderate-risk, that OCWR and the Library had not yet addressed. However, as previously mentioned, the office did not ensure that a test plan that included detailed procedures to test each control was developed and approved prior to the initial testing of SOCRATES. Therefore, the office could not ensure that controls were tested appropriately to identify necessary remedial actions in POA&Ms. As a result, OCWR did not adequately oversee the completion of this step and ensure that key POA&Ms were appropriately documented. For FMA, without an executed test plan, OCWR and its external contractor could not complete or update POA&Ms for the system. According to OCWR officials, including the office's Deputy Executive Director, part of the reason for these shortfalls was that the office did not obtain expertise in security to aid in the completion of these oversight activities until September 2018, when the office hired a new IT Manager. In addition, OCWR officials, including the Deputy Executive Director, could not explain why the contractor did not produce key oversight-related artifacts, such as those related to the security testing of controls, as agreed upon in contracts covering FMA during the performance period. However, a key contributing reason that we identified for the shortfalls in OCWR's oversight of external partners was that OCWR had not documented procedures to direct the office in performing such oversight activities effectively. The lack of documented oversight procedures and shortfalls in OCWR's oversight of its external partners contributed to concerns with the deployment of SOCRATES. For example: As previously discussed, OCWR did not ensure that all moderate-level security controls for the SOCRATES web-based form and secure information sharing platform were tested before the system was deployed in June 2019. For example, a control related to testing contingency plans for the SOCRATES web-based form was not assessed until August 2019, approximately 2 months after the system was deployed. Although penetration testing of the CMS portion of SOCRATES was completed in May 2019, OCWR did not ensure that penetration testing of the SOCRATES web-based form and secure information sharing platform was conducted before deployment. Penetration testing for the SOCRATES web-based form and secure information sharing platform was subsequently completed in December 2019, approximately 6 months after the system was deployed.
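To illustrate how tracking of this kind can surface such gaps before a go-live decision, the following is a minimal sketch, in Python, of a pre-deployment check built on simplified assessment and POA&M records. The record structures, field names, and example data are assumptions based on the NIST POA&M elements noted above, not the actual tracking system used by OCWR or the Library.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ControlAssessment:
    control_id: str              # e.g., a NIST SP 800-53 control such as "CP-4"
    assessed_on: Optional[date]  # None if the control has not been tested

@dataclass
class PoamItem:
    weakness_id: str
    risk_level: str              # "high" or "moderate"
    estimated_completion: date
    closed: bool = False

def deployment_readiness(assessments, poams, deploy_on):
    """Flag unassessed controls and open high-risk weaknesses
    ahead of a planned deployment date."""
    untested = [a.control_id for a in assessments
                if a.assessed_on is None or a.assessed_on > deploy_on]
    open_high = [p.weakness_id for p in poams
                 if not p.closed and p.risk_level == "high"]
    return untested, open_high

# Example mirroring the report's timeline: contingency plan testing
# (CP-4) occurred roughly 2 months after the June 2019 deployment.
assessments = [ControlAssessment("AC-2", date(2019, 3, 15)),
               ControlAssessment("CP-4", date(2019, 8, 20))]
poams = [PoamItem("W-001", "high", date(2019, 12, 1)),
         PoamItem("W-002", "moderate", date(2019, 9, 1), closed=True)]

untested, open_high = deployment_readiness(assessments, poams,
                                           deploy_on=date(2019, 6, 26))
print("Controls not tested before deployment:", untested)
print("Open high-risk weaknesses:", open_high)
```

A check like this does not substitute for an approved test plan, but it makes the status of each control and weakness visible at the moment a deployment decision is made.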
Until OCWR develops and implements effective oversight procedures over its external partners, it may not be able to mitigate risks that could result in the loss of sensitive data or compromise of the office's external systems. We also assessed selected security controls in place for SOCRATES and FMA including, but not limited to, configuration management, patch management, and personnel security. We intend to issue a separate limited official use only report that discusses the results of this review. OCWR Has Not Fully Established an Effective Approach for Managing Organization-Wide Cybersecurity Risk NIST's cybersecurity framework is intended to support federal agencies as they develop, implement, and continuously improve their cybersecurity risk management programs. In this regard, the framework identifies cybersecurity activities for achieving specific outcomes over the lifecycle of an organization's management of cybersecurity risk. According to NIST, the first stage of the cybersecurity risk management lifecycle—which the framework refers to as "identify"—is focused on foundational activities for effective risk management that provide agencies with the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities. Additional NIST guidance, including its risk management framework, provides information on implementing foundational activities and achieving desired outcomes that calls for, among other things, the following: A risk executive in the form of an individual or group that provides agency-wide oversight of risk activities and facilitates collaboration among stakeholders and consistent application of the risk management strategy. This functional role helps to ensure that risk management is institutionalized into the day-to-day operations of organizations as a priority and integral part of carrying out missions. A cybersecurity risk management strategy that articulates how an agency intends to assess, respond to, and monitor risk associated with the operation and use of the information systems it relies on to carry out the mission. The strategy should, among other things, make explicit an agency's risk tolerance, accepted risk assessment methodologies, a process for consistently evaluating risk across the organization, risk response strategies, approaches for monitoring risk over time, and priorities for investing in risk management. Risk-based policies and procedures that act as the primary mechanisms through which current security requirements are communicated to help reduce the agency's risk of unauthorized access or disruption of services. If properly implemented, these policies and procedures can effectively reduce the risk posed by such threats. For example, establishing policies and procedures that incorporate NIST's risk management framework can help to ensure that a consistent approach is used to conduct a complete security assessment before a system is deployed and that a designated agency official certifies the system for operation based on progress in remediating control weaknesses and an assessment of residual risk. To its credit, OCWR's strategic plan for fiscal years 2019 through 2023 includes a goal of developing, among other things, cybersecurity risk policies and procedures. The strategic plan also describes the office's plans to ensure compliance with applicable IT and cybersecurity standards.
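To give a sense of what a documented strategy pins down, the following is a minimal sketch, in Python, of the kind of explicit risk tolerance and consistent response rule that NIST guidance says a strategy should make explicit. The tolerance value, scoring scale, and response categories are illustrative assumptions, not elements of any actual OCWR strategy or a NIST requirement.

```python
from dataclasses import dataclass

@dataclass
class RiskManagementStrategy:
    # Illustrative strategy elements; the values here are assumptions,
    # not OCWR policy.
    risk_tolerance: int           # maximum acceptable risk score (1-25)
    assessment_method: str        # e.g., "NIST SP 800-30 qualitative"
    monitoring_interval_days: int

def assess_risk(likelihood: int, impact: int) -> int:
    """Score a risk on a simple 5x5 likelihood-by-impact scale."""
    return likelihood * impact

def respond(score: int, strategy: RiskManagementStrategy) -> str:
    """Apply a consistent, organization-wide response rule."""
    if score <= strategy.risk_tolerance:
        return "accept and monitor"
    if score <= strategy.risk_tolerance * 2:
        return "mitigate per POA&M"
    return "escalate to risk executive"

strategy = RiskManagementStrategy(risk_tolerance=6,
                                  assessment_method="NIST SP 800-30 qualitative",
                                  monitoring_interval_days=90)
print(respond(assess_risk(likelihood=4, impact=4), strategy))  # escalate
```

Writing these elements down is what allows every system owner to apply the same tolerance and the same escalation path, rather than deciding case by case.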
Nevertheless, OCWR has not yet fully established an effective approach to organization-wide cybersecurity risk management that includes foundational elements. Specifically, although the office's Director of the IT Governance, Risk Management, and InfoSec Compliance Program stated that he was serving as the risk executive, this role and its related responsibilities are not documented in OCWR's policies. In addition, OCWR has not developed an organization-wide cybersecurity risk management strategy or determined a time frame for when the policies and procedures discussed in its strategic plan will be implemented. According to the Director of the IT Governance, Risk Management, and InfoSec Compliance Program, the reason for these shortfalls in risk management was that the office's top priority was completing work on the SOCRATES system, and then it planned to work on its cybersecurity policies and procedures. Additionally, the official stated that OCWR considers development of documentation to be a continual process, and that the office would like to develop and build procedures to lay a foundation for effective risk management. However, until OCWR establishes the role and responsibilities of the risk executive function in policy, the office will lack an understanding of who is ultimately responsible for overseeing the cybersecurity risk activities of the organization and what those responsibilities include. Further, until OCWR establishes and implements a strategy for managing its cybersecurity risks using NIST's framework, its ability to make operational decisions that adequately address security risks and prioritize IT security investments will be hindered. Finally, until OCWR establishes a time frame for developing and implementing risk-based policies and procedures, it will lack assurance that consistent steps are being taken to categorize systems; select, implement, and assess system security controls; and make risk-based decisions on authorizing systems to operate. Conclusions Although OCWR completed the upgrade of its legacy claims management system through the SOCRATES project, the office did not incorporate cybersecurity activities into the project during planning. As a result, OCWR was left without a complete understanding of potential schedule issues, the system's planned security requirements, and cybersecurity-related risks to the success of the project. These shortcomings existed, at least in part, because of a lack of OCWR policies and procedures that required that cybersecurity management activities be incorporated into project scheduling, requirements management, and risk management. Until OCWR develops and implements such policies and procedures, future IT projects—such as the office's planned transition of its FMA system to the Library—may face unnecessary cybersecurity risks and may not be carried out in an efficient and effective manner. OCWR made initial efforts to assess the implementation of security and privacy controls for the two selected externally-operated systems, but did not fully implement critical oversight activities. A contributing reason for these shortfalls is that OCWR had not documented procedures for the office to follow in order to perform such oversight of its external entities effectively. This ultimately contributed to OCWR being unable to test important system security controls for ensuring the confidentiality, integrity, and availability of the system before it was deployed.
Until OCWR establishes and implements specific procedures for overseeing external entities, it will have reduced assurance that external entities are adequately securing and protecting the office's information. In addition, the office will face increased risks that system weaknesses may go undetected and unresolved, which could result in the loss of sensitive data or compromise of its systems. Given the increasing number and sophistication of cyber threats facing federal agencies, it is critical that organizations such as OCWR are well positioned to make consistent, informed risk-based decisions in protecting their systems and information against these threats. To its credit, OCWR has recognized the need for an improved organization-wide approach to its cybersecurity policies and IT governance in its most recent strategic plan. However, important elements of an effective organization-wide cybersecurity approach have not been fully implemented, including establishing the roles and responsibilities for the risk executive function in policy, a cybersecurity risk management strategy, and policies and procedures for managing cybersecurity risks. Until OCWR fully addresses these organization-wide cybersecurity risk management practices, its ability to ensure effective oversight and management of IT will remain limited. Moreover, OCWR may be limited in its ability to strengthen its risk posture, including ensuring effective cybersecurity across its relationships with external entities that are critical to its ability to provide IT services and systems needed to meet its mission. Recommendations We are making five recommendations to the Office of Congressional Workplace Rights: The Executive Director should ensure the development and implementation of policies and procedures for incorporating key cybersecurity activities into IT project planning, including scheduling, requirements management, and risk management. (Recommendation 1) The Executive Director should ensure the development and implementation of oversight procedures for each externally-operated system that include establishing security and privacy requirements, planning the assessment of security controls, conducting the assessment, and reviewing the assessment. (Recommendation 2) The Executive Director should ensure the establishment of roles and responsibilities for a risk executive function. (Recommendation 3) The Executive Director should ensure the development and implementation of a cybersecurity risk management strategy. (Recommendation 4) The Executive Director should ensure that the office commits to a time frame for developing and implementing policies and procedures for managing cybersecurity risk. (Recommendation 5) Agency Comments, Third-Party Views, and Our Evaluation We provided a draft of this report to OCWR, the Library, and the third-party contractor for review and comment. In response, we received written comments from OCWR, which are reproduced in appendix II. In its comments, the office did not state whether it agreed or disagreed with our recommendations, but described initial actions taken and planned to address them. Specifically, OCWR noted that it has initiated several actions, such as revising the office's IT systems project planning to ensure the development and implementation of policies and procedures incorporating key cybersecurity activities. Further, OCWR stated that it intends to implement additional changes, such as developing and implementing oversight procedures for each externally-operated system.
Going forward, OCWR stated that it intends to update us on its progress in implementing the recommendations. We also received technical comments from the Library's Deputy Chief Information Officer via email, which we incorporated as appropriate. In addition, the third-party contractor indicated via email that it had no concerns about, and worked with OCWR in responding to, the draft report. We are sending copies of this report to the appropriate congressional committees, the Executive Director of the Office of Congressional Workplace Rights, the Librarian of Congress, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9342 or marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to examine the extent to which the Office of Congressional Workplace Rights (OCWR) (1) incorporated key cybersecurity management activities into the project planning for its claims management system upgrade, (2) performed oversight of security controls and mitigated risks for selected systems operated by external parties on its behalf, and (3) established an effective organization-wide approach for managing cybersecurity risk. To assess OCWR's incorporation of key cybersecurity management activities into the project planning for its claim management system upgrade (known as the Secure Online Claims Reporting and Tracking E-filing System, or SOCRATES), we reviewed available OCWR project planning documentation related to establishing a project schedule, requirements management process, and risk management process. This documentation included, for example, a draft SOCRATES project schedule, contract information, and business flow diagrams. We then compared OCWR's documentation to leading practices for project planning, including those identified by the Software Engineering Institute. Three key areas needed to effectively manage projects are developing a project schedule, managing project requirements, and managing project risks. We also analyzed OCWR's available project planning documentation to determine the extent to which it incorporated key cybersecurity management activities, as identified by the National Institute of Standards and Technology (NIST) risk management framework. These key activities are: obtaining a system categorization, selecting and implementing security controls, assessing security controls, obtaining an authority to operate, and monitoring security controls. Further, we conducted interviews with OCWR officials, including the General Counsel and the Director of the Information Technology (IT) Governance, Risk Management, and InfoSec Compliance Program, to assess the extent to which the office incorporated key cybersecurity management activities into its SOCRATES project planning. To assess the extent to which OCWR performed oversight of security controls and mitigated risks for selected externally-operated systems, we chose two systems—SOCRATES and the Facility Management Assistant (FMA).
We chose these two systems because they process and maintain OCWR's most sensitive information, including claims related to alleged violations of employee rights and protections and reported occupational safety and health violations. We then collected and reviewed cybersecurity policies, procedures, and documentation (e.g., system security plans) from the office and its external partners that related to protecting the security and privacy of information and systems. To assess the reliability of the SOCRATES system security plan and its security control testing data obtained from the Library's online repository, we reviewed related documentation (e.g., security assessment results briefings), reviewed the data for obvious omissions (i.e., fields left blank), and performed electronic testing to identify outliers (a simplified illustration of such checks appears below). We also interviewed Library officials to discuss the reliability of the data. Based on our assessment, we determined that the data were sufficiently reliable for the purpose of our reporting objectives. We then examined whether OCWR and its external partners implemented—for each selected system—four oversight activities important for assessing the security and privacy controls of information systems operated by external entities, as specified in federal requirements and guidance, including NIST Special Publications 800-35 and 800-37. The four oversight activities we examined were: (1) establishing security and privacy requirements, (2) planning the assessment of security controls, (3) conducting the assessment, and (4) reviewing the assessment. We chose these activities because of their importance to providing effective oversight of systems operated by external entities. Further, we assessed whether OCWR implemented policies and procedures set forth by the office, including contractor oversight activities performed by the responsible official. We also conducted interviews with officials from OCWR, including the General Counsel, Deputy Executive Director, and Director of the IT Governance, Risk Management, and InfoSec Compliance Program. In addition, we interviewed key personnel from OCWR's external partners, such as the Library's Deputy Chief Information Officer and the President of the external contractor, to assess the extent of OCWR's oversight activities for SOCRATES and FMA. We assessed selected security controls in place for SOCRATES and FMA including, but not limited to, configuration management, patch management, and personnel security. We intend to issue a separate limited official use only report that discusses the results of this review. To assess OCWR's efforts to establish an effective organization-wide approach for cybersecurity risk management activities, we used NIST's cybersecurity framework, which identifies foundational components of effective cybersecurity risk management. We also used additional guidance provided by NIST for implementing the foundational components and achieving desired outcomes. These components included the establishment of a risk executive function, cybersecurity risk management strategy, and risk-based security policies and procedures. We then evaluated OCWR's organization-wide cybersecurity risk management approach by, among other things, analyzing available policies and plans, management reports, and strategic planning documentation against the foundational cybersecurity risk management components identified in NIST guidance.
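As referenced above, the electronic testing of the control testing data can be approximated with simple checks for blank fields and out-of-range values. The following sketch, in Python, is illustrative only; the record fields, example values, and validity window are assumptions, not GAO's actual test scripts.

```python
from datetime import date

# Hypothetical control-testing records; field names are assumptions.
records = [
    {"control_id": "AC-2", "result": "implemented", "tested_on": date(2019, 3, 14)},
    {"control_id": "CP-4", "result": "", "tested_on": date(2019, 8, 20)},
    {"control_id": "SC-7", "result": "other than satisfied", "tested_on": date(2025, 1, 1)},
]

def blank_fields(recs):
    """Flag records with obvious omissions (fields left blank)."""
    return [r["control_id"] for r in recs
            if any(v in ("", None) for v in r.values())]

def outliers(recs, earliest=date(2019, 1, 1), latest=date(2019, 12, 31)):
    """Flag test dates outside the expected assessment window."""
    return [r["control_id"] for r in recs
            if not earliest <= r["tested_on"] <= latest]

print("Blank fields:", blank_fields(records))   # ['CP-4']
print("Date outliers:", outliers(records))      # ['SC-7']
```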
Further, we conducted semistructured interviews with relevant OCWR officials responsible for establishing the office's approach to managing cybersecurity risk, including the General Counsel and the Director of the IT Governance, Risk Management, and InfoSec Compliance Program. We conducted this performance audit from January 2019 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Office of Congressional Workplace Rights Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Jon Ticehurst (Assistant Director), Lisa Hardman (Analyst in Charge), Edward Alexander, Jr., Angela Bell, Christina Bixby, David Blanding, Hannah Brookhart, Kisa Bushyeager, Christopher Businsky, West Coile, Linda Erickson, Rebecca Eyler, Kaelin Kuhn, Sukhjoot Singh, Eugene Stevens, and Adam Vodraska made key contributions to this report. Giny Cheong, Edda Emmanuelli-Perez, Elizabeth Fan, Steven Lozano, Rebecca Woiwode, and Edith Yuh also provided valuable assistance.
Why GAO Did This Study OCWR is an independent, nonpartisan office that administers and enforces various provisions related to fair employment and occupational safety and health within the legislative branch. To meet its mission, OCWR relies extensively on external parties, such as the Library of Congress, for IT support. In December 2018, Congress passed the Congressional Accountability Act of 1995 Reform Act (Reform Act), which, among other things, required OCWR to create a secure, online system to receive and keep track of claims related to employee rights and protections, such as sexual harassment and discrimination. To meet this requirement, OCWR initiated the SOCRATES project to upgrade its legacy claims management system. The Reform Act included a provision for GAO to review OCWR's cybersecurity practices. This report examines the extent to which OCWR (1) incorporated key cybersecurity management activities into project planning for its claims management system upgrade, (2) performed oversight of security controls and mitigated risks for selected systems operated by external parties on its behalf, and (3) established an effective approach for managing organization-wide cybersecurity risk. To address these objectives, GAO compared OCWR IT policies, procedures, strategic plans, and documentation for two selected systems to leading IT project planning, system oversight, and cybersecurity management practices. What GAO Found The Office of Congressional Workplace Rights (OCWR) did not incorporate key cybersecurity management practices into the planning for its Secure Online Claims Reporting and Tracking E-filing System (SOCRATES) project. While OCWR drafted a SOCRATES project schedule, the office did not finalize and use this schedule to manage cybersecurity activities, such as the time frames for conducting information technology (IT) system security assessments. In addition, the office did not document project cybersecurity risks, such as the office's reliance on external parties to implement responsibilities on its behalf. These weaknesses were due, in part, to a lack of policies and procedures for IT project planning. Until OCWR establishes and implements such policies and procedures, it will continue to have a limited ability to effectively manage and monitor the completion of cybersecurity activities for its IT projects. OCWR did not fully implement important oversight activities for two selected systems—SOCRATES and the system used to document occupational safety and health violations, known as the Facility Management Assistant (FMA)—operated by external entities (see table). These shortfalls contributed to concerns with the deployment of SOCRATES in June 2019. For example, important security controls needed to ensure the confidentiality, integrity, and availability of the system were not fully tested before the system was deployed. In addition, penetration testing—where evaluators mimic real-world attacks in an attempt to identify ways to circumvent the security features of the system—was not fully completed before deployment. GAO plans to issue a separate report with limited distribution on its assessment of security controls intended to, among other things, prevent successful attacks. Although OCWR's strategic plan includes a goal of developing cybersecurity policies and procedures, the office had not fully established an effective approach for managing organization-wide cybersecurity risk.
For example, OCWR designated an executive to oversee risk, but had not defined the official's responsibilities in the office's policies. Until OCWR improves its approach to managing cybersecurity risks, its ability to make operational decisions that adequately address security risks will be hindered. What GAO Recommends GAO is making five recommendations to OCWR to address weaknesses in cybersecurity management and oversight. OCWR did not state whether it agreed or disagreed with GAO's recommendations, but described actions planned or taken to address them.
Background Physician Shortages According to HRSA, the current demand for physicians in the United States will likely continue, with a projected shortage of 23,640 primary care physicians by 2025. While increasing physician supply is one way to reduce physician shortages, some experts have also suggested increasing the number of non-physician providers. For example, HRSA predicted that, with health care delivery changes that would allow for NPs and PAs to deliver a greater proportion of services than they do now, the projected shortage of 23,640 primary care physicians in 2025 could be mitigated. According to the Bureau of Labor Statistics, in 2018, there were 756,800 physicians, over 189,100 NPs, and 118,800 PAs practicing in the United States. Graduate Training for Physicians, NPs, and PAs Physicians. Physician GME, also known as residency, provides the clinical training required for a physician to be eligible for licensure and board certification to practice medicine independently in the United States. Specifically, after completing medical school and receiving a medical degree, physicians enter a multi-year residency training program during which they complete their formal education as a physician, primarily in teaching hospitals. Completion of a residency can take from 3 to 7 years after graduation from medical school, depending on the specialty or subspecialty chosen by the physician. In some cases, physicians may choose to pursue additional training—referred to as fellowships—to become a subspecialist, such as a cardiologist. NPs and PAs. Since the first NP and PA training programs in the United States were founded in 1965, these professions and their educational requirements have evolved to allow them to furnish more care that was traditionally provided by physicians, such as diagnosing patients, prescribing medication, and performing certain procedures. The extent to which they can provide care independently from physician supervision varies by state. There are multiple pathways for students to become NPs. In general, after completion of a bachelor’s degree, a nurse may become an NP once he or she earns a master’s or doctoral degree in nursing. Full-time master’s programs are generally 18 to 24 months and doctoral programs are generally 3 to 5 years. Both programs include classroom and clinical work. In addition, NP students may have varying amounts of hands-on nursing experience before entering an NP program. NP programs generally include the following focus areas: family practice, women’s health, acute care, adult/geriatric health, child health, neonatal health, and mental health. NPs are trained according to a nursing care model, which emphasizes providing comprehensive care for patients that encompasses their physical and other needs. After completion of a bachelor’s degree, students become PAs once they earn a master’s degree in physician assistant studies. The average full-time PA program takes about 27 months to complete, which includes classroom education followed by clinical work conducted through rotations in internal medicine, family medicine, surgery, pediatrics, obstetrics and gynecology, emergency medicine, and behavioral medicine. In addition, PA students have varying amounts of hands-on work experience in health care before entering a PA program. PAs are trained to approach patient care according to a medical model focused on assessing, diagnosing, and treating disease.
Both NP and PA students are required to complete clinical work as part of their graduate programs by providing care to patients under the supervision of a preceptor—an experienced and licensed health care provider who provides instruction and supervision to the student during their clinical rotations. Upon graduation and after passing a national certification exam and obtaining a license in the state in which they choose to work, both NPs and PAs can begin practicing. NPs and PAs may also complete an optional post-graduate residency training program, but unlike physicians, they are not required to do so in order to obtain a state license to practice. Figure 1 shows an example of education and training paths for physicians, NPs, and PAs. Federal Funding for Physician, NP, and PA Training Physicians. The vast majority of federal funding for physician GME training is distributed by HHS through CMS’s Medicare GME program. In our 2018 report on GME funding, we found that Medicare GME payments for physicians totaled more than $10.3 billion in 2015. CMS pays for a hospital’s costs associated with GME training through two mechanisms—direct graduate medical education and indirect medical education payments—both of which are formula-based payments set by statute. Direct payments are for costs that include, for example, residents’ salaries and benefits, compensation for faculty who supervise the residents, and overhead costs. Indirect payments cover the higher patient care costs that teaching sites are thought to incur as a result of training residents, such as increased diagnostic testing and procedures performed. Payments to hospitals may also include funds for training in nonhospital settings. Other sources of federal GME funding for physicians include the Medicaid program, which is jointly administered by CMS and the states; programs administered by HRSA; and other federal agencies outside of HHS. NPs and PAs. HHS funding is available to train NPs and PAs, primarily through HRSA grants authorized under titles VII and VIII of the Public Health Service Act. Specifically, according to HRSA officials, funding for programs that included NP and PA training totaled approximately $136.2 million in fiscal year 2019. (See table 2 for a description of these programs.) In recent years, CMS also provided funding for graduate training for advanced practice registered nurses, including NPs, from fiscal years 2012 through 2018 as part of the Graduate Nurse Education Demonstration. The Graduate Nurse Education Demonstration was established by the Patient Protection and Affordable Care Act to determine whether payments for clinical training provided to hospitals would increase the number of advanced practice registered nurses, including NPs, and whether these payments would affect the number of advanced practice registered nurses by specialty. Stakeholders Identified Benefits and Challenges of Expanding the Medicare GME Program to Include NP and PA Graduate Training Officials from the stakeholder organizations we interviewed identified the potential benefits and challenges of expanding the Medicare GME program to include NP and PA graduate training. Benefits of Expanding the Medicare GME Program Predictability and stability of Medicare GME program funding.
According to officials from five of the nine stakeholder organizations we interviewed—three NP and two PA organizations—a benefit of expanding the Medicare GME program is that it may create more predictability and stability for training funding for NPs and PAs. This would be beneficial because it would allow NP and PA programs to do better long-range planning, such as planning for the number of NP and PA students that can be admitted. Officials from two of these stakeholder organizations noted that a benefit of the Medicare GME program for physicians is that funding is historically more stable than the funding available to NPs and PAs through HRSA. Specifically, Medicare GME funding is mandatory, while funding for NP and PA training programs administered by HRSA is discretionary. During the annual appropriations process, Congress may choose to appropriate the amount requested by HRSA, to increase or decrease those levels, or to not appropriate any funds. For example, Congress appropriated $28.5 million in fiscal year 2018 for HRSA’s Nurse Faculty Loan Program, but then decreased this appropriation to $13.5 million in fiscal year 2019. Potential opportunity to pay preceptors. Officials from four of the nine stakeholder organizations—two NP and two PA organizations—noted that one benefit of expanding the Medicare GME program to include NP and PA graduate training is that funding could be used to pay preceptors as an incentive to supervise students. CMS and others have also reported that schools of nursing have faced significant challenges increasing enrollments, in part due to difficulty finding preceptors willing to supervise students. Similarly, officials from two PA organizations we interviewed noted that some programs may choose not to fill their available enrollment slots because they are concerned about finding enough preceptors to allow all their students to graduate. Officials from four of the stakeholder organizations we interviewed noted that supervising students can take time away from the preceptor’s productivity in seeing patients, and that some practices and health care systems do not allow their health care providers to serve as preceptors. Specifically, officials from two stakeholder organizations we interviewed said that, historically, these preceptors have volunteered as a way of “giving back” to their profession and have not been paid for their time. However, due to difficulties finding a sufficient number of volunteer preceptors, some graduate programs have begun reimbursing preceptors for their time in order to encourage their participation. CMS’s Graduate Nurse Education Demonstration also included funding for preceptors. Challenges of Expanding the Medicare GME Program Differences in training requirements. Officials from six of the nine stakeholder organizations—two NP, two PA, and two physician organizations—raised concerns about challenges that could occur because NP and PA clinical training requirements do not align with the current structure of the Medicare GME program. For example, officials from some of these organizations noted that the Medicare GME program is structured to fund physician residency training, which is required in order for physicians to practice, but NPs and PAs are not required to complete a residency after completing a graduate program in order to practice. Specifically, CMS makes GME payments to hospitals according to formulas outlined in statute based, in part, on the number of physician residency positions.
Therefore, officials from some of these stakeholder organizations said that any change to Medicare GME to include NPs and PAs would need to consider how to allocate GME funding for NP and PA programs in light of these differences in training requirements among physicians, NPs, and PAs. Potential limitations on Medicare funding for physician training. Officials from seven of the nine stakeholder organizations we interviewed—four NP, one PA, and two physician organizations—expressed concern that expanding the Medicare GME program to increase the number of NPs and PAs without increasing overall funding may negatively affect the funding available for physician training. For example, officials from one stakeholder organization said that reallocating available Medicare GME dollars could be problematic and potentially diminish needed resources for others. An official from one of these stakeholder organizations said that there is currently not enough funding to provide residency training for all qualified physicians, and adding NPs and PAs to the existing pool of underfunded residency candidates would worsen the funding shortage. Officials from another stakeholder organization echoed this concern by noting that expanding the Medicare GME program could force NPs to compete with physician residents for patients and space. Available Estimates of NP and PA Training Costs Were Limited and Incomplete Through our review of the literature and our interviews with officials from stakeholder organizations, CMS, and HRSA, we identified two estimates of NP or PA graduate training costs. CMS’s evaluation of its Graduate Nurse Education Demonstration estimated the total cost of graduate clinical NP training to be about $47,000 per student, based on the funds paid to the demonstration sites from fiscal year 2012 through fiscal year 2018, and the Physician Assistant Education Association estimated the total cost of graduate PA training to be about $45,000 per student, based on the results of its 2018 survey. While these two estimates provide some information about the costs of training NPs and PAs, they are limited and incomplete. CMS Graduate Nurse Education Demonstration. CMS estimated NP graduate training costs totaling $47,172 per graduate according to its evaluation of the Graduate Nurse Education Demonstration, in which CMS funded graduate clinical training for advanced practice registered nurses—a category that includes NPs—over 6 years. This estimate represents the cost to CMS (defined as the total funds paid to the demonstration sites during the duration of the demonstration) for the clinical training of each graduating student. Part of the cost covered by the demonstration includes the costs of clinical preceptors. Congress appropriated a total of $200 million for the demonstration, which operated from fiscal years 2012 through 2018. The demonstration funded clinical training at five hospitals, which partnered with 19 schools of nursing, and multiple community-based care settings. The cost estimate underreports the total cost of NP graduate training because, while both clinical and classroom training are required for NP students to graduate, CMS’s demonstration only provided funding for clinical training, as specified by the Patient Protection and Affordable Care Act. Specifically, CMS does not include the costs associated with classroom training, certification, and licensure of advanced practice registered nursing students.
In addition, the demonstration targeted advanced practice registered nurses, which is a broader category that includes NPs in addition to other types of specialty nurses such as certified nurse-midwives and certified registered nurse anesthetists. However, according to CMS, the vast majority of advanced practice registered nursing students enroll in NP programs. CMS’s evaluation also noted that the cost estimates are not generalizable because they are only based on information from the schools that participated in the demonstration. (See table 3.) Physician Assistant Education Association survey data. The Physician Assistant Education Association, whose members are graduate PA programs, estimated PA graduate training costs totaling $45,309 per student—an estimate based on the results of a published annual survey of its members in 2018. This represents the average cost to the member PA programs for training a student in a 27-month PA graduate program, based on expense data reported by programs from the 2017-2018 fiscal year. The Physician Assistant Education Association’s data are self-reported by PA graduate programs. In addition, these data likely underreport the total costs of graduate PA training because they exclude in-kind contributions from clinical sites. These contributions, such as the donated time from volunteer preceptors, are necessary for clinical training. Officials estimated that paying for costs supported by these in-kind contributions—which the Physician Assistant Education Association estimated to be about $11,300 per student—would likely add about 25 percent to the total estimated cost to train PAs. (See table 4.) Agency Comments and Third-Party Views We provided a draft of this report to HHS for review and comment. The department provided technical comments, which we incorporated as appropriate. We also provided relevant draft portions of this report to the nine professional associations that we interviewed, and they provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate Congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contact and Staff Acknowledgments GAO Contact James Cosgrove, (202) 512-7114 or cosgrovej@gao.gov. Staff Acknowledgments In addition to the contact named above, Kelly DeMots (Assistant Director), Teresa Tam and Sarah-Lynn McGrath (Analysts-in-Charge), and Margaret Fisher made key contributions to this report. Also contributing were Leia Dickerson, Diona Martyn, Caitlin Scoville, Ethiene Salgado-Rodriguez, and Jennifer Whitworth.
Why GAO Did This Study An adequate, well-trained health care provider workforce is essential to ensure Americans have access to quality health care services. However, studies have shown the United States faces a shortage of physicians, making it increasingly difficult for people to access needed health care. Experts have identified ways to address this shortage, such as through strategies that increase the number of other types of non-physician providers, including NPs and PAs. For example, members of Congress and others have questioned whether expanding the scope of the Medicare GME program to include NPs and PAs could help mitigate the effects of a physician shortage. A Senate Committee on Appropriations report included a provision for GAO to examine the potential of making GME payments under the Medicare program for NPs and PAs. This report describes: (1) stakeholder views on the potential benefits and challenges of expanding the Medicare GME program to include NP and PA graduate training; and (2) available information on the estimated costs of NP and PA graduate training. GAO reviewed literature and interviewed officials from nine professional associations with knowledge of NP, PA, and physician graduate training; and agency officials. Based on these interviews, GAO identified sources of information on estimated costs and reviewed those sources. What GAO Found The federal government funds many education programs for health care providers, but the vast majority of this funding—more than $10.3 billion in 2015—supports physician residency training through the Department of Health and Human Services' (HHS) Medicare graduate medical education (GME) program. This program does not fund graduate training for nurse practitioners (NP) and physician assistants (PA), who deliver many of the same services as physicians, such as diagnosing patients and performing certain procedures. Instead, a smaller portion of federal funding—approximately $136 million in fiscal year 2019—is available to train them. Stakeholders GAO interviewed said that one benefit of expanding Medicare GME is that Medicare GME funding would provide more stable funding for NP and PA training, compared to existing programs. Stakeholders said one challenge of such an expansion is that clinical training requirements for NPs and PAs are different from those for physicians; therefore, any change to Medicare GME to include NPs and PAs would need to consider how to allocate GME funding in light of these differences. GAO identified two estimates of costs for completing an NP or PA graduate school program; while the estimates provide some information about these costs, they are limited and incomplete. The Centers for Medicare & Medicaid Services' (CMS) evaluation of its Graduate Nurse Education Demonstration estimated the total costs over the 2012-2018 demonstration period to be about $47,000 per NP student. While clinical and classroom training are required for NP students, CMS's demonstration only provided funding for clinical training, as specified by statute, and the estimate is not generalizable beyond the participating schools. The Physician Assistant Education Association estimated the total costs to be about $45,000 per PA student. The estimate is based on self-reported data from a 2018 survey of member PA programs and excludes in-kind contributions for clinical training. GAO received technical comments on this report from HHS and the professional associations interviewed and incorporated them as appropriate.
Background Several DOE Offices Manage Multiple Forms of Surplus Plutonium Three DOE offices manage 57.2 MT of plutonium declared surplus to defense needs. These offices—NNSA, EM, and DOE’s Office of Nuclear Energy (NE)—and their sites manage a variety of surplus plutonium in the form of pits, metal, oxide, spent nuclear fuel, and other reactor fuels, and they follow specific procedures to manage the plutonium safely and securely. NNSA manages over half of this surplus plutonium. According to NNSA, all three offices share the responsibility for final disposition of surplus plutonium. Figure 1 shows the amounts of surplus plutonium managed by the offices. Figure 2 shows the various forms of this surplus plutonium, including pits, non-pit metal, non-pit oxide, and spent nuclear fuel or other reactor fuels in the inventory, by DOE office. DOE’s Surplus Plutonium Disposition Strategies Have Changed over Time Since 1997, DOE’s surplus plutonium disposition strategies have changed in terms of the method of disposal and the location for disposal, according to DOE documents and officials. These disposition strategies have included immobilization, irradiation as MOX fuel, and dilution. In 1997, NNSA planned to immobilize surplus plutonium by encapsulating it in glass or ceramic materials but terminated its plans in 2002 due to budget constraints. In the mid-2000s, EM briefly considered vitrification, which is a form of immobilization using glass, but never developed a plan to implement it. NNSA planned to irradiate surplus plutonium as part of the MOX fuel strategy but terminated its plans in 2018 because of high costs. NNSA’s plans for irradiation of MOX fuel would also have required disposal of the spent nuclear fuel in a high-level waste repository. EM began implementing a dilute and dispose strategy for a separate portion of surplus plutonium in 2012, but suspended its efforts before resuming them in 2016. NNSA’s 2018 conceptual plan for the dilute and dispose strategy would replace the MOX fuel strategy with final disposal of the diluted plutonium at WIPP. Figure 3 shows a timeline of the changes in DOE’s strategies since 1997, as well as some key events that have affected the strategies. See appendix II for a timeline of DOE’s disposition strategies and appendix III for a timeline of key events concerning DOE’s Surplus Plutonium Disposition Program. Even if NNSA and EM had successfully implemented strategies for immobilization, vitrification, or irradiation of MOX fuel, DOE would have had no place to dispose of the surplus plutonium that was prepared for disposal because it planned to dispose of this material in a high-level waste repository, and no high-level waste repository has yet been constructed. WIPP would not have been able to take surplus plutonium from these disposition strategies because federal law authorizing disposal of radioactive waste at WIPP specifically bans the disposal of high-level waste and spent nuclear fuel, and the final forms of the surplus plutonium from these disposition strategies would have included both. DOE’s plans for a high-level waste repository have also changed over time. No progress toward licensing and building a high-level waste repository has been made since DOE terminated its licensing efforts in 2010. A high-level waste repository is likely still decades away from becoming operational. Appendix IV contains more information on the progress DOE has made toward licensing and building a high-level waste repository.
NNSA’s Dilute and Dispose Strategy Requires That Pits Be Dismantled and Plutonium Metal Be Converted to an Oxide NNSA’s current dilute and dispose strategy requires that surplus pits, as well as other surplus plutonium in metal form, be converted to plutonium oxide. NNSA’s now-terminated strategy to use surplus plutonium to make MOX fuel also required that surplus plutonium be converted to plutonium oxide. In the early 2000s, NNSA had planned to build a facility—the Pit Disassembly and Conversion Facility at SRS—that was to be dedicated to disassembling pits and converting them to plutonium oxide to meet the high plutonium oxide production requirements for manufacturing MOX fuel. Because of its high costs, however, NNSA canceled the Pit Disassembly and Conversion Facility in January 2012 after having spent $730.1 million on its design, as we reported. In August 2012, DOE provided a report to Congress that described a mix of plutonium oxide production capabilities to replace the canceled Pit Disassembly and Conversion Facility. According to the 2012 report, DOE planned to convert at least 2 MT of surplus plutonium pits to plutonium oxide by 2018 in PF-4 at LANL and to produce an additional 3.7 MT of plutonium oxide at SRS by 2017. According to its 2012 report, NNSA planned for this plutonium oxide to be a reserve of advance feedstock for the MFFF, which NNSA anticipated would begin operations in 2019. According to NNSA, SRS turned out not to be cost-effective at producing plutonium oxide. Specifically, SRS produced 35 kilograms (0.035 MT) of plutonium oxide at SRS’s H Canyon facility over a 2.5-year period ending in 2018. NNSA discontinued plutonium oxide production at H Canyon and focused its plans on expanding ARIES operations at PF-4. According to NNSA, ARIES operations at PF-4 currently host the nation’s only cost-effective plutonium oxide production capability. In 1998, DOE established ARIES at PF-4 at LANL in New Mexico as a technology demonstration project to dismantle pits and convert plutonium metal into an oxide, incorporating automation to reduce liquid waste and workers’ exposure to radiation. ARIES’s technology for converting plutonium to plutonium oxide was designed to generate very little chemical waste and to permit the application of automation, which significantly reduces the risk of workers’ exposure to radiation. Pits have historically been disassembled by a cutting machine. Before ARIES’s technology, recovery of plutonium from cut pits was by an aqueous process—that is, by using liquid chemical processing—which generated significant volumes of both liquid and solid waste. In 2008, NNSA shifted the ARIES mission from a technology demonstration project to a small plutonium oxide production capability. According to NNSA officials, ARIES has produced approximately 1 MT of plutonium oxide from pits since it was established in 1998, with peak production of 242 kilograms (0.242 MT) in 2011 during a partial year of operations. NNSA officials explained that ARIES did not produce larger amounts of plutonium oxide because the agency was still evaluating alternatives for expanding plutonium oxide production, but they estimated that ARIES could produce 300 kilograms to 400 kilograms in a full year of operations. In addition, LANL shut down the PF-4 facility, including ARIES, from June 2013 through September 2016 to correct safety and operational issues. During this time, plutonium oxide production using ARIES in PF-4 was suspended.
Plutonium oxide is the preferred form for long-term storage of plutonium because it is relatively stable compared to other forms. Plutonium oxide is also the form of plutonium that is most suited for dilution. ARIES consists of glove boxes, furnaces, and other equipment to dismantle a pit and extract the plutonium; convert the plutonium into an oxide form; mill and blend the plutonium oxide; conduct physical and chemical analyses of the plutonium oxide; and package and store the plutonium oxide for eventual disposition. NNSA’s 2018 conceptual plan to dilute and dispose of surplus plutonium calls for plutonium metal to be converted to plutonium oxide using ARIES at PF-4 and then for the plutonium oxide to be diluted at SRS for eventual disposal at WIPP. Figure 4 shows the dilute and dispose strategy as described in NNSA’s 2018 conceptual plan. DOE Could Convert 43.8 MT, or About 77 Percent, of Surplus Plutonium in Its Inventory to Plutonium Oxide for Dilution and Disposal DOE could convert 43.8 MT, or about 77 percent, of surplus plutonium in its inventory of 57.2 MT to plutonium oxide for dilution and disposal because this plutonium is in a metal form suitable for oxidation, based on our review of DOE’s inventory of surplus plutonium. Most of this surplus plutonium metal—33.3 MT—is in the form of pits and is managed by NNSA. EM manages 6.5 MT of surplus plutonium metal and NE manages the remaining 4 MT of surplus plutonium metal reactor fuel at Idaho National Laboratory. Separately, EM also manages 6.4 MT of surplus plutonium that is already in oxide form. Figure 5 shows the forms of surplus plutonium in DOE’s inventory of 57.2 MT of surplus plutonium requiring disposition. As noted above, EM manages 6.4 MT, or 11 percent, of surplus plutonium that already exists as plutonium oxide. According to NNSA officials, SRS is currently diluting this oxide at a modest rate of about 20 kilograms (0.02 MT) annually. According to NNSA documents, the agency plans to add throughput capacity within a decade. The remaining 7 MT of surplus plutonium, or about 12 percent of DOE’s surplus plutonium inventory, is contained in spent nuclear fuel and is not suitable for conversion to plutonium oxide. This material would require additional chemical processing steps to make it suitable for conversion to plutonium oxide. DOE officials said that they planned to dispose of the 7 MT of spent nuclear fuel in a deep geologic repository, which would avoid the need to develop facilities and processes for conversion to plutonium oxide. DOE officials said that this fuel could also be disposed of through other to-be-determined disposition paths. Currently, EM manages the spent nuclear fuel that contains 7 MT of this surplus plutonium at various locations throughout the country. NNSA’s Long-Term Plutonium Oxide Production Plan Is Uncertain because of Two Key Issues NNSA’s 2018 conceptual plan calls for converting 26.2 MT of surplus plutonium into oxide by 2045. In September 2019, NNSA approved the production of about 1.2 MT of plutonium oxide through 2025 at LANL. However, plans for converting additional surplus plutonium into plutonium oxide are uncertain primarily because of two issues. These issues are (1) NNSA’s still-developing plans for new pit production, which will also take place at LANL; and (2) issues surrounding the agency’s ability to ship newly produced plutonium oxide for dilution to DOE’s Savannah River Site (SRS) in South Carolina.
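As an illustrative arithmetic check of the inventory breakdown above (our calculation, not a figure taken from DOE documents), the three categories account for the full 57.2 MT inventory, and the rounded percentages follow directly:

\[
43.8 + 6.4 + 7.0 = 57.2 \text{ MT}; \qquad \frac{43.8}{57.2} \approx 77\%, \quad \frac{6.4}{57.2} \approx 11\%, \quad \frac{7.0}{57.2} \approx 12\%.
\]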
According to agency officials, NNSA and DOE are taking several actions that, if successfully implemented, are designed to allow NNSA to meet its long-term plutonium oxide production goals. These actions include continuing to review plutonium oxide and pit production plans, increasing plutonium storage at LANL, reducing the amount of SRS’s surplus plutonium, and shipping the diluted plutonium from SRS to WIPP. NNSA’s 2018 Conceptual Plan Would Increase Plutonium Oxide Production at LANL NNSA’s 2018 conceptual plan called for expanding plutonium oxide production capacity in PF-4 for the dilute and dispose strategy to achieve production of 1.5 MT per year by 2033. NNSA planned to sustain this rate of production at LANL for 12 years to convert a total of 26.2 MT of pits to plutonium oxide before ramping down operations in 2045. The agency’s 2018 conceptual plan estimated that this increased production would cost approximately $5 billion over the life of the program. To achieve the 1.5 MT annual production rate, NNSA planned to expand the physical space of ARIES’s operations in PF-4 by about 50 percent; install new equipment, such as glove boxes; purchase additional equipment, such as spare parts and new shipping containers; and hire over 200 new staff. To accommodate the larger workforce, NNSA also planned to construct a new employee entrance in PF-4. In September 2019, NNSA approved a short-term plan to produce a total of nearly 1.2 MT of plutonium oxide at PF-4 from 2019 through 2025. This short-term plan closely matches the total plutonium oxide production outlined in NNSA’s 2018 conceptual plan for the same time frame. Two Key Issues May Affect NNSA’s Long-Term Plutonium Oxide Plans In February 2019, NNSA officials said that they were reevaluating the agency’s long-term plutonium oxide production goals in the 2018 conceptual plan because of two key issues. These issues are space constraints relating to (1) the agency’s mission to produce new pits in PF-4 and (2) requirements to remove plutonium from SRS. According to agency officials, NNSA and DOE are taking several actions designed to allow NNSA to meet the long-term plutonium oxide production goals described in its 2018 conceptual plan. New Pit Production Could Impede Plutonium Oxide Production, but NNSA is Taking Some Actions to Address This Issue As we reported in November 2018, NNSA officials said that a planned nuclear weapons refurbishment and future warhead programs will require the production of new pits. Almost all of the pits in the current U.S. nuclear weapons stockpile were produced before 1990, according to a May 2015 congressional report. In May 2018, NNSA announced that it intended to build 30 pits annually in PF-4 at LANL by 2026 and 50 pits annually at the MFFF at SRS by 2030, under a plan to repurpose the MFFF for pit production. According to an August 2019 LANL presentation to potential subcontractors, this effort will include the installation of more than 140 new gloveboxes or other enclosures in PF-4 and the construction of more than 700,000 square feet of supporting infrastructure (such as offices, a parking garage, and a cafeteria). The President’s budget for fiscal year 2020 includes over $3 billion for this effort through 2024. In April 2019, the NNSA Administrator said meeting pit production requirements was the agency’s highest infrastructure priority. NNSA also may have to increase pit production at LANL beyond 30 pits per year.
For example, in May 2018 the Nuclear Weapons Council stated that it was essential that NNSA provide resources for surge pit production capacity in PF-4 at LANL until pit production is fully established at SRS. In addition, the National Defense Authorization Act for fiscal year 2019 requires the Department of Defense and NNSA to contract with a federally funded research and development center to conduct an assessment of, among other things, a strategy for producing 80 pits per year at LANL. NNSA officials told us in February 2019 that, as a result of pit production requirements, the agency might need to use, for pit production, a portion of the processing areas in PF-4 that it had planned to use for plutonium oxide production. Pit production also may require more space in the high-security vault in PF-4, where plutonium must be temporarily stored. Also in February 2019, NNSA officials said that PF-4’s high-security storage space is already near full capacity and that pit production may demand storage space that NNSA had planned to use for plutonium oxide production. NNSA officials said that the agency is taking some actions designed to accommodate increasing both pit production and plutonium oxide production in PF-4. If successfully implemented, these actions are designed to allow the program to meet the milestones described in the 2018 conceptual plan, according to NNSA officials. These actions include: Reviewing use of operational space in PF-4. LANL reported in March 2019 that the requirement to produce 30 pits per year would have no significant negative impact on plutonium oxide production. However, LANL reported that a number of programs, including pit production, were planning to increase operations in PF-4, placing demands on the aging facility that could lead to more frequent maintenance outages. In August 2019, NNSA officials responsible for plutonium oxide production and pit production said they continue to believe that increased oxide production and pit production can be simultaneously accomplished in PF-4 but that they are continuing to review the issue as the agency’s pit production plans evolve. In NNSA’s comments on our report, the NNSA Administrator said the agency was working to balance the needs of both missions. The Administrator also noted that NNSA’s Office for Cost Estimating and Program Evaluation will assess the effect of plutonium oxide production on pit production as required by section 3120 of the National Defense Authorization Act for fiscal year 2019. The conference report accompanying the act also requires that we review this assessment, which we will initiate in late 2019. Increasing plutonium storage capacity. LANL also reported in March 2019 that it planned to implement several mitigation measures that would allow the storage of more plutonium oxide and other materials in the PF-4 vault. In addition, DOE and NNSA have “swapped” 1 MT of the declared surplus plutonium at SRS with 1 MT of plutonium residues and other primarily non-pit plutonium already stored in LANL’s PF-4 vault. NNSA officials said that the plutonium residues and other primarily non-pit plutonium at LANL would be considered surplus plutonium and would be converted to plutonium oxide, requiring less storage space. Without these mitigation measures, the PF-4 vault would fill up years earlier, according to NNSA officials.
NNSA officials said they believe the swap will increase storage space through 2028, at which point LANL would need to ship plutonium oxide to SRS or face a suspension of plutonium oxide production. Requirement to Remove Plutonium from SRS Could Impede Shipping Plutonium Oxide There, but NNSA is Taking Some Actions to Address This Issue Storing quantities of plutonium oxide in PF-4’s high-security storage vault is critical because, according to NNSA officials, it is not likely that NNSA will ship plutonium oxide or other forms of plutonium to SRS until a dispute with the state of South Carolina is resolved. Specifically, the National Defense Authorization Act for fiscal year 2003 required DOE to prepare a plan for the construction and operation of the MFFF at SRS so that it could produce MOX fuel at an average rate of at least 1 MT per year. As subsequently amended, the law provides that if DOE did not meet this 1 MT production objective by January 1, 2014, then it was required to remove 1 MT of defense plutonium from South Carolina by January 1, 2016. If DOE missed that deadline, it was required to make substantial payments to South Carolina until the removal was completed. As NNSA faced delays and cost increases in constructing the MFFF and began to reevaluate its surplus disposition strategy, South Carolina sued DOE in February 2016, seeking to compel the department to begin removing plutonium from the state and to make payments to the state of up to $100 million per year until the surplus plutonium is removed. In December 2017, the court ordered DOE to remove 1 MT of plutonium from South Carolina by 2020. In response, according to court filings, NNSA moved 0.5 MT of plutonium from SRS to its Nevada National Security Site prior to November 2018 and moved another 0.5 MT of plutonium off-site in August 2019. DOE is still required by statute to remove an amount of defense plutonium or defense plutonium material equal to that which was transferred to SRS after April 15, 2002, but not processed by the MOX facility by January 2022. The officials told us that because of this continuing requirement and the threat of further lawsuits by South Carolina, it was unlikely that NNSA could ship plutonium oxide to SRS until the surplus plutonium at SRS is removed. NNSA officials said that the agency is taking some actions designed to address these issues. These actions include: Increasing plutonium oxide production rates with a priority on oxidizing plutonium material from SRS. NNSA officials said in August 2019 that they are in discussions with LANL to increase the short-term production of plutonium oxide to speed the removal of surplus plutonium from South Carolina. According to NNSA officials, NNSA and LANL are considering increasing plutonium oxide production through 2025 beyond what is called for in the short-term plan the agency approved in September 2019. This would involve shipping additional surplus plutonium metal from SRS to LANL and prioritizing the conversion of this material to plutonium oxide. According to agency officials, LANL would achieve this additional plutonium oxide production by using new ARIES equipment installed in PF-4 in 2019. To achieve this increased production, NNSA officials said that LANL would need to hire 70 personnel through 2025 to operate ARIES. Agency officials said that these steps would increase total plutonium oxide production to approximately 2.1 MT through 2025, an increase of nearly 1 MT over the short-term plan NNSA approved in September 2019.
Increasing dilution and disposal rates of the inventory of plutonium oxide already at SRS. DOE and NNSA officials said that they would also increase dilution of existing plutonium oxide at SRS beyond what is called for in the 2018 conceptual plan to help reduce the inventory of plutonium metal already there. In April 2019, NNSA officials said their current dilution rate at SRS was about 20 kilograms (0.02 MT) annually, but that they plan to increase that rate to 1.5 MT per year by the late 2020s. Under its 2018 conceptual plan, NNSA had planned to achieve that dilution rate by 2031, but the budget request for NNSA for fiscal year 2020 shows that NNSA plans to complete installation of the capability necessary to achieve that dilution rate by as early as fiscal year 2028. The effort—known as the Surplus Plutonium Disposition project—has an estimated cost ranging from $200 million to $589 million. It includes removing unnecessary equipment from SRS, accelerating the project’s construction schedule, installing long-lead procurement items early in construction, and hiring and certifying additional personnel. According to NNSA officials, this increase in dilution capacity by 2028 would enable NNSA to begin shipping plutonium oxide to SRS for dilution and disposal without suspending plutonium oxide production at PF-4. While NNSA is taking actions to address pit production and shipment issues, the agency continues to work on refining the long-term plutonium oxide production goals in its 2018 conceptual plan. However, NNSA officials said that establishing firm long-term plutonium oxide production plans now would be premature and that the agency would use the next several years to balance plutonium oxide production, pit production, and shipment issues as it refines long-term production plans. Agency Comments We provided a draft of this report to NNSA and DOE for review and comment. In its response to our draft report, reproduced in appendix V, NNSA said that it and DOE are working to balance the needs of its dilute and dispose program, which includes oxide production, and pit production, as well as the need to remove plutonium from the state of South Carolina. NNSA said, as noted in our report, that its Office for Cost Estimating and Program Evaluation would assess the effects of increased plutonium oxide production on pit production. NNSA also said that even with delays in production of plutonium oxide, the dilution and disposition of surplus plutonium will still be substantially less expensive than if the agency had maintained its MOX fuel approach. As stated in our report, we have a large body of work that has examined the MOX fuel approach, NNSA’s management of the MOX project, and DOE’s $17 billion cost estimate to complete the project, which we assessed as being reliable. In addition, NNSA provided us with technical comments and additional documentation, which we incorporated into our report as appropriate. Some of the information that NNSA provided helped clarify near-term plutonium oxide production plans as well as the agency’s progress in balancing the plutonium oxide production plans, pit production, and the need to move plutonium out of the state of South Carolina. This information is incorporated in our report and is reflected in the report’s revised title. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology Our report (1) determines the amount of surplus plutonium in the Department of Energy’s (DOE) inventory that could be converted to plutonium oxide for dilution and disposal and (2) examines DOE’s capacity to produce plutonium oxide. To determine the amount of surplus plutonium in DOE’s inventory that could be converted to plutonium oxide for dilution and disposal, we reviewed relevant DOE documents and interviewed officials from DOE, including from DOE’s National Nuclear Security Administration (NNSA) and DOE’s Office of Environmental Management (EM), on the amounts and forms of surplus plutonium in DOE’s inventory that would require conversion to an oxide prior to final disposition. Our review included DOE’s plans for converting surplus plutonium to plutonium oxide beginning in 1997, when DOE first decided to convert surplus plutonium to plutonium oxide for disposition. We also visited the Los Alamos National Laboratory (LANL) in New Mexico to review documentation and interview officials in the Surplus Plutonium Disposition Program for information on past and current inventories of surplus plutonium. NNSA’s Advanced Recovery and Integrated Extraction System (ARIES), the program that currently converts surplus plutonium to plutonium oxide, resides in Plutonium Facility-4 (PF-4) at LANL. To examine DOE’s capacity to produce plutonium oxide, we reviewed relevant DOE documents and interviewed officials from DOE, including from NNSA and EM, on the status of plutonium oxide production in PF-4 and at DOE’s Savannah River Site, where surplus plutonium was converted to plutonium oxide over a 2.5-year period. We also reviewed relevant DOE documents and interviewed these officials about DOE’s plans. For example, we reviewed records of decision and environmental impact statements that DOE issued during its management of the Surplus Plutonium Disposition Program. We reviewed planning documents related to the dilute and dispose strategy, including DOE’s life-cycle cost estimate and supporting documents covering issues such as time frames and conversion rates. We visited the ARIES program in PF-4 in January 2018 to review documentation and conduct interviews with officials responsible for plutonium oxide production and the planned expansion of plutonium oxide production. The site visit included a tour of PF-4, ARIES and its operations, and potential spaces in PF-4 for expansion of ARIES operations for converting surplus plutonium metal to oxide. We conducted this performance audit from October 2017 to October 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Timeline of the Changes to Department of Energy (DOE) Disposition Strategies under the Surplus Plutonium Disposition Program DOE first established the Surplus Plutonium Disposition Program in 1997 to dispose of surplus, weapons-usable plutonium at the end of the Cold War. As of April 2019, the United States has declared a total of 61.5 metric tons (MT) of plutonium as surplus to defense needs. DOE has disposed of 3.2 MT of surplus plutonium at the Waste Isolation Pilot Plant (WIPP), an underground repository for transuranic waste located near Carlsbad, New Mexico, and is in the process of disposing of an additional 1.1 MT of surplus plutonium. This leaves 57.2 MT of surplus plutonium in its inventory, as of May 2019. The table below shows the timeline of changes to DOE strategies for managing surplus plutonium for final disposition. Appendix III: Timeline of Key Events Concerning the Department of Energy’s (DOE) Surplus Plutonium Disposition Program
1997 – DOE announces the Surplus Plutonium Disposition Plan, including the Mixed Oxide Fuel Fabrication Facility (MFFF).
2000 – The United States and Russia enter into the Plutonium Management and Disposition Agreement (PMDA), each agreeing to dispose of at least 34 metric tons (MT) of plutonium at a rate of at least 2 MT per year.
2000 – DOE announces it will construct the MFFF.
2002 – The National Defense Authorization Act for fiscal year 2003 requires DOE to prepare a plan for the construction and operation of the MFFF at the Savannah River Site in South Carolina and requires, among other things, that DOE remove 1 MT of plutonium from South Carolina by January 1, 2011, if mixed oxide (MOX) production objectives of an average rate of at least 1 MT per year are not achieved by January 1, 2009. Failure to meet these deadlines would require DOE to make substantial annual payments to South Carolina.
2005 – The Energy and Water Development Appropriations Act for Fiscal Year 2006 extends the original plutonium production and removal deadlines by 3 years (thus making the 1 MT plutonium production deadline January 1, 2012, and the removal deadline January 1, 2014).
2014 – The National Defense Authorization Act for fiscal year 2015 requires DOE to issue a report that would study the plan for the MFFF as well as possible alternatives to the MFFF.
2015 – The National Defense Authorization Act for Fiscal Year 2016 requires DOE to carry out an analysis of alternatives for the Surplus Plutonium Disposition Program.
2015 and 2017 – Explanatory statements accompanying fiscal years 2016 and 2017 appropriations legislation contain specific direction to explore design issues associated with the dilute and dispose alternative.
2016 – South Carolina sues DOE in federal district court, contending that DOE failed to meet the MOX-related statutory deadlines. South Carolina seeks monetary relief and an injunction compelling the federal government to remove 1 MT of plutonium from the state.
2016 – DOE issues a Record of Decision stating that it would remove plutonium from South Carolina using the dilute and dispose strategy.
2017 – Federal district court issues an injunction ordering DOE to remove 1 MT of plutonium from South Carolina and ordering the parties to negotiate a new deadline.
2017 – The National Defense Authorization Act for Fiscal Year 2018 allows DOE to terminate construction of the MFFF if, among other things, DOE identifies an alternative that would cost less than half of the MOX fuel strategy.
2017 – South Carolina and DOE fail to agree on a deadline for removing 1 MT of plutonium from the state, so in December the court imposes a deadline of January 1, 2020.
2018 – Federal appellate court rejects DOE’s appeal of the district court’s order to remove 1 MT of plutonium from South Carolina by January 1, 2020.
2018 – DOE terminates the MOX contract for the government’s convenience.
2019 – DOE acknowledges that it had shipped 0.5 MT of plutonium from South Carolina to Nevada sometime before November 2018 and shipped an additional 0.5 MT out of South Carolina to another state sometime before August 2019.
Appendix IV: Timeline of Key Events Relating to a High-Level Waste Repository for Disposing of Certain Surplus Plutonium The Nuclear Waste Policy Act of 1982 directed, among other things, that DOE study sites for a repository and that the President evaluate the capacity for the disposal of high-level waste resulting from atomic energy defense activities at one or more repositories developed for the disposal of commercial used (spent) nuclear fuel. In 1985, President Reagan found that there was no basis to conclude that a separate defense high-level waste repository was required. Table 2 shows the changes in plans for developing a high-level waste repository from 2002 through 2018. Appendix V: Comments from the Department of Energy Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, the following individuals made contributions to this report: Jonathan Gill (Assistant Director); Robert Sánchez (Analyst in Charge); Antoinette Capaccio; Robert (Scott) Fletcher; Cindy Gilbert; Richard Johnson; Sheryl Stein; Sara Sullivan; and Curtis (Wade) Tanner.
Why GAO Did This Study

The United States has 57.2 MT of weapons-usable plutonium that it has declared surplus and that still requires disposition. This plutonium exists in various metal and non-metal forms, including pits—the central core of a nuclear weapon. To prevent illicit use of this plutonium, DOE plans to disassemble pits into metal; convert the plutonium metal to plutonium oxide, a powder-like substance; dilute it with inert material; and dispose of it at WIPP. In May 2018, NNSA issued a plan conceptualizing the dilution and disposal of 34 MT of surplus plutonium at an estimated cost of $19 billion over the next 3 decades. Under this conceptual plan, pit disassembly and production of plutonium oxide would take place at one facility and dilution would be performed at another, with both operations expanding over the next decade.

GAO was asked to review DOE’s plans for plutonium oxide production to dispose of surplus plutonium. This report (1) examines the amount of surplus plutonium in DOE’s inventory that could be converted to plutonium oxide for dilution and disposal and (2) examines DOE’s capacity to produce plutonium oxide. GAO reviewed the inventory of surplus plutonium, plutonium oxide production requirements and production capacity, and DOE planning documents, and interviewed DOE officials.

What GAO Found

Of the Department of Energy’s (DOE) inventory of surplus plutonium, about 43.8 metric tons (MT), or 77 percent, is plutonium metal that could be converted to plutonium oxide for dilution and disposal. Of this amount, the National Nuclear Security Administration (NNSA) manages 33.3 MT in the form of pits, DOE’s Office of Environmental Management (EM) manages 6.5 MT, and DOE’s Office of Nuclear Energy manages 4 MT in the form of reactor fuel. EM manages another 11 percent, or 6.4 MT, of DOE’s surplus plutonium that is already in oxide form. Most of this is suitable for dilution and disposal at the Waste Isolation Pilot Plant (WIPP), a repository in New Mexico. An additional 12 percent, or 7 MT, of DOE’s surplus plutonium is contained in spent nuclear fuel that is planned for disposal in a geologic repository. See figure.

NNSA’s 2018 conceptual plan calls for converting 26.2 MT of this surplus plutonium into oxide by 2045. In September 2019, NNSA approved the production of about 1.2 MT of plutonium oxide through 2025 at its Los Alamos National Laboratory (LANL) in New Mexico. However, plans for converting additional surplus plutonium into plutonium oxide are uncertain because of two issues: NNSA’s still-developing plans for new pit production, which will also take place at LANL, and questions surrounding the agency’s ability to ship newly produced plutonium oxide to DOE’s Savannah River Site (SRS) in South Carolina for dilution. According to agency officials, NNSA and DOE are taking several actions that, if successfully implemented, are designed to allow NNSA to meet its long-term plutonium oxide production goals. These actions include continuing to review plutonium oxide and pit production plans, increasing plutonium storage at LANL, reducing the amount of SRS’s surplus plutonium, and accelerating the shipment of diluted plutonium from SRS to WIPP.
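As an arithmetic check, the inventory figures reported above reconcile with one another; the following worked equations (in LaTeX notation) simply restate the report’s numbers:

\begin{align*}
\text{declared} - \text{disposed} - \text{in process} &= 61.5 - 3.2 - 1.1 = 57.2~\text{MT remaining} \\
\text{metal} + \text{oxide} + \text{spent fuel} &= 43.8 + 6.4 + 7.0 = 57.2~\text{MT} \\
\text{pits} + \text{EM metal} + \text{reactor fuel} &= 33.3 + 6.5 + 4.0 = 43.8~\text{MT of metal}
\end{align*}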
Background

Overview of CBP’s Roles and Responsibilities

CBP is the nation’s largest federal law enforcement agency. CBP’s Border Patrol and Air and Marine Operations (AMO) are the uniformed law enforcement arms responsible for securing U.S. borders between ports of entry in the air, land, and maritime environments.

Border Patrol has primary responsibility for securing U.S. land borders between ports of entry. Its area of responsibility along the northern border is divided among eight sectors: Blaine, Spokane, Havre, Grand Forks, Detroit, Buffalo, Swanton, and Houlton. Each Border Patrol sector is further divided into Border Patrol stations, and each station is assigned a certain geographic area of responsibility within a sector. Along the northern border, there are a total of 49 stations, or between four and eight stations per sector. For a map of Border Patrol’s northern border sectors, see figure 1. Border Patrol agents secure the border between ports of entry, in part, through patrolling international land borders and waterways to detect and prevent the illegal trafficking of people, narcotics, and contraband into the United States.

AMO has primary responsibility for securing U.S. borders in the air, marine, and land domains, and its operations along the northern border are divided among three branches: Bellingham Air and Marine Branch in Washington, Great Lakes Air and Marine Branch in Michigan, and Manassas Air Branch in Virginia. Each branch is further divided into units to conduct air or maritime missions, and there are a total of seven air units and nine marine units along the northern border. For a map of AMO’s northern border operating locations, see figure 2. AMO Air Interdiction Agents are federal law enforcement agents who pilot aircraft, while Marine Interdiction Agents are federal law enforcement agents who operate vessels. Air and Marine Interdiction Agents secure the air and maritime environments along the border, in part, by conducting surveillance and investigative activities to interdict smuggled narcotics and other contraband.

Additional offices within CBP that support the activities of Border Patrol and AMO along the northern border include the Office of Facilities and Asset Management, Office of Information and Technology (OIT), and Office of Intelligence. The Office of Facilities and Asset Management is responsible for oversight and management of CBP’s real and personal property portfolios, including managing CBP’s facilities and motor vehicle fleets. OIT is responsible for managing CBP’s technology infrastructure and information technology (IT) operations. These, according to OIT, enable CBP mission readiness and improve the ability of all employees, including agents in the field, to be proactive and responsive to new threats. OIT manages all IT networks, computers, systems, data, tactical communications, and other resources to support CBP employees. OIT is also to provide day-to-day field support primarily through Field Technology Officers, who provide services to CBP’s offices and components, such as repairing equipment, upgrading systems and networks, restoring system outages, responding to cybersecurity incidents, and deploying new technology and equipment. The Office of Intelligence is to develop, coordinate, and implement CBP’s intelligence capabilities into a cohesive intelligence enterprise that supports CBP’s primary mission to secure the borders while facilitating legitimate trade and travel.
The Office of Intelligence’s Field Intelligence Division is to provide CBP law enforcement personnel with current and relevant intelligence to inform decision makers and those who respond to border-related crimes, threats, and hazards. In this division, there are two field intelligence groups with areas of responsibility along the northern border—the Pacific Northwest Field Intelligence Group in Washington and the Great Lakes Field Intelligence Group in Michigan. In addition, through CBP’s National Border Geospatial Intelligence Strategy, the Office of Intelligence produces geospatial intelligence products for Border Patrol sectors to identify areas of potential illicit cross-border activity.

Resources Used by CBP to Secure the Northern Border Between Ports of Entry

Border Patrol and AMO use a variety of technologies, facilities, and other resources to secure the northern border between ports of entry. Figure 3 illustrates examples of resources used by Border Patrol and AMO, which include the following:

surveillance technology, such as Remote Video Surveillance Systems—systems of towers with cameras that transmit information to video monitors at a Border Patrol facility—and unattended ground sensors—remotely monitored sensors placed in or on the ground, or on ground-based platforms, to detect, track, identify, and differentiate humans, animals, and vehicles—used by Border Patrol agents to detect and identify illicit cross-border activity;

radar systems to detect and identify aircraft and vessel incursions;

IT and communication systems to conduct operations and ensure the safety and security of agents while on duty, including databases and systems for processing detainees, infrastructure to operate surveillance technology, and tactical communication equipment such as land mobile radios;

aircraft, including fixed- and rotary-wing aircraft; vehicles, including all-terrain vehicles and snowmobiles; and large and small vessels;

tactical infrastructure, including fencing, roads, and border markers and signs; and

facilities, including buildings to house workstations and offices for agents and civilian personnel, short-term detention facilities to process and hold individuals arrested by Border Patrol agents, forward operating bases in remote locations to support Border Patrol agent operations, and hangars for aircraft and vessel storage and repair.

Collaborative Efforts Used by CBP to Secure the Northern Border

Task Forces and Partnerships

CBP participates in a variety of collaborative efforts—including task forces, joint operations, and partnerships with federal, state, and local law enforcement agencies—to support its efforts to secure the northern border between ports of entry. According to CBP officials, collaborative efforts involve sharing intelligence and other information that informs and guides the efficient use of agents and resources to conduct enforcement activities. For example, AMO’s Air and Marine Operations Center coordinates with federal, state, local, and international law enforcement agencies to detect, identify, track, and coordinate interdiction of suspect aviation and maritime activity near and at the borders, including the northern border, and within the United States.
Moreover, Border Patrol’s Northern Border Coordination Center serves as a centralized coordination center for information sharing among Border Patrol’s eight northern border sectors, as well as with domestic and international law enforcement partners, focusing primarily on counter-terrorism and illicit criminal networks. Border Patrol also collaborates with county, state, tribal, local, and other law enforcement agencies through administration of the Operation Stonegarden Grant Program, a part of the Homeland Security Grant Program, to support border security activities. The grant program provides funding to state, local, and tribal law enforcement agencies to support joint efforts to secure U.S. borders. For example, grantees may receive reimbursement for operational overtime costs associated with law enforcement activities and for equipment purchases, such as sensors, in support of border security activities.

CBP’s collaborative efforts along the northern border also include participation in various task forces with federal, state, and local law enforcement agencies. Specifically, Border Patrol and AMO agents may be assigned as task force officers to conduct or support casework, investigations, and coordination among federal, state, and local law enforcement agencies. For example, Border Patrol and AMO agents are assigned as task force officers along the northern border on the U.S. Immigration and Customs Enforcement-led Border Enforcement Security Task Force in Washington, Michigan, and New York to identify, investigate, disrupt, and dismantle transnational criminal organizations. According to Border Patrol and AMO officials, task force officers help enhance partnerships, information sharing, and situational awareness along the northern border.

CBP also partners with other DHS components to support its efforts to secure the northern border between ports of entry. For example, through the Puget Sound Regional Coordinating Mechanism, CBP—including Border Patrol and AMO—and the U.S. Coast Guard coordinate daily and conduct joint operations along the maritime border between the state of Washington and the province of British Columbia. CBP also works with DHS’s Science and Technology Directorate to identify, develop, and evaluate technology to address capability gaps across the northern border. For example, DHS’s Science and Technology Directorate, in collaboration with Swanton Border Patrol sector, deployed land surveillance technology along the northern border.

CBP Collaboration with the Government of Canada

CBP also collaborates with law enforcement agencies within the government of Canada through the Cross-Border Law Enforcement Advisory Committee and the Integrated Border Enforcement Team Program. The Cross-Border Law Enforcement Advisory Committee is a national-level committee—composed of the Royal Canadian Mounted Police, Canada Border Services Agency, U.S. Immigration and Customs Enforcement, CBP, and U.S. Coast Guard—that provides guidance to initiatives involving partnerships between United States and Canadian law enforcement agencies along the shared border. The Integrated Border Enforcement Team Program includes the Royal Canadian Mounted Police, Canada Border Services Agency, U.S. Immigration and Customs Enforcement, CBP, and U.S. Coast Guard. According to CBP, the priority of the program is to seek and identify mutual national security threats and combat illicit cross-border activity.
According to CBP and government of Canada officials, program activities may include real-time tactical intelligence sharing between Canadian and U.S. law enforcement agencies and periodic meetings to coordinate operations. These officials stated that the program helps to facilitate timely information sharing in accordance with Canadian and U.S. laws and regulations. For example, through the Integrated Border Enforcement Team Charter, Border Patrol and the Royal Canadian Mounted Police may share information related to cross-border criminal activity—such as suspected or known illegal entries between ports of entry—without delay.

CBP Identified Terrorism, Contraband Smuggling, and Violations of U.S. Immigration Law as Threats along the Northern Border Between Ports of Entry

Contraband Smuggling

According to DHS’s 2017 Northern Border Threat Analysis Report, the most common threat to U.S. public safety along the northern border continues to be contraband smuggling; specifically, the bidirectional flow of illicit drugs. In its fiscal year 2018 intelligence reports for its eight northern border sectors, Border Patrol also reported contraband smuggling as a significant threat along the northern border between ports of entry, including bidirectional drug smuggling. According to Border Patrol data for fiscal years 2013 through 2017, 2 percent of Border Patrol’s total drug seizures occurred along the northern border. Examples of smuggling activities include criminal groups with known ties to or hired by Mexican drug trafficking organizations suspected of smuggling narcotics into Canada and smuggling bulk currency from Canada into the United States between land border ports of entry. Border Patrol, in its intelligence reports, also identified contraband smuggling for the purpose of evading customs duties, involving products such as tobacco, prohibited fruits, and medicinal products. Further, according to Border Patrol, criminal organizations smuggle contraband between ports of entry because certain tobacco, agricultural, and medicinal products are prohibited for import even if properly declared at a port of entry. In 2017, AMO reported contraband smuggling across the northern border both into and out of the United States between ports of entry. In its 2017 Northern Border Non-Commercial General Aviation Threat Overview, AMO’s Air and Marine Operations Center identified illicit activity along the northern border using general aviation aircraft, including aircraft operating in a suspicious manner at low altitude (low-flying aircraft).

Violations of U.S. Immigration Law

According to Border Patrol’s annual fiscal year 2018 intelligence report, violations of U.S. immigration and travel controls, which Border Patrol refers to generally as “illegal immigration,” are a threat along the northern border and are frequently bidirectional between the United States and Canada. Additionally, our analysis of Border Patrol data from fiscal years 2013 through 2017 showed that Border Patrol agents apprehended 14,319 potentially removable aliens—foreign nationals who Border Patrol suspected or determined were removable from the United States—along the northern border, or approximately 1 percent of its total nationwide apprehensions of potentially removable aliens (1.97 million aliens). According to DHS’s 2017 Northern Border Threat Analysis Report, known illegal crossings between ports of entry by individuals on the northern border conform to established migration patterns between large population centers.
Further, the report states that terrain, weather, and distance are factors that constrain migrant travel between ports of entry in remote areas of the border. According to Border Patrol officials, the majority of individuals apprehended along the northern border are suspected or known to have illegally entered the United States across the southwest border and traveled to the northern border region before being detected, while a smaller number of individuals are suspected or known to have illegally entered the United States from Canada between ports of entry. Specifically, of the potentially removable aliens apprehended by Border Patrol along the northern border during this period, we found that 61 percent (8,727) were individuals suspected or known to have illegally entered the United States from Mexico, while 19 percent (2,782) were individuals suspected or known to have illegally entered the United States from Canada. The Swanton Border Patrol sector apprehended the highest percentage of individuals who illegally entered the United States from Canada between ports of entry during this period: 43 percent (1,206 individuals) of the total number across all eight northern border sectors.

Border Patrol, in its fiscal year 2018 intelligence reports for its eight northern border sectors, also identified as a threat alien smuggling organizations operating along the northern border between ports of entry—that is, organizations that bring into, or harbor or transport within, the United States foreign nationals in violation of U.S. immigration law. Examples of alien smuggling activities include alien smuggling organizations using private residences along international waterways to provide locations for staging an illegal entry. According to Border Patrol officials, criminal organizations operating along the U.S.-Canada border frequently conduct bidirectional alien smuggling activities between the United States and Canada, and agents encounter numerous types of groups being smuggled into Canada.

CBP Identified Northern Border Staffing and Resource Challenges and Actions to Address Them but Faces Competing Priorities

CBP identified staffing and resource challenges to its operations and enforcement activities across the northern border and has identified actions to address them, but faces competing priorities. Border Patrol and AMO officials we met with identified agent staffing challenges across all northern border sectors and branches that limit enforcement activities, including limited Border Patrol agent availability to conduct patrol missions and a limited number and frequency of AMO missions due to AMO agent availability. Border Patrol and AMO officials also identified resource challenges along the northern border across all sectors and branches, including radar and surveillance technology used to surveil the air, maritime, and land environments; IT and communication technology, including network infrastructure and bandwidth that allow agents to access CBP systems and tactical communications, such as land mobile radios for agent communication during border security missions; and infrastructure and facilities, including tactical infrastructure—roads, fencing, and border markers—and facilities used by agents to secure the border. It is unknown whether the staffing and resource challenges identified by CBP to secure the northern border between ports of entry will be addressed due to competing southwest border security priorities.
CBP identified actions and ongoing efforts to address agent staffing and resource challenges to secure the northern border between ports of entry. In June 2018, DHS released a Northern Border Strategy to establish actions that are intended to, among other things, improve DHS’s efforts to safeguard the northern border against various threats. DHS is developing an implementation plan for its Northern Border Strategy, which will, among other things, identify actions to address gaps in capabilities to secure the northern border between ports of entry. However, it is unknown whether CBP’s northern border staffing and resource challenges will be addressed due to competing priorities with southwest border security. For example, instructions in Executive Order 13767 require DHS to obtain complete operational control—prevention of all unlawful entries into the United States, including entries by terrorists, other unlawful aliens, instruments of terrorism, narcotics, and other contraband—of the southwest border, in part through hiring thousands of agents and constructing a physical barrier.

CBP Identified Staffing Challenges in Securing the Northern Border and Has Ongoing Efforts to Improve Recruitment, Hiring, and Retention

Border Patrol Staffing

Border Patrol officials identified staffing challenges across the northern border sectors that have affected enforcement activities. Officials from northern border sectors told us that an insufficient number of agents authorized or onboard at their sectors and stations limits their ability to conduct enforcement activities and may, at times, pose risks to agent safety. In addition, Border Patrol officials from northern border sectors stated that agent availability for enforcement activities is further limited by detainee transportation and supervision duties and requests for law enforcement assistance from other agencies. For example, Border Patrol sector officials stated that detainee transportation duties result in agents being unable to conduct enforcement activities for up to 1 day, and duties related to supervision of detainees during court proceedings and meetings with federal prosecutors may result in agents being unable to conduct enforcement activities for up to 1 week. Further, responding to local calls for assistance during assaults may result in agents being unable to conduct enforcement operations for multiple hours.

Also, Border Patrol officials from northern border sectors stated that vacancies in civilian Law Enforcement Communication Assistant positions affect enforcement activities. Law Enforcement Communication Assistant duties at each northern border sector include dispatching and officer safety checks, monitoring surveillance camera feeds and unattended ground sensor activations, and conducting intelligence research checks for agents on duty across all stations in the sector. Border Patrol officials told us it is difficult to recruit and retain qualified applicants for vacant positions due to the lower General Schedule grade of the position across Border Patrol, which is not competitive with salaries for similar positions offered by state and local law enforcement agencies. In August 2018, Border Patrol officials stated that they created a new position, the Law Enforcement Information Specialist, with expanded duties and responsibilities at a higher General Schedule grade.
AMO Staffing

AMO identified staffing challenges across its northern border branches which, according to AMO officials, have affected the frequency and number of air and maritime missions. Specifically, officials at AMO branches told us that an insufficient number of agents authorized or onboard at their branches and units limits the frequency and number of air and maritime missions AMO is able to conduct along the northern border. For example, AMO officials stated that an insufficient number of Marine Interdiction Agents limits the number of daily and weekly maritime patrol missions. For air missions, AMO officials stated that an insufficient number of Air Interdiction Agents may limit the ability to fulfill immediate or previously unscheduled requests for air support.

AMO officials from northern border branches also cited agent recruitment, hiring, and retention as a challenge for filling vacant positions. For example, officials stated that AMO faces competition from commercial airline companies, which can offer higher salaries, in recruiting and retaining qualified individuals with commercial pilot certificates, as well as delays from CBP’s lengthy application process. AMO officials from northern border branches also stated that agent availability for air and maritime missions is sometimes limited due to temporary duty assignments to support national missions, which can limit local operations along the northern border. AMO officials stated that these temporary duty assignments involve relocation of Air Interdiction Agents, aircraft, and maintenance staff to other operating locations for multiple weeks. For example, in 2017, Air Interdiction Agents flew missions to support recovery efforts after the hurricanes in Texas, Florida, and Puerto Rico. In 2018, Air Interdiction Agents supported security operations during the Super Bowl in Minneapolis, Minnesota.

CBP’s Ongoing Efforts to Address Staffing Challenges

CBP is taking actions to address agent recruitment, hiring, and retention. We reported in June 2018 on CBP’s actions to address challenges for recruitment, hiring, and retention of Border Patrol and AMO agents, such as increased participation in recruitment events and offering relocation opportunities for existing employees. According to CBP’s Fiscal Year 2019 Congressional Budget Justification, newly hired Border Patrol agents will be assigned to the southwest border to allow for the reassignment of more experienced agents to the northern border. As of August 2019, Border Patrol officials expected that all sectors in fiscal year 2019, including the northern border sectors, would receive an increase in the number of authorized agent positions. Border Patrol officials also stated that, as of June 2018, they were completing a Personnel Requirements Determination Initiative to analyze agent allocations across sectors and stations and to develop a staffing allocation model to optimally align staff according to workload and area of responsibility conditions.

In June 2018, we also reported that AMO had taken steps to address staffing challenges, such as implementing voluntary paid relocation opportunities and pursuing additional human capital flexibilities, including a group retention incentive and a special salary rate, to address its difficulty in retaining Air Interdiction Agents. AMO personnel who are non-bargaining unit employees and have served for at least 3 years in their current location are also eligible for noncompetitive paid relocations.
According to AMO officials, these opportunities are posted every few months, and eligible personnel can apply for transfers to a specific duty station based on the needs of the operational component. In September 2017, AMO submitted an official request for a 10 percent group retention incentive for Air Interdiction Agents staffed to the northern border, among other locations. According to the request, the incentive is intended to help AMO retain qualified pilots in these hard-to-fill locations by raising their salaries to be more competitive with commercial airlines.

Border Patrol officials we met with stated that Border Patrol’s Operational Mobility and Resident Agent Programs have helped northern border sectors address agent staffing challenges. The Operational Mobility Program provides Border Patrol agents with opportunities for a paid relocation to a more desirable location at a lower cost to CBP than an official permanent change of station transfer. Border Patrol officials stated that the use of the Operational Mobility Program resulted in agents electing to relocate to northern border sectors from other duty stations. The Resident Agent Program operates in locations where Border Patrol’s routine presence is extremely limited and is intended to improve situational awareness through the creation of partnerships, expansion of community outreach, and development and dissemination of intelligence. The Resident Agent location is the physical residence of an agent in a location where there is not an official Border Patrol station.

CBP Identified Resource Challenges Affecting Northern Border Security and Actions to Address Them

Air and Maritime Radar

Officials from Border Patrol sectors and AMO branches stated that there are gaps in air radar coverage along the northern border, limiting their ability to detect and identify aircraft incursions. CBP has taken actions to address these gaps in air radar coverage. In December 2017, CBP completed an AMO-led assessment of air radar capabilities, which identified coverage gaps and needs across the United States, including at the northern border. In May 2018, AMO officials stated that they began working with the Department of Defense to test technology along the northern border to address gaps in air radar coverage.

Officials from Border Patrol sectors and AMO branches also stated that there are limited maritime radar capabilities to detect and identify vessel incursions along the northern border. CBP has taken actions to address these gaps in maritime radar capabilities. Border Patrol, through its Maritime Detection Project, plans to deploy additional maritime radar technology in Detroit and Buffalo sectors to expand maritime radar coverage on Lake Erie. Also, in 2017, CBP participated in a 1-year DHS pilot project with the government of Canada to share radar information in an area along the northern border to detect vessel incursions. AMO, through its Multi-Role Enforcement Aircraft, conducts maritime radar patrols along portions of the northern border to detect and identify vessel incursions and to address gaps in maritime radar coverage on some of the Great Lakes and in parts of the Pacific Northwest.

Land Surveillance Technology

Border Patrol sector officials stated that there are challenges with the land surveillance technology used by agents to detect, identify, and respond to illicit cross-border activity along the northern border.
Further, Border Patrol headquarters and sector officials stated that there are gaps in surveillance technology coverage along the northern border to detect and identify illicit cross-border activity. Border Patrol officials also identified challenges with Legacy Remote Video Surveillance Systems. For example, officials we met with identified system outages due to delays in maintenance and replacement of parts, as well as poor-quality video surveillance camera images. In March 2017, CBP completed a Border Patrol-led assessment of land surveillance capabilities to assess gaps, including gaps in surveillance technology coverage across all Border Patrol sectors.

Network Infrastructure and Bandwidth

Officials from Border Patrol sectors and AMO branches we met with identified inadequate network infrastructure and bandwidth—including network infrastructure and equipment nearing or past its useful life—that have affected enforcement activities and other required tasks along the northern border. For example, Border Patrol officials stated that inadequate network infrastructure and bandwidth have delayed or prevented the processing of detainees at some stations. AMO officials also stated that inadequate bandwidth limits the ability of agents to use BigPipe, a system used to coordinate operations with partner agencies during air and maritime missions. In September 2017, DHS’s Office of Inspector General found that outdated IT infrastructure and equipment contributed to CBP-wide system performance and availability challenges; that a considerable portion of IT equipment and infrastructure had reached its useful life; and that OIT was unable to replace infrastructure past its useful life because of financial constraints. CBP’s Fiscal Year 2019 Congressional Budget Justification identifies actions to improve network infrastructure and bandwidth, including deploying new workstations and replacing network infrastructure components that are past their useful life to provide reliable operations and address vulnerabilities. OIT officials stated that pilot projects using virtual private network connections are being implemented at CBP locations to address bandwidth challenges and reduce costs.

Tactical Communications

Officials from Border Patrol sectors and AMO branches we met with identified challenges with tactical communications, including gaps in land mobile radio coverage along the northern border. Border Patrol and AMO agents responsible for securing the northern border depend on land mobile radio systems for secure, reliable, and timely exchanges of critical information to effectively carry out their mission. Border Patrol and AMO officials we met with identified a lack of coverage in certain areas, which impacts agent communication during enforcement activities. CBP has taken actions to identify coverage gaps and deploy additional equipment to improve communications coverage along the northern border. For example, from fiscal years 2009 through 2017, CBP deployed additional equipment through its Tactical Communication Modernization Program to improve tactical communication coverage in Border Patrol’s Houlton sector in Maine. Border Patrol officials stated that they are deploying repeater tower sites—technology used for retransmitting and extending the range of radio communications—and other technology to mitigate dead spots and gaps in coverage in three sectors.
According to CBP’s Fiscal Year 2019 Congressional Budget Justification, updated handheld and mobile radios are being provided to Border Patrol and AMO, including at northern border locations, to improve tactical communications and interoperability with law enforcement partners.

Tactical Infrastructure

Border Patrol sector officials identified challenges due to limited tactical infrastructure, such as a lack of barriers to impede vehicle incursions and limited access to roads along the border, that make it difficult to impede illegal entries. For example, in one sector, officials stated that a lack of vehicle barriers leads to a gap in Border Patrol’s ability to impede illicit vehicle incursions. In other sectors, officials stated that Border Patrol agents face challenges accessing border areas due to a lack of roads or access to maintained roads. Officials from northern border sectors also stated that agents face challenges preventing illegal entries due to a lack of barriers and a lack of signs or markers indicating the location of the international border.

Facilities

Officials from Border Patrol sectors and AMO branches we met with noted that certain facilities do not have space to accommodate the number of assigned agents and civilian personnel along the northern border. For example, in one sector, officials stated that there is a lack of space to accommodate Law Enforcement Communication Assistants to monitor surveillance technology and direct agents to respond to potential illicit activity. Border Patrol officials in other sectors also stated that certain stations in their sectors do not have adequate facilities to process and house detainees. For example, one station lacks a dedicated processing and interview area, and detainees are processed in an open location next to agent workstations, which may pose a safety risk to agents, according to officials. In November 2018, Office of Facilities and Asset Management officials identified 20 new and major construction projects planned for the northern border, including replacement of Border Patrol facilities with identified challenges; however, these projects have been deferred due to lack of funding. Further, according to Office of Facilities and Asset Management officials, CBP has insufficient funds to address deferred maintenance projects and a limited number of maintenance staff to repair facilities.

Vehicles and Usage Reporting Technology

Officials from Border Patrol sectors we met with identified aging vehicles that are beyond their expected service life, which affect enforcement activities along the northern border. According to Border Patrol officials, funding is not available to replace aging vehicles across all sectors, but funds are allocated annually to replace a percentage of vehicles in the northern border sectors that are beyond their expected service life. Further, Border Patrol officials stated that the harsh climate along the northern border places additional wear on agent vehicles before those vehicles reach the end of their expected service life. Officials from Border Patrol sectors we met with also identified agent vehicles that lack the technology needed to complete monthly motor vehicle utilization reports required by the DHS Stop Asset and Vehicle Excess Act. In August 2018, Border Patrol officials stated that CBP was in the process of awarding a contract for installation of vehicle reporting technology in agent vehicles, including across the northern border sectors.
DHS Is Developing an Implementation Plan for Its Northern Border Strategy but Faces Competing Priorities with the Southwest Border

In addition to the actions identified above by CBP to address northern border staffing and resource challenges, DHS is developing an implementation plan for its Northern Border Strategy, which includes a goal to enhance border security operations. The strategy states that the implementation plan is intended to outline roles, responsibilities, programs, and timelines for accomplishing the strategy’s goals and objectives for fiscal years 2020 to 2024. According to DHS officials, the department plans to use the strategy and corresponding implementation plan to prioritize departmental resources and achieve the specified outcomes over the 5-year period. Also according to DHS officials, the implementation plan is expected to be completed in 2019 and will identify actions to address gaps in capabilities to secure the northern border between ports of entry; for example, gaps in domain awareness and associated technology.

It is unknown whether the staffing and resource challenges identified by CBP to secure the northern border between ports of entry will be addressed due to competing southwest border security priorities. According to Border Patrol and AMO headquarters officials, resources are allocated across their operating areas based on threats and volume of illicit activity, which are greatest on the southwest border. Further, Border Patrol and AMO headquarters officials stated that resource allocation is prioritized to the southwest border to also meet instructions in Executive Order 13767 to obtain complete operational control—prevention of all unlawful entries into the United States, including entries by terrorists, other unlawful (i.e., inadmissible) aliens, instruments of terrorism, narcotics, and other contraband—of the southwest border. While DHS is implementing its Northern Border Strategy, including developing an implementation plan, addressing CBP’s northern border staffing and resource challenges will compete with its other enforcement priorities along the southwest border.

CBP Has Not Developed Performance Measures to Assess Its Effectiveness at Securing the Northern Border Between Ports of Entry

While CBP has performance measures (strategic and management) that assess certain border security operations or programs, some of which include data from the northern border, it does not have specific measures to assess its effectiveness at securing the northern border between ports of entry. More specifically, Border Patrol has two strategic measures that include data from the northern border, but these measures do not assess Border Patrol’s effectiveness at securing the northern border between ports of entry. The two measures—the percent of recurring border surveillance implemented in remote, low-risk areas between ports of entry and the percent of time Border Patrol meets its goal of responding to potential illegal activity in remote, low-risk areas—are based on information from CBP’s National Border Geospatial Intelligence Strategy. The measures assess Border Patrol’s use of reports, developed using geospatial intelligence technology, of potential illicit cross-border activity. However, this technology is not applied in maritime environments, so the measures do not include data from two northern border sectors. Further, Border Patrol’s two strategic measures combine data from the southwest and northern borders.
Border Patrol has four management measures that also contain data from the northern border. These measures are (1) the number of joint operations conducted along the northern border by Border Patrol agents and Canadian law enforcement; (2) the percent of apprehensions at Border Patrol checkpoints; (3) the percent of Border Patrol agents who are trained and certified to perform enforcement actions; and (4) the percent of Border Patrol equipment assessed as ready to support law enforcement operations. Border Patrol’s four management measures include data from the northern border but do not assess Border Patrol’s effectiveness at securing the northern border between ports of entry. Although one management measure tracks the number of joint operations conducted along the northern border by Border Patrol agents and Canadian law enforcement personnel, that measure does not assess Border Patrol’s performance in conducting those joint operations or their effectiveness. Border Patrol’s three additional management measures include data from the northern border combined with other areas, such as the southwest border, and therefore are not specific to the northern border.

AMO’s one strategic and one management measure include data from the northern border but do not assess AMO’s effectiveness at securing the northern border between ports of entry in the air and maritime environments. For the strategic measure, AMO reports the percent of detected conventional aircraft incursions resolved. The measure represents the percent of conventional aircraft, detected visually or by sensor technology and suspected of illicit cross-border activity, that are brought to a successful resolution by its Air and Marine Operations Center. For the management measure, AMO reports the air mission launch rate, which is the percent of all requests made for aircraft to which AMO was able to respond. These two measures include data across all border areas, including the northern border, but are not specific to the northern border.

Border Patrol officials stated that they have not developed or implemented performance measures to assess their effectiveness at securing the northern border between ports of entry because of competing priorities related to developing measures for southwest border security. According to Border Patrol officials responsible for developing and implementing performance measures, Border Patrol’s priority is to develop measures to assess the effectiveness of its efforts to secure the southwest border, such as the effort to achieve complete operational control as outlined in the Executive Order 13767 instructions and the fiscal year 2018 DHS agency priority goal. Specifically, Border Patrol is required to implement a measure to assess operational control for all southwest border sectors by the end of fiscal year 2019. Border Patrol defines operational control as its ability to impede or deny illegal border crossings, maintain situational awareness, and apply appropriate, time-bound law enforcement response and resolution between the ports of entry. According to Border Patrol officials, the ongoing efforts to develop measures for the southwest border will eventually be applied to the northern border, but it is unknown how these ongoing efforts will be implemented to assess Border Patrol’s performance at securing the northern border between ports of entry.
Border Patrol officials stated that, following the implementation of operational control for the southwest border, Border Patrol plans to implement the operational control measure along the northern border in fiscal year 2020. Border Patrol officials stated that they are in the early stages of this process and could not provide any information on how operational control will be implemented for its operations along the northern border. Further, Border Patrol officials could not provide information on how operational control will be used to assess Border Patrol’s performance for securing the northern border between ports of entry. Additionally, in 2012 we recommended that Border Patrol establish milestones and time frames for developing performance measures to support implementation of its 2012-2016 Strategic Plan, including assessing progress made in securing the northern border between ports of entry and informing resource identification and allocation efforts. DHS concurred with our recommendations, and Border Patrol made progress in developing new performance measures for border security. However, we closed the recommendations as not implemented in September 2017 because the measures identified did not apply to the entire northern or coastal borders and because of remaining uncertainty about when Border Patrol would develop a new strategic plan.

AMO officials stated that they have not implemented performance measures to assess AMO’s effectiveness at securing the northern border between ports of entry in the air and maritime environments because of difficulties in creating region-specific performance targets. Specifically, AMO officials stated that it is difficult to set performance targets for a specific region, such as the northern border, because the threat environment is constantly changing. Also, the officials stated that AMO is waiting for completion of the Northern Border Strategy implementation plan before developing any performance measures specific to the northern border.

Additionally, Border Patrol and AMO have ongoing efforts to develop border security metrics pursuant to the National Defense Authorization Act for Fiscal Year 2017. The act directs DHS to annually report metrics and associated data and methodology, including metrics for border security between ports of entry. Consistent with the GPRA Modernization Act of 2010 (GPRAMA), agencies should establish a balanced set of performance measures, which reinforces the need for agencies to have a variety of measures across program areas. Furthermore, Standards for Internal Control in the Federal Government state that management should determine whether performance measures for the defined objectives are appropriate for evaluating the entity’s performance using targets and milestones. The standards also state that management should track entity achievements and compare actual performance to planned or expected results using established activities such as comparisons and assessments. Border Patrol and AMO could leverage their ongoing efforts to develop and implement performance measures to assess effectiveness at securing the northern border between ports of entry. For example, Border Patrol and AMO could use the metrics developed in accordance with the Fiscal Year 2017 National Defense Authorization Act to help inform the development of northern border performance measures.
Developing and implementing such measures could help Border Patrol and AMO better assess the effectiveness of their northern border operations between ports of entry, including challenges due to limited staffing and resources, and take corrective actions, as necessary.

Conclusions

The United States and Canada share the longest common non-militarized border between two countries, spanning nearly 4,000 miles; however, CBP has historically focused attention and resources, including resources to develop and implement performance measures, primarily on the nearly 2,000-mile U.S.-Mexico border. While Border Patrol and AMO have performance measures that assess specific border security operations or programs that include data from the northern border, these measures generally combine data with other border regions, and collectively the measures do not assess effectiveness at securing the northern border between ports of entry. Without northern border performance measures, Border Patrol and AMO cannot assess their effectiveness at securing the northern border between ports of entry. Developing and implementing northern border performance measures could help Border Patrol and AMO assess their northern border operations and address identified challenges.

Recommendations

We are making two recommendations, one to Border Patrol and one to AMO. The Chief of Border Patrol should develop and implement performance measures to assess its effectiveness at securing the northern border between ports of entry (Recommendation 1). The Executive Assistant Commissioner of AMO should develop and implement performance measures to assess its effectiveness at securing the northern border between ports of entry in the air and maritime environments (Recommendation 2).

Agency Comments and Our Evaluation

We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reproduced in full in appendix V, and technical comments, which we incorporated as appropriate. DHS concurred with both recommendations in the report and described actions Border Patrol and AMO plan to take in response. Border Patrol plans to develop and apply a measure of operational control to its northern border sectors; however, to meet the intent of our recommendation, Border Patrol will also need to use its measure of operational control to assess its effectiveness at securing the northern border between ports of entry. AMO plans to develop a performance measure to assess its effectiveness at securing the northern border between ports of entry and seek DHS approval through completion of a Performance Measure Definition Form. These actions, if effectively implemented by AMO, should address the intent of the recommendation.

We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of the Department of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

This report addresses the following questions:

1. What threats has U.S. Customs and Border Protection (CBP) identified along the U.S.-Canada (northern) border between ports of entry?
2. What challenges, if any, has CBP identified in its staffing and resources to secure the northern border between ports of entry, and what actions, if any, has CBP taken to address those challenges?

3. To what extent has CBP developed and implemented performance measures to assess the effectiveness of securing the northern border between ports of entry?

To address all three questions, we interviewed Department of Homeland Security (DHS) and CBP officials from headquarters and field locations. Specifically, we met with headquarters officials from DHS’s Office of Strategy, Policy, and Plans; Office of Program Analysis and Evaluation; Science and Technology Directorate; U.S. Coast Guard; and U.S. Immigration and Customs Enforcement. From CBP, we met with headquarters officials from Air and Marine Operations (AMO), U.S. Border Patrol (Border Patrol), the Office of Information and Technology, Office of Intelligence, Office of Facilities and Asset Management, and Office of Accountability/Performance Management and Analysis Division. We also met with officials from the government of Canada to discuss their views on northern border security. For a list of government agencies and entities interviewed in field locations, see table 1. In addition, we conducted site visits in Michigan, New York, Vermont, Virginia, and Washington, as well as the Canadian provinces of British Columbia, Ontario, and Quebec. We chose these locations based on the deployment of CBP resources—surveillance technology such as Remote Video Surveillance Systems—and levels of illicit cross-border activity reported by Border Patrol, including arrests of individuals and seizures of narcotics. Findings from our site visits cannot be generalized to all CBP locations along the northern border, but they provide valuable insights into our research questions.

To address the first question, we reviewed DHS and CBP policies, procedures, reports, and assessments describing threats along the northern border between ports of entry. Specifically, we reviewed DHS’s 2017 Northern Border Threat Analysis Report and the June 2018 Northern Border Strategy. We reviewed Border Patrol policies and procedures related to identifying and documenting threats, as well as intelligence reports, referred to as Intelligence Estimates, completed in each northern border sector for fiscal years 2017 and 2018. In addition, we reviewed Border Patrol’s national intelligence estimates for fiscal years 2017 and 2018. We also reviewed documents describing the results of Border Patrol’s Threats, Targets, and Operations Assessments and Intelligence Preparation for the Operation Environment process completed for northern border sectors from 2014 through 2017. To analyze the number of apprehensions and drug seizures along the northern border, we obtained data from the Enforcement Integrated Database for fiscal years 2013 through 2017, a time period for which complete data were available at the time of our review. We assessed the reliability of apprehension and seizure data by performing electronic testing for obvious errors in accuracy and completeness, reviewing existing information about the data and the systems that produced them, and interviewing agency officials knowledgeable about the data. As a result of our data reliability assessment, we determined that Border Patrol’s apprehension and seizure data were sufficiently reliable for our intended use.
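For illustration only, electronic testing of the kind described above could resemble the following minimal Python sketch. The file name, field names, and specific checks are hypothetical assumptions for the example; they do not represent the actual Enforcement Integrated Database schema or the scripts used in this review.

import csv
from datetime import datetime

# Hypothetical field names for an illustrative seizure-record extract.
REQUIRED = ["event_date", "sector", "drug_type", "quantity_lbs"]
FY_START = datetime(2012, 10, 1)   # start of fiscal year 2013
FY_END = datetime(2017, 9, 30)     # end of fiscal year 2017

def check_records(path):
    """Flag rows with missing fields, out-of-range dates, or bad quantities."""
    problems = []
    with open(path, newline="") as f:
        # Start numbering at 2 because row 1 is the header.
        for row_num, row in enumerate(csv.DictReader(f), start=2):
            for field in REQUIRED:
                if not (row.get(field) or "").strip():
                    problems.append((row_num, "missing " + field))
            try:
                event = datetime.strptime(row.get("event_date") or "", "%Y-%m-%d")
                if not FY_START <= event <= FY_END:
                    problems.append((row_num, "date outside fiscal years 2013-2017"))
            except ValueError:
                problems.append((row_num, "unparseable event_date"))
            try:
                if float(row.get("quantity_lbs") or "") < 0:
                    problems.append((row_num, "negative quantity_lbs"))
            except ValueError:
                problems.append((row_num, "non-numeric quantity_lbs"))
    return problems

if __name__ == "__main__":
    for row_num, issue in check_records("seizures_extract.csv"):
        print("row {}: {}".format(row_num, issue))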
From AMO, we reviewed the 2017 Northern Border Non-Commercial General Aviation Threat Overview and information collected by the Air and Marine Operations Center on vessel and aircraft border incursions detected along the northern border from fiscal years 2013 through 2017.

To address the second question, we reviewed CBP’s Fiscal Year 2019 Congressional Budget Justification. We also reviewed the results from Border Patrol’s capability gap assessment process for all eight northern border sectors completed for fiscal year 2017 and associated operational plans completed in September 2018; Border Patrol’s Surveillance Capability Assessment completed in April 2017; and AMO’s capability gap assessment completed in fiscal year 2016. We reviewed CBP capability analysis reports that included requirements along the northern border. In addition, we reviewed our relevant past work and DHS Office of Inspector General reports on northern border security. To determine the staffing and resource challenges across all eight northern border sectors and three AMO branches, we also met with officials at each sector and branch and reviewed supporting documentation. Specifically, we analyzed responses provided by officials in all eight northern border sectors and three AMO branches, along with supporting documentation, to determine challenges mentioned by officials at two or more locations. We also reviewed supporting documentation, including inventories of assets such as vehicles, vessels, aircraft, radar and land surveillance technology, tactical communication equipment, and facilities information. We obtained Border Patrol, AMO, and Office of Information and Technology staffing information as of September 1, 2018, the most recent data available at the time of our review, including the number of authorized, onboard, and vacant positions. To assess the reliability of this staffing information, we examined the information for any anomalies and interviewed agency officials knowledgeable about the data. We found the staffing information was sufficiently reliable for our purposes of reporting the number of authorized, onboard, and vacant positions.

To address the third question, we reviewed and analyzed documentation that describes DHS and CBP processes for developing and implementing performance measures, including DHS’s Annual Performance Report for Fiscal Years 2017-2019, CBP’s Fiscal Year 2019 Congressional Budget Justification, and Performance Measure Definition Forms for recently developed performance measures. We reviewed reports, assessments, and strategies that describe current DHS and CBP performance measure initiatives. We also reviewed information from CBP’s National Border Geospatial Intelligence Strategy, including information on reports derived from geospatial intelligence technology, used as the basis for two of Border Patrol’s performance measures that contain data from the northern border. Additionally, we reviewed DHS’s most recent border security metrics report. We compared CBP’s actions to develop and implement performance measures to Standards for Internal Control in the Federal Government and the principles outlined in the Government Performance and Results Act (GPRA) Modernization Act of 2010.

We compiled the descriptive information in the northern Border Patrol sector profiles in appendix II from a variety of sources. We obtained information on each sector’s geography and area of responsibility from Border Patrol documentation.
We obtained information on the number of authorized agents from Border Patrol as of September 1, 2018. We obtained information on the major urban areas within each sector and population estimates from the U.S. Census Bureau; the data are current as of July 1, 2017, the most recent estimates available at the time of our review. Finally, we obtained geographic information on the location of each northern Border Patrol sector and its stations from Border Patrol and located the data geographically using MapInfo. To analyze the number of apprehensions and drug seizures for each northern Border Patrol sector, we obtained data from the Enforcement Integrated Database for fiscal years 2013 through 2017, a time period for which complete data were available at the time of our review. The data fields we obtained included the individual's immigration status at entry and country of citizenship, as well as the drug type and the quantity in pounds seized. Our analysis categorizes each sector's apprehensions by the top four to six countries of citizenship of the individuals apprehended by Border Patrol and their immigration status at entry. Present without admission from Canada indicates the individual was suspected to be inadmissible for illegally entering the United States from Canada; present without admission from Mexico indicates the individual was suspected to be inadmissible for illegally entering the United States from Mexico; and the other category is a combination of all remaining categories, such as lawful permanent residents or other foreign nationals who may or may not be lawfully present in the United States. Our analysis also categorizes each sector's number of drug seizures by the top three to six types of drugs that Border Patrol seized most frequently, as well as the quantity in pounds of those seizures. We assessed the reliability of apprehension and seizure data by performing electronic testing for obvious errors in accuracy and completeness, reviewing existing information about the data and the systems that produced them, and interviewing agency officials knowledgeable about the data. As a result of our data reliability assessment, we determined that Border Patrol's apprehension and seizure data were sufficiently reliable for our intended use.

We compiled the descriptive information in the northern region AMO branch profiles in appendix III from information provided by each branch and AMO headquarters. We obtained information on staffing for the three northern border branches as of September 2018. We obtained the geographic information on the location of each northern region AMO branch and unit from AMO and located the data geographically using MapInfo. For total flight and float hours across all AMO operating locations and regions, we reviewed CBP data on flight and float hours from fiscal years 2013 through 2017, a time period for which complete data were available at the time of our review. For Border Patrol riverine float hours across all locations, we reviewed and analyzed float hour data from fiscal year 2017, the most recent year for which complete data were available at the time of our review. For data on air and marine missions across AMO's northern region branches and units, we reviewed CBP data on seizures of narcotics, apprehensions, and arrests from fiscal years 2013 through 2017, a time period for which complete data were available at the time of our review.
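The top-category tabulations described above can be sketched in a few lines of Python. This is a hedged illustration of the approach; the column names (citizenship, immigration_status, drug_type, pounds) are hypothetical and do not reflect the actual database fields.

import pandas as pd

# Hypothetical extracts for one sector (illustrative file and column names).
apprehensions = pd.read_csv("sector_apprehensions.csv")
seizures = pd.read_csv("sector_seizures.csv")

# Top countries of citizenship among apprehended individuals, broken out
# by immigration status at entry.
top_countries = apprehensions["citizenship"].value_counts().head(6)
by_status = (apprehensions[apprehensions["citizenship"].isin(top_countries.index)]
             .groupby(["citizenship", "immigration_status"])
             .size())

# Most frequently seized drug types, with total quantity in pounds.
top_drugs = (seizures.groupby("drug_type")
             .agg(seizure_count=("pounds", "size"), total_pounds=("pounds", "sum"))
             .sort_values("seizure_count", ascending=False)
             .head(6))

print(top_countries, by_status, top_drugs, sep="\n\n")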
To determine the reliability of CBP’s data on flight and float hours, and mission information for seizures of narcotics, apprehensions, and arrests data, we examined the data for any anomalies, reviewed CBP guidance and documents for data collection and entry, and interviewed CBP officials to understand their methods for collecting, reporting, and validating the data. We found these data were sufficiently reliable for our purposes of reporting summary data across fiscal years 2013 through 2017. To obtain information on irregular northbound migration in appendix IV, we met with DHS and Border Patrol officials—including the three sectors (Blaine, Grand Forks, and Swanton sectors) with the highest reported levels of irregular northbound migration at the time of our review—and reviewed intelligence reports and assessments. We obtained the descriptive information in appendix IV on irregular northbound migration from a variety of sources. For data from the government of Canada on the number of asylum claimants, we downloaded publicly reported summary data on asylum claimants from the government of Canada for 2012 through 2017. For data on the number of individuals illegally entering Canada between ports of entry known to Border Patrol, we collected and reviewed information from Blaine, Grand Forks, and Swanton sectors for calendar years 2012 through 2017. To determine the reliability of data, we interviewed officials at each sector to understand their methods for collecting, reporting, and validating the data. According to Border Patrol officials at Blaine, Grand Forks, and Swanton sectors, the number of individuals illegally entering Canada between ports of entry was tracked through agent reporting and detection by land surveillance technology, such as surveillance cameras and unattended ground sensors. Based on Border Patrol’s methods for collecting, reporting, and validating the data, we found these data were sufficiently reliable for our purposes of reporting summary-level data. The performance audit upon which this report is based was conducted from October 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with DHS from March 2019 to June 2019 to prepare this nonsensitive version of the original sensitive report for public release. Appendix II: U.S. Border Patrol Northern Border Sector Profiles To provide a descriptive overview of the northern border sectors, we developed a profile for each of the eight U.S. Border Patrol (Border Patrol) sectors located along the U.S.-Canada (northern) border: Blaine, Washington; Spokane, Washington; Havre, Montana; Grand Forks, North Dakota; Detroit, Michigan; Buffalo, New York; Swanton, Vermont; and Houlton, Maine. These profiles are listed in order from the western-most sector to the eastern-most sector and contain an overview of each sector’s geography and area of responsibility and an analysis of apprehensions and drug seizures from fiscal years 2013 through 2017. 
Blaine, Washington Sector profile
Spokane, Washington Sector profile
Havre, Montana Sector profile
Grand Forks, North Dakota Sector profile
Detroit, Michigan Sector profile
Buffalo, New York Sector profile
Swanton, Vermont Sector profile
Houlton, Maine Sector profile

Appendix III: Air and Marine Operations Northern Region Branches

Overview of Air and Marine Operations' Northern Region

Within U.S. Customs and Border Protection (CBP), Air and Marine Operations (AMO) conducts multifaceted missions consisting of direct support to U.S. Border Patrol (Border Patrol) and collaborative efforts with U.S. Immigration and Customs Enforcement's Homeland Security Investigations and other federal, state, and local partner agencies. This includes, but is not limited to, investigative operations, surveillance missions, warrant service, and criminal apprehensions. AMO conducts missions along the U.S.-Canada (northern) border through three branches: Bellingham Air and Marine Branch in Bellingham, Washington; Great Lakes Air and Marine Branch at Selfridge Air National Guard Base, Michigan; and Manassas Air Branch in Manassas, Virginia. Each branch is further divided into units to conduct air or maritime missions. According to AMO data for fiscal years 2013 through 2017, AMO's Northern Region accounted for 14 percent and 22 percent of total AMO flight and float hours, respectively, as shown in table 10. AMO implements a requirements determination process for annual aircraft flight and vessel float hours based on known mission requirements, funding levels, available assets, and the needs of law enforcement partners. Further, flight and float hour allocations across AMO's regions are prioritized through CBP's Flight and Float Hour Executive Oversight Council, which weighs Department of Homeland Security and CBP strategic objectives and the border security requirements, threats, and capacity that will be executed over the course of the upcoming year. In February 2018, CBP also created the Flight and Float Hour Executive Steering Committee, comprised of Border Patrol and AMO executive leadership, to perform periodic audits of flight hour execution, review changing operational environments, validate planning assumptions, and evaluate overall return on investment to best ensure that CBP asset utilization is consistently aligned with its priorities and threats.

Bellingham Air and Marine Branch

AMO's Bellingham Air and Marine Branch is located in Bellingham, Washington, and is comprised of the Spokane and Montana Air Units and the Port Angeles and Bellingham Marine Units. For a map of those operating locations, see figure 20. As of the end of September 2018, Bellingham Air and Marine Branch had 38 authorized Air Interdiction Agent positions and 20 authorized Marine Interdiction Agent positions. According to data provided by AMO for fiscal years 2013 through 2017, missions completed by Bellingham Air and Marine Branch resulted in: 51 apprehensions of potentially removable aliens; 963 arrests of individuals; and 536 drug seizures, including: 204 methamphetamine seizures (1,033 pounds); 93 cocaine seizures (778 pounds); 155 heroin seizures (305 pounds); 65 marijuana seizures (14,132 pounds); and 19 other drug seizures (608 pounds).
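Figures like these can be checked for internal consistency: the seizure counts by drug type should sum to the branch's reported total. A minimal Python check, using only the Bellingham numbers reported above, illustrates the idea.

# Bellingham Air and Marine Branch drug seizures, fiscal years 2013-2017.
bellingham = {"methamphetamine": 204, "cocaine": 93, "heroin": 155,
              "marijuana": 65, "other": 19}
total_reported = 536

# The counts by type should account for every reported seizure.
assert sum(bellingham.values()) == total_reported
print(f"{total_reported} seizures across {len(bellingham)} drug categories")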
Great Lakes Air and Marine Branch

AMO's Great Lakes Air and Marine Branch is located at Selfridge Air National Guard Base, Michigan, and is comprised of the Buffalo and Chicago Air Units and the Sault Sainte Marie, Port Huron, Trenton, Sandusky, Erie, Buffalo, and Rochester Marine Units. For a map of those operating locations, see figure 21. As of September 2018, Great Lakes Air and Marine Branch had 27 authorized Air Interdiction Agent positions and 49 authorized Marine Interdiction Agent positions. According to data provided by AMO for fiscal years 2013 through 2017, missions completed by Great Lakes Air and Marine Branch resulted in: 157 apprehensions of potentially removable aliens; 2,571 arrests of individuals; and 1,475 drug seizures, including: 553 marijuana seizures (6,974 pounds); 474 cocaine seizures (4,408 pounds); 296 heroin seizures (425 pounds); 87 methamphetamine seizures (1,347 pounds); and 65 other drug seizures (107 pounds).

Manassas Air Branch

AMO's Manassas Air Branch is located in Manassas, Virginia, and is comprised of the New York, Plattsburgh, and Houlton Air Units. For a map of those operating locations, see figure 22. As of September 2018, Manassas Air Branch had 35 authorized Air Interdiction Agent positions. According to data provided by AMO for fiscal years 2013 through 2017, missions completed by Manassas Air Branch resulted in: 57 apprehensions of potentially removable aliens; 1,347 arrests of individuals; and 472 drug seizures, including: 161 marijuana seizures (12,015 pounds); 141 heroin seizures (141 pounds); 134 cocaine seizures (707 pounds); 25 methamphetamine seizures (39 pounds); and 11 other drug seizures (107 pounds).

Appendix IV: Irregular Northbound Migration from the United States to Canada

Irregular northbound migration—northbound movement of foreign nationals from the United States across the northern border into Canada between official ports of entry, typically to make an asylum claim—increased in 2017. Specifically, in 2017 the Royal Canadian Mounted Police reported approximately 20,000 irregular northbound migrants intercepted between official ports of entry. The majority of interceptions were reported in the province of Quebec (91 percent), with additional interceptions noted in Manitoba (5 percent) and British Columbia (3 percent). In comparison, from 2012 to 2016 the total number of asylum claimants for all of Canada (including at and between official ports of entry) ranged from approximately 10,000 to 24,000 per year. The total number of asylum claimants for all of Canada (including at and between official ports of entry) increased from approximately 24,000 claimants in 2016 to approximately 50,000 claimants in 2017. According to Border Patrol officials, in 2017 the number of individuals crossing from the United States into Canada, other than those crossing through official ports of entry, increased within 3 of 8 Border Patrol sectors along the northern border: Blaine, Washington; Grand Forks, North Dakota; and Swanton, Vermont.

Blaine Border Patrol Sector. The number of individuals entering Canada between official ports of entry in British Columbia, north of Blaine sector's area of responsibility, known to Border Patrol was approximately 1,200 individuals during the 4-year period from 2012 through 2015, according to sector officials. In 2016, the number of individuals known to Blaine sector increased to approximately 1,100 individuals, and then increased again to approximately 1,400 individuals in 2017.

Grand Forks Border Patrol Sector.
The number of individuals entering Canada between official ports of entry in Manitoba, north of Grand Forks sector’s area of responsibility, known to Border Patrol was approximately 580 individuals during the 4-year period from 2012 through 2015, according to sector officials. In 2016, the number of individuals known to Grand Forks sector increased to approximately 400 individuals, and then increased to approximately 1,000 individuals in 2017. Swanton Border Patrol Sector. The number of individuals entering Canada between official ports of entry in Quebec, north of Swanton sector’s area of responsibility, known to Border Patrol was approximately 1,000 individuals during the 4-year period from 2012 through 2015, according to sector officials. In 2016, the number of individuals known to Swanton sector increased to approximately 1,100 individuals, and then increased to approximately 16,800 individuals in 2017. According to Swanton Border Patrol Sector officials, the majority of known entries into Canada by irregular northbound migrants between official ports of entry have occurred along Roxham Road in Champlain, New York. For a photo of a facility constructed by the government of Canada to process irregular northbound migrants north of Roxham Road, see figure 23. Department of Homeland Security (DHS) and Border Patrol officials we met with identified a bi-national agreement associated with the increased number of irregular northbound migrants from the United States to Canada from 2016 through 2017. Irregular northbound migrants entering Canada between official ports of entry are not subject to the framework established by the 2002 Safe Third Country Agreement signed by Canada and the United States, which governs the processing of asylum claims along the shared land border and applies only to those individuals entering at an official port of entry, not between ports of entry. Therefore, individuals who enter Canada by land between official ports of entry to make an asylum claim may be allowed to stay in Canada rather than have their claim handled by the United States. Individuals seeking to travel to Canada to make an asylum claim, whether or not they may have a valid asylum claim, are made aware of the potential ability to enter and remain in Canada pending an asylum decision due to wide sharing of this information through social media and reporting in the press. Otherwise, for those attempting to enter Canada through an official land port of entry to claim asylum, claimants may be returned to pursue their asylum claim in the country of last presence, which would be the United States, unless they qualify for one of the exceptions in the agreement. According to DHS officials, Canadian data indicates a large percentage of irregular northbound migrants had previously obtained nonimmigrant visas, primarily B1/B2 visas, which authorized their temporary travel to the United States, and subsequently entered Canada between official ports of entry to claim asylum. DHS, in collaboration with the U.S. Department of State, worked to identify, and as appropriate, revoke visas of individuals seeking to enter Canada between official ports of entry. Border Patrol intelligence reporting in 2017 identified visa fraud concerns because individuals obtained visas to enter the United States, when it appeared that their main intention was to enter Canada other than through a port of entry and claim asylum. 
Border Patrol officials stated that the widespread perception among irregular northbound migrants they encounter is that Canada's asylum policies are more welcoming than those of the United States, which has also contributed to the increased trend in irregular northbound migration. These officials cited both U.S. and Canadian reporting on the 2016 U.S. Presidential Election, along with a welcoming statement by the government of Canada and the perceived generosity of benefits upon application for asylum in Canada, as reasons that migrants seek to enter Canada between official ports of entry and claim asylum. According to Border Patrol officials, the northbound asylum flows from the United States to Canada could potentially lead to future attempts to enter the United States illegally between ports of entry from Canada by individuals whose asylum claims are rejected by the government of Canada. According to anecdotal reporting to Border Patrol officials, some of the irregular northbound migrants who entered Canada from the United States were unable to gain status in Canada, or the process was not what they had anticipated. According to the officials, these individuals subsequently attempted to reenter the United States in an effort to gain legal status in the United States. For example, Swanton Border Patrol Sector reported two incidents in April 2018 in which groups of individuals who were apprehended attempting to illegally enter the United States from Canada stated that they were seeking to reenter the United States after their asylum claims were rejected by the government of Canada.

Appendix V: Comments from the Department of Homeland Security

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Christopher Ferencik (Assistant Director), David Alexander, Michele Fejfar, Eric Hauswirth, Grant Holyoak, John Mingus, Sasan J. "Jon" Najmi, Claire Peachey, Carl Potenzieri, and Natalie Swabb made key contributions to this report.
Why GAO Did This Study

The United States and Canada share the longest common non-militarized border between two countries, spanning nearly 4,000 miles of land and maritime borders from the states of Washington to Maine. CBP, within DHS, has primary responsibility for securing U.S. borders at and between ports of entry. GAO was asked to review CBP's efforts to secure the northern border between ports of entry. This report examines, among other things, (1) the staffing and resource challenges that CBP identified and actions it has taken to address those challenges and (2) the extent to which CBP has developed and implemented performance measures to assess its effectiveness at securing the northern border between ports of entry. GAO reviewed agency documentation and met with DHS and CBP officials in headquarters and field locations. This is a public version of a sensitive report that GAO issued in March 2019. Information that DHS deemed sensitive has been omitted.

What GAO Found

U.S. Customs and Border Protection (CBP) identified staffing and resource challenges affecting its enforcement activities along the U.S.-Canada (northern) border and actions to address them, but faces competing priorities. The U.S. Border Patrol (Border Patrol) and Air and Marine Operations (AMO) are the components within CBP responsible for securing U.S. borders between ports of entry in the land, air, and maritime environments. Border Patrol identified an insufficient number of agents, which limited patrol missions along the northern border. AMO identified an insufficient number of agents along the northern border, which limited the number and frequency of air and maritime missions. Border Patrol and AMO also identified a variety of resource challenges along the northern border, such as limited radar and surveillance technology coverage and inadequate facilities to process and temporarily hold apprehended individuals. While the Department of Homeland Security (DHS) and CBP identified actions to address staffing and resource challenges, it is unknown whether these challenges will be addressed. This is primarily because CBP's priority is to secure the U.S.-Mexico (southwest) border. Issued in January 2017, Executive Order 13767 directed DHS to take actions to secure the southwest border by, among other things, constructing physical barriers and hiring thousands of agents. While CBP has performance measures that assess selected border security operations or programs, some of which include data from the northern border, it does not have specific measures to assess its effectiveness at securing the northern border between ports of entry. For example, Border Patrol has performance measures that assess security in remote areas on the northern border, but the measures do not include data from maritime border areas. Developing and implementing such measures could help Border Patrol and AMO better assess the effectiveness of their northern border operations between ports of entry, including addressing challenges due to limited staffing and resources.

What GAO Recommends

GAO is making two recommendations: that Border Patrol and AMO each develop and implement performance measures to assess their effectiveness at securing the northern border between ports of entry. DHS concurred with both recommendations.
Background

DOD Nuclear Enterprise

The DOD nuclear enterprise includes strategic and nonstrategic nuclear forces and the supporting infrastructure and personnel to build, maintain, and control these assets. The strategic nuclear forces include a triad of Air Force ICBMs, Air Force nuclear-capable bomber aircraft, and Navy submarine-launched ballistic missiles carried by SSBNs, as well as associated nuclear munitions, air refueling, and NC3 capabilities. NC3 capabilities are a key part of the defense nuclear enterprise, used to support planning, situation monitoring, and communication of force direction between the President and nuclear forces. Consistent with the New Strategic Arms Reduction Treaty (New START), the United States has limited the number of deployed delivery systems for each of the three legs of the strategic nuclear triad (see fig. 1). The 2018 Nuclear Posture Review states that the triad's synergy and overlapping attributes help ensure the enduring survivability of deterrence capabilities against attack and the capacity to hold at risk a range of adversary targets throughout a crisis or conflict. In addition to the strategic nuclear triad, the defense nuclear enterprise includes nonstrategic nuclear forces: forward-deployed fighters—referred to as dual-capable fighter aircraft—that are able to deliver conventional or nuclear munitions; their associated nuclear weapons; and the supporting infrastructure and personnel to build, maintain, and control nuclear assets. NC3 capabilities are fielded through a large and complex system comprising numerous land-, air-, and space-based components used to ensure connectivity between the President and nuclear forces. Responsibilities for managing NC3 are distributed among many DOD components, including military departments, combatant commands, defense agencies, the Joint Staff, and the Office of the Secretary of Defense. NC3 capabilities provide the President with the means to authorize the use of nuclear weapons in a crisis. NC3 capabilities support five important functions:

Force management: assignment, training, deployment, maintenance, and logistics support of nuclear forces and weapons before, during, and after any crisis.

Planning: development and modification of plans for the employment of nuclear weapons and other operations in support of nuclear employment.

Situation monitoring: collection, maintenance, assessment, and dissemination of information on friendly forces, adversary forces and possible targets, emerging nuclear powers, and worldwide events of interest.

Decision making: assessment, review, and consultation that occur when the employment or movement of nuclear weapons is considered.

Force direction: implementation of decisions regarding the execution, termination, destruction, and disablement of nuclear weapons.

Oversight of 2014 Nuclear Enterprise Reviews' Recommendations

The NDERG is the principal integrated civilian–military governance body for the DOD nuclear enterprise. It was established in 2014 by the Secretary of Defense to ensure the long-term health of the nuclear enterprise by addressing resourcing, personnel, organizational, and enterprise policy issues identified in the 2014 nuclear enterprise reviews.
The NDERG also maintains senior-leader awareness of ongoing issues of importance in the nuclear enterprise, ensures effective sustainment of these critical nuclear capabilities, and provides a forum for strategic-level coordination and integration of issues arising from other oversight committees and councils related to the nuclear enterprise. The NDERG consists of a group of senior officials chaired by the Deputy Secretary of Defense with the Vice Chairman of the Joint Chiefs of Staff as vice chair. The NDERG is supported by a Nuclear Deterrent Working Group, which meets biweekly and reviews the status of the implementation of the recommendations of the nuclear enterprise reviews, and a Nuclear Deterrent Senior Oversight Group, which meets quarterly and reviews any recommendations that the Working Group believes are ready for the NDERG to close. The Nuclear Deterrent Senior Oversight Group—co-chaired by the Deputy Assistant Secretary of Defense for Nuclear Matters, the Deputy Assistant Secretary of Defense for Nuclear and Missile Defense Policy, the Joint Staff Deputy Director for Strategic Stability, and a senior-level representative of the Director of CAPE—also receives annual briefings on DOD components' assessments of their progress, reviews organizational changes, and discusses other issues related to the management, operations, and health of the nuclear enterprise—including human resources and culture, operational availability, sustainment, and modernization and recapitalization issues not directly addressed in other forums. The Deputy Secretary of Defense updates the Secretary of Defense on the NDERG's progress as requested.

In November 2014, the Secretary of Defense directed DOD to address the recommendations from the 2014 nuclear enterprise reviews and directed CAPE to track and assess implementation efforts. The Joint Staff, the Navy, the Air Force, offices within the Office of the Secretary of Defense, and U.S. Strategic Command support CAPE's efforts. CAPE compiled the recommendations from the 2014 nuclear enterprise reviews. In total, CAPE identified 175 distinct recommendations from the three documents associated with the reviews. CAPE then identified 247 sub-recommendations within those recommendations, which were directed to multiple military services or other DOD components. For example, if a recommendation was directed to both the Air Force and the Navy, then one sub-recommendation was made to the Air Force and one to the Navy. CAPE then worked with the military services to identify offices of primary responsibility for implementing actions to address the recommendations, any offices with coordinating responsibility, and any resources necessary to implement each recommendation. CAPE has developed a centralized tracking tool to collect information on progress in meeting milestones and metrics. As shown in figure 2, the tracking tool includes fields for the underlying problem statement, or root cause, and for the recommendation and time frames with milestones for implementing the recommendation. The tracking tool also includes performance measures (referred to as metrics in the tracking tool) to assess both the progress (through "process metrics") and the effectiveness of the implementation actions (through "outcome metrics"). The outcome metrics aid DOD in determining whether implemented recommendations have addressed the underlying problem that was the impetus for the original recommendation.
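To make the tracking tool's layout concrete, the following sketch shows one plausible way an entry could be represented in code. It is a hypothetical illustration based on the fields described above, not CAPE's actual implementation.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Measure:
    description: str
    kind: str  # "milestone", "process metric", or "outcome metric"
    expected_completion: date
    completed: Optional[date] = None  # None until the item is met

@dataclass
class SubRecommendation:
    problem_statement: str       # the underlying root cause
    recommendation: str
    responsible_office: str      # office of primary responsibility
    measures: List[Measure] = field(default_factory=list)
    key_risk: Optional[str] = None  # key risk, if any, documented for the item

Under a layout like this, a sub-recommendation's progress can be read from its process measures, while its outcome measures indicate whether the underlying problem has actually been addressed.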
The tracking tool contains hundreds of unique milestones and metrics, and additional milestones and metrics may be added as they are identified. The Air Force and the Navy also have developed their own methods of tracking their service-specific recommendations. In December 2016, the Deputy Secretary of Defense issued a memorandum that directed the transition of the tracking and analysis responsibilities related to implementing the recommendations of the 2014 nuclear enterprise reviews from CAPE to the military departments and other DOD components. However, CAPE remains responsible for providing guidance to inform the analyses conducted by the military departments and other DOD components, overseeing these analyses, and assessing recommendations for closure. The aim of these changes was to enhance ownership and embed the principles of robust analysis, continuous monitoring, and responsibility throughout the department. In January 2018, in response to a GAO recommendation, CAPE issued additional guidance to aid the military departments and other DOD components in identifying, assessing, and documenting risks associated with the 2014 recommendations. The guidance instructs components to document key risks, defined by CAPE as a risk that requires mitigation by the leadership of the DOD components or a risk that cannot be mitigated within a component's existing authorities and resources—for example, one that cannot be mitigated within the Air Force or Navy and must be raised to a higher authority. As we reported in November 2018, in response to the January 2018 guidance for tracking risks, the Air Force and the Navy included in the centralized tracking tool information on key risks for the recommendations they were responsible for or an indication of the absence of any key risk.

Oversight of the 2015 NC3 Report's Recommendations

The Council on Oversight of the National Leadership Command, Control, and Communications System (NLC3S Council) was established by statute and is responsible for oversight of the command, control, and communications system for the national leadership of the United States. Additionally, as recommended in the 2015 NC3 report, the NLC3S Council reviews the recommendations from the report and assesses them for closure. The NLC3S Council is supported by the National Leadership Command Capabilities Executive Management Board, which comprises a Senior Steering Group and four working groups—Stakeholders, Resources, Assessments, and Nuclear Command and Control Issues. The Executive Management Board ensures that the council is informed of and presents issues that require senior leadership–level decisions. In 2018, the Secretary of Defense approved the designation of the Commander of U.S. Strategic Command as the NC3 enterprise lead with increased responsibilities for operations, requirements, and systems engineering and integration. At that time, the Secretary of Defense also approved the designation of the Under Secretary of Defense for Acquisition and Sustainment as the NC3 enterprise capability portfolio manager with increased responsibilities for resources and acquisition. In November 2018, we recommended that DOD update applicable guidance (such as the NLC3S Council's and Executive Management Board's charters) and identify whether there is a need to request changes to statutory or presidential guidance in order to clarify changes to roles and responsibilities for oversight of NC3.
According to DOD officials, DOD is in the process of implementing these recommendations, with the intent of having the Commander of U.S. Strategic Command and the Under Secretary of Defense for Acquisition and Sustainment provide leadership with respect to NC3 capabilities, while the Executive Management Board maintains its role for those systems that primarily relate to non-NC3 systems, with all three entities reporting on their respective issues to the NLC3S Council. The NLC3S Council is co-chaired by the Under Secretary of Defense for Acquisition and Sustainment and the Vice Chairman of the Joint Chiefs of Staff. Members of the council include the Under Secretary of Defense for Policy; the Under Secretary of Defense for Research and Engineering; the Under Secretary of Defense for Intelligence; the Commander, U.S. Strategic Command; the Commander, North American Aerospace Defense Command/U.S. Northern Command; the Director, National Security Agency; and the DOD CIO. The DOD CIO also serves as the Secretariat for the NLC3S Council and tracks the implementation of recommendations from the 2015 NC3 report, among other activities. Additional organizations may participate in the NLC3S Council's meetings to provide subject-matter expertise. Regular participants in the NLC3S Council include the Office of the Under Secretary of Defense (Comptroller); senior leaders from the Army, the Navy, and the Air Force; the Director, Defense Information Systems Agency; the Director, White House Military Office; and the Director, CAPE.

Sustainment and Maintenance of the Nuclear Enterprise

The 2014 nuclear enterprise reviews included Operations and Maintenance as 1 of 11 categories. Recommendations within this category are primarily related to the sustainment and maintenance of nuclear weapon systems. The reviews identified several Operations and Maintenance core issues related to, among other things, maintenance infrastructure, lack of leadership visibility into sustainment issues, fragmented logistics support, and aging systems and support equipment leading to parts obsolescence issues. Of the 175 recommendations included in the 2014 nuclear enterprise reviews, 30 were categorized as Operations and Maintenance. Other categories in the 2014 reviews, such as Investment and Personnel, also include some recommendations that are related to sustainment and maintenance. DOD conducts sustainment and maintenance on nuclear enterprise weapon systems to ensure that these systems are available to support current military operations and maintain the capability to meet future requirements. Sustainment of weapon systems comprises logistics and personnel services required to maintain and prolong operations of the weapon system. DOD conducts maintenance at two levels: field level and depot level. Field-level maintenance is performed at the unit level on the unit's own equipment, requires relatively fewer skill sets, and occurs more frequently. Depot-level maintenance includes the overhaul, upgrade, or rebuilding of equipment, occurs less frequently, and requires a greater number of skill sets. Depot maintenance includes inspection, repair, overhaul, or the modification or rebuild of end items, assemblies, subassemblies, and parts that, among other things, require extensive industrial facilities, specialized tools and equipment, or uniquely experienced and trained personnel that are not available in other maintenance activities.
Key Sustainment and Maintenance Organizations

A number of DOD organizations are involved in the sustainment and maintenance of nuclear weapon systems. Some key organizations include the following:

Defense Logistics Agency. The Defense Logistics Agency manages approximately one-fifth of the value of DOD's overall inventory and provides billions of dollars in consumable items on an annual basis for depot maintenance conducted at defense industrial sites—Army and Marine Corps depots, Navy Fleet Readiness Centers and Navy shipyards, and Air Force Air Logistics Complexes—where combat vehicles, planes, helicopters, and ships are repaired and overhauled.

Air Force Materiel Command. Air Force Materiel Command conducts research, development, test, and evaluation, and provides acquisition management services and logistics support necessary to keep Air Force weapon systems ready for war. One of six centers within Air Force Materiel Command, the Air Force Nuclear Weapons Center is the nuclear-focused center synchronizing all aspects of nuclear materiel management on behalf of the Air Force Materiel Command commander.

Naval Sea Systems Command and Naval Air Systems Command. Naval Sea Systems Command's affiliated Program Executive Offices—including the Program Executive Office for submarines and the Program Executive Office for the Ohio-class SSBN and its replacement, the Columbia-class SSBN—are responsible for life-cycle management of their assigned programs. Similarly, Naval Air Systems Command provides full life-cycle support of naval aviation aircraft, weapons, and systems.

DOD and the Military Services Have Made Progress in Implementing Recommendations to Improve the Nuclear Enterprise but Have Not Kept Tracking Information Current

DOD continues to make progress in implementing the recommendations from the 2014 nuclear enterprise reviews and the 2015 NC3 report, but the key tracking tools used to provide visibility on the status of the recommendations from these reviews do not provide current and complete information. For example, expected completion dates for key metrics and milestones—key methods of evaluating the department's progress—are not up to date. Additionally, the NDERG is working to develop an additional approach for tracking long-term risks and opportunities to monitor the health of the defense nuclear enterprise. Current and complete information regarding the status and metrics for enduring recommendations from the 2014 and 2015 studies would help inform this effort.

Progress in Implementing Recommendations Continues

2014 Nuclear Enterprise Reviews' Recommendations

DOD continues to make progress in implementing the recommendations of the 2014 nuclear enterprise reviews. As of our last report, in November 2018, DOD had closed 151 sub-recommendations. Based on our review of CAPE's centralized tracking tool, the NDERG has closed five additional sub-recommendations since then. As a result, as of August 2019, the NDERG has closed 156 of the 247 sub-recommendations (see fig. 3).

2015 NC3 Report's Recommendations

DOD continues to make progress in implementing the recommendations of the 2015 NC3 report. Since we last reported, in November 2018, DOD has closed two additional recommendations. As of August 2019, the NLC3S Council has closed seven of the 13 recommendations from the NC3 report (see fig. 4).
The DOD CIO has provided guidance to improve the tracking and evaluation of DOD's progress in implementing the recommendations of the 2015 NC3 report, in response to our second October 2017 recommendation.

Information on Implementation Is Not Kept Current and Complete in DOD Tracking Tools

The military services and other DOD components have not kept information on the implementation status of the 2014 nuclear enterprise reviews' recommendations and 2015 NC3 report's recommendations current and complete. As we have previously reported, CAPE developed a centralized tracking tool to aid in evaluating the actions that have been taken to implement the recommendations from the 2014 nuclear enterprise reviews and inform senior leaders across the defense nuclear enterprise. DOD CIO collects information on the status of the 2015 NC3 report's recommendations in a layout similar to that used for the 2014 recommendations.

Information on 2014 Nuclear Enterprise Reviews' Recommendations Is Not Kept Current

The military departments and other DOD components are responsible for tracking and evaluating the implementation status of the 2014 nuclear enterprise reviews' recommendations; CAPE provides guidance to aid in these efforts. CAPE's 2016 guidance indicates that the military departments and DOD components should, as appropriate, use metrics and milestones to analyze progress. The guidance also states that existing data should be used, where possible, to minimize the workload of this effort. The centralized tracking tool developed by CAPE is the primary means by which progress is tracked. For each of the hundreds of metrics and milestones identified, the tracking tool includes expected completion dates and indicates which have been met and which are behind schedule. The tool identifies both process metrics, to aid in assessing the progress of implementation efforts, and outcome metrics, to aid in determining whether implemented recommendations have addressed the underlying problem that was the impetus for the original recommendation. However, our review found that, for those metrics and milestones that are behind schedule, many of the completion dates have not been updated to reflect when they are now expected to be completed, even when years have passed since the original completion dates lapsed. According to officials from CAPE, the original dates were left in the tracking tool to maintain visibility on how far past their initial expected completion dates these metrics and milestones had gone without being resolved. We previously found that the Air Force and Navy used their own tracking tools in addition to DOD's centralized tracking tool. According to Air Force officials, they still are using their internal tracking tool to help them note progress within the Air Force before providing inputs to DOD's centralized tracking tool. However, according to Navy officials, they are no longer maintaining their internal tracking tool, because they determined that those efforts were unnecessary and redundant with providing inputs to the centralized tracking tool for the relatively few recommendations that the Navy still has open. CAPE's 2016 guidance indicates that the goals of monitoring the implementation of the 2014 nuclear enterprise reviews' recommendations are to track progress toward addressing systemic issues and to assess changes in the overall health of the enterprise.
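A small sketch shows how lapsed dates of the kind described above could be flagged automatically; the record layout and identifiers are hypothetical, not the tracking tool's actual schema.

from datetime import date

# Hypothetical milestone records (illustrative identifiers and dates).
milestones = [
    {"item": "Sub-recommendation A", "expected": date(2017, 6, 30), "completed": None},
    {"item": "Sub-recommendation B", "expected": date(2019, 3, 31), "completed": date(2019, 2, 1)},
]

# A milestone is stale when its expected completion date has passed and
# neither a completion date nor a revised estimate has been recorded.
stale = [m for m in milestones
         if m["completed"] is None and m["expected"] < date.today()]
for m in stale:
    print(f'{m["item"]}: expected {m["expected"]:%B %Y}, no revised date on record')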
Information of this kind provides stakeholders within the defense nuclear enterprise with key means of monitoring progress and evaluating the outcomes of these efforts. DOD's approach has been to gather supporting data and measure the effectiveness of the actions taken for each recommendation separately. However, DOD officials have noted that some enduring recommendations—including recommendations associated with changing a service's culture or morale—will take time to evaluate. In some cases, data related to outcome metrics may not be available to evaluate the effectiveness of actions taken until years after a service has taken key actions to address the recommendation. According to DOD officials, this framework was established to avoid prematurely assuming that actions taken have successfully addressed underlying problems. The need for the military departments and other DOD components to keep information current, particularly estimated dates for the completion of activities, has been emphasized at meetings of the Nuclear Deterrent Working Group. Further, a July 2018 memorandum from the Deputy Secretary of Defense reiterated that the components of the nuclear enterprise, which include the Air Force and the Navy, will continue to track progress in implementing the recommendations from the 2014 nuclear enterprise reviews through 2020. According to officials from the Office of the Deputy Assistant Secretary of Defense for Nuclear Matters and CAPE, the use of the centralized tracking tool is likely to extend beyond 2020, and the Nuclear Deterrent Working Group—which supports the NDERG and its Nuclear Deterrent Senior Oversight Group—is using information from the centralized tracking tool to support additional work. In the context of transitioning from the current centralized tracking tool—which tracks the recommendations of the 2014 nuclear enterprise reviews—to enduring metrics used to characterize the health of the nuclear enterprise, as discussed later in this report, the Deputy Assistant Secretary of Defense for Nuclear Matters stated that it was not a good use of limited personnel resources to request that all metrics and milestones be updated. This is because many of the 2014 recommendations were minor and quickly closed. He noted that improved information about critical recommendations transitioning to enduring recommendations would be of use.

Information on NC3 Report Recommendations Is Incomplete

The approach that the DOD CIO has established to track the recommendations from the 2015 NC3 report largely mirrors the approach developed for tracking the 2014 nuclear enterprise reviews' recommendations. However, DOD CIO officials have noted that the 2015 NC3 report recommendations are more narrowly scoped than some of the recommendations from the 2014 reviews and therefore their tracking is less extensive. DOD CIO has issued guidance that requests that DOD components provide quarterly updates on the progress of implementing the recommendations. It specifies that the components should provide current metrics used to track progress, as well as key milestones, at a minimum by quarter, for the next year. The guidance further states that, as appropriate, both process metrics—to measure whether actions taken address a recommendation—and outcome metrics—to measure end results of interest—should be used. However, metric and milestone information for many of the recommendations in the tracking tool is out of date or incomplete.
In particular, many of the recommendations do not have outcome metrics identified. DOD CIO's guidance does request quarterly updates from the components and provides some information on content for those updates, but it does not specify that the information should be kept current and complete in the tracking tool. Therefore, information such as process and outcome metrics may not be kept complete and current beyond the next year.

Keeping Tracking Tools Current and Complete Can Aid DOD

Standards for Internal Control in the Federal Government states that an organization's management should use high-quality information, which is defined as information from relevant and reliable data that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. CAPE's guidance provides a framework for information that DOD components should consider as they evaluate and track progress made for the 2014 recommendations. The guidance notes that, although the intent of the recommendations is enduring and the systemic issues identified by the 2014 nuclear enterprise reviews should be addressed, the specific approaches to the recommendations can be revised to address the recommendations more effectively. Similarly, the DOD CIO's guidance provides a framework for information that DOD components should consider as they evaluate and track progress made for the 2015 NC3 report recommendations. For tracking both the 2014 nuclear enterprise reviews' and 2015 NC3 report's recommendations, DOD's approaches are limited by the quality and completeness of the data collected and tracked in the centralized tracking tools. Specifically, CAPE's general guidance for tracking the 2014 nuclear enterprise reviews' recommendations does not include a specific requirement to periodically update the information to keep it current. DOD CIO's guidance for tracking the 2015 NC3 report recommendations does request quarterly updates but does not specifically require that information included in the tracking tool be complete. Without current and complete information—including revised dates for when metrics and milestones will be complete—the tracking tools used to track the 2014 and 2015 recommendations do not provide a complete and accurate picture of when tasks are expected to be completed, whether progress is still being made to address the many issues the department has identified, whether any efforts have stalled, or any additional challenges. Additionally, without an accurate picture of the department's progress in addressing these recommendations, the Nuclear Deterrent Working Group has less information to leverage to support additional work to track enduring issues on behalf of the NDERG.

DOD Is Working to Develop an Approach to Identifying and Tracking Long-Term Nuclear Risks and Opportunities

In addition to tracking the 2014 recommendations, the July 2018 memorandum from the Deputy Secretary of Defense stated that stakeholders will develop metrics to capture long-term risks and identify opportunities for regular reporting to the NDERG. The NDERG Charter, issued in early June 2019, provides further direction to the Nuclear Deterrent Senior Oversight Group and its Nuclear Deterrent Working Group, including that members should develop metrics, data, tools, and briefing materials to support the NDERG efforts to identify, track, and address issues, risks, and opportunities across the nuclear enterprise.
The charter further directs the Nuclear Deterrent Senior Oversight Group and Nuclear Deterrent Working Group members to recommend disposition of the long-term recommendations from the 2014 nuclear enterprise reviews and of the long-term efforts to achieve management, operations, and health outcomes directed by the 2018 Nuclear Posture Review. In order to address the direction from the July 2018 Deputy Secretary of Defense memorandum and the June 2019 NDERG Charter, DOD officials stated that the co-chairs of the Nuclear Deterrent Senior Oversight Group have been working with defense nuclear enterprise stakeholders to identify long-term issues that should be tracked to monitor the health of the enterprise. According to agency officials, they would like to adjust how long-term issues that relate to the enduring recommendations from the 2014 nuclear enterprise reviews are monitored. Examples include the need to sustain the current weapon systems until they are replaced, providing adequate funding for the acquisition of new systems, and improving the morale of nuclear forces. Since these recommendations are not expected to be closed as completed within the next few years, the Nuclear Deterrent Senior Oversight Group wants to find ways to improve how the recommendations can be tracked to monitor the health of the enterprise. According to DOD officials, they are currently working to identify relevant metrics from the existing tracking tool as well as existing data sources that might be leveraged to support the long-term monitoring of the health of the enterprise. This may be particularly helpful if the use of the existing tool is discontinued at some point after the 2020 time frame. The efforts of the military services and other DOD components to maintain current and complete information using the existing tracking tools for the 2014 and 2015 recommendations have the potential to aid the department. In particular, existing tools can be helpful for tracking and assessing both enduring recommendations from those reviews as well as additional efforts by the NDERG to assess and monitor the health of the nuclear enterprise. For example, existing outcome metrics can aid in the assessment of whether completed actions have addressed underlying issues that affect the health of the enterprise, identified risks can aid the department in addressing issues as they arise, and the use of the tools themselves can help maintain visibility across the DOD nuclear enterprise, including aiding the communication of timely information to senior leaders.

DOD and the Military Services Are Experiencing Challenges to the Sustainment and Maintenance of Nuclear Systems and Have Various Initiatives to Mitigate Those Challenges

DOD and the military services are experiencing challenges related to sustainment and maintenance of nuclear weapon systems—including challenges identified in recommendations from the 2014 nuclear enterprise reviews—and have ongoing and planned initiatives intended to mitigate these challenges. The military services face challenges related to operating weapon systems beyond their initial design life, parts availability and parts obsolescence, small fleet size, and the maintenance workforce. DOD and the services are mitigating sustainment and maintenance challenges through initiatives to increase parts availability and to improve depot-level maintenance, and through increased tracking of sustainment and maintenance problems.
Challenges to Sustaining and Maintaining Nuclear Weapon Systems

We reviewed sustainment and maintenance for the following nuclear weapon systems:

Minuteman III. The Minuteman III ICBM is a strategic weapon system using a ballistic missile of intercontinental range. Missiles are dispersed in hardened silos to protect against attack and connected to an underground launch control center through a system of hardened cables.

B-2 Spirit. The B-2 Spirit is a multirole bomber capable of delivering both conventional and nuclear munitions.

B-52 Stratofortress. The B-52 Stratofortress is a long-range, heavy bomber that can perform a variety of missions.

AGM-86B ALCM. The AGM-86B ALCM is a long-range, self-guided missile with a nuclear warhead that is carried by the B-52H bomber.

E-4B NAOC. The E-4B NAOC is the primary survivable element of the National Military Command System through which the President, as Commander in Chief, and Secretary of Defense exercise national and nuclear command and control of military forces in day-to-day and crisis operations. In case of national emergency or destruction of ground command and control centers, the aircraft provides a highly survivable NC3 center to direct U.S. forces, execute emergency war orders, and coordinate actions by civil authorities.

E-6B Mercury. The E-6B Mercury is a communications relay and, when manned, a strategic airborne command post aircraft. It provides survivable, reliable, and endurable airborne NC3 capabilities needed to direct, command, and control U.S. strategic nuclear forces.

Ohio-class SSBN. The Ohio-class SSBNs are the most survivable leg of the strategic triad, serving as launch platforms for submarine-launched ballistic missiles. They are designed specifically for stealth and the precise delivery of nuclear warheads.

Table 1 shows examples of sustainment challenges affecting these systems. According to DOD and service officials, while there are acquisition programs under way to replace most of these systems, the current nuclear enterprise systems remain necessary for years to come. The 2014 nuclear enterprise reviews included recommendations to sustain and maintain these systems until they are replaced, such as a recommendation to "fully fund increasing maintenance needs as the triad ages." See appendixes II–VI for additional information and specific sustainment and maintenance challenges and initiatives for select systems.

Weapon Systems Operating Beyond Their Initial Design Life

Almost all of the nuclear weapon systems we reviewed are experiencing challenges related to aging. Specifically, these weapon systems are being deployed beyond their originally intended service lives, which adds to the challenges of sustaining these systems. DOD, along with the Department of Energy, has undertaken an extensive, multifaceted effort to sustain and modernize U.S. nuclear weapons capabilities, including the nuclear weapons stockpile; the research and production infrastructure; and the NC3 system. Some of these sustainment efforts are directly linked to recommendations from the nuclear enterprise reviews of 2014 and the 2015 NC3 report. For example, the 2014 nuclear enterprise reviews recommended that the Air Force establish bomber and ICBM sustainment plans for aging platforms. The 2014 nuclear enterprise reviews also resulted in a recommendation to fully fund increasing maintenance needs as the nuclear triad ages. Table 2 provides additional examples of related recommendations from the 2014 reviews.
According to DOD officials, as these nuclear weapon systems have aged they have required more maintenance in order to sustain them through their extended service lives, and they will continue to do so until they are replaced by new systems. For example, Air Force officials cited aircraft age as the major factor leading to corrosion and other airframe issues that the B-52 is experiencing. The first B-52 model was initially deployed in 1952, and the B-52H—the model currently in use—became operational in 1962. The Air Force now plans to sustain the B-52 until at least 2050, which will require increased maintenance and a series of modernization programs in the 2020s. The E-4B, first deployed in 1980, is also experiencing significant corrosion in the galley area, necessitating a fleet-wide galley replacement program. Neither the B-52 nor the E-4B has an identified replacement program. According to Air Force officials, aging components have also led to structural problems with the Minuteman III ICBM. The Minuteman III was deployed in 1970 with an original planned service life of 10 years. The Minuteman III is now expected to last until the 2030s, when it will be replaced by the Ground Based Strategic Deterrent system. In addition to the weapon systems, support components and support infrastructure are also experiencing age-related challenges. For example, according to Air Force officials, the support infrastructure for the Minuteman III in use today, known as the real property installed equipment, is the original infrastructure that was fielded with the Minuteman I weapon system in 1960, which reached operational capability in 1962. These officials stated that challenges at these facilities include corrosion, water intrusion, collapsed conduits, misaligned doors, and bulging walls. The need to sustain nuclear support equipment is reflected in a nuclear enterprise review recommendation to prioritize nuclear support and test equipment.

Parts Availability Issues and Parts Obsolescence

Parts availability issues and parts obsolescence also affect maintenance on existing weapon systems across the nuclear enterprise. In many cases, the industrial base that produced specific parts for a weapon system is no longer active or is no longer producing the part, so when parts break there are no replacements available. For example, Air Force officials working to maintain the B-52 fleet told us that they have trouble finding suppliers who will produce the necessary parts for such an old airframe. Similarly, the Ohio-class SSBN program is experiencing challenges in sustaining submarines through their planned 42-year service life. The Ohio-class was initially intended to be operational for 30 years. Since it will be in service longer than expected, the Navy is finding that parts not originally intended to be replaced now need replacement. Navy officials stated that obsolescence has a greater impact on parts that were never expected to fail, and therefore lack an industrial base to support replacements, than on parts that the Navy has always planned to replace at some point during the Ohio-class service life. In certain scenarios, maintainers across several weapon systems have had to reengineer parts because the original blueprints do not exist. Maintainers we spoke to reported long lead times to have parts fabricated and delivered, which extends the time that a system is offline for maintenance.
The 2014 nuclear enterprise reviews included multiple recommendations to address parts obsolescence and availability problems in both the Air Force and the Navy, including the examples shown in table 3. Additionally, maintainers may cannibalize parts, a process by which parts are taken from one asset for use in another. This process is conducted during maintenance for both Air Force and Navy nuclear weapon systems. For example, according to Air Force officials, parts are routinely cannibalized from B-2 aircraft that are undergoing modifications so that they can be used on the operational B-2 aircraft. Similarly, Navy officials stated that parts are cannibalized from other classes of submarines to sustain Ohio-class SSBNs when replacement parts are not available elsewhere. Parts cannibalization has also occurred during engineered refueling overhauls. According to Navy officials, in the past, SSBNs completing refueling overhauls have cannibalized parts from SSBNs that are beginning to be overhauled. The final Ohio-class SSBN to undergo an overhaul, the USS Louisiana, will not have that option, because there will be no other SSBNs from which to cannibalize parts; all SSBNs except the USS Louisiana and USS Wyoming have already completed their overhauls. Officials from the Office of the Chief of Naval Operations said, however, that the loss of this option does not concern them for the remaining overhauls. Small Fleet Size Several legacy nuclear systems have a limited number of assets, which can create challenges for meeting operational requirements while at the same time conducting maintenance. In particular, when even a few assets of a small fleet are unavailable, it can become difficult or impossible to meet operational requirements. Scheduling maintenance is one such challenge, because taking one aircraft down for maintenance will have a proportionally greater effect on the number of aircraft available for operations than it would for a larger fleet. For example, according to Air Force officials, the B-2 is experiencing challenges related to maintaining aircraft availability during the extensive modernizations that are being conducted, including integration of a new weapon and upgrades to its radar system. Scheduling this modernization process, part of the effort to sustain the B-2, is challenging given that there are only 20 aircraft in the fleet. Taking an aircraft down for maintenance limits the number of aircraft available for operational use by U.S. Strategic Command. Similarly, Air Force officials told us that the time needed for maintenance and modernization efforts on the E-4B was a primary factor leading to decreased availability of the E-4B, because of the small number of aircraft in the fleet—four in total. Having only four aircraft means that delays currently experienced during depot maintenance and installation of modifications have larger effects on the overall availability of the fleet. One aircraft unavailable as it undergoes these actions results in one quarter of the fleet being unavailable for operations. Unscheduled maintenance could further exacerbate these scheduling challenges and conflict with operational requirements. Having a small fleet with some systems in maintenance could also impede the force's ability to surge if needed.
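The arithmetic behind these small-fleet effects is simple but worth making explicit. The sketch below is our illustrative calculation, not a DOD model; it uses the fleet sizes cited above (20 B-2s and four E-4Bs), assumes every aircraft not in maintenance or test status is available for operations, and uses a function name of our own choosing.

```python
def share_unavailable(fleet_size: int, offline: int) -> float:
    """Fraction of a fleet unavailable while `offline` aircraft are in
    depot maintenance, modification, or test status."""
    if not 0 <= offline <= fleet_size:
        raise ValueError("offline count must be between 0 and fleet size")
    return offline / fleet_size

# One E-4B in depot takes a quarter of the four-aircraft fleet offline.
print(share_unavailable(4, 1))    # 0.25

# One B-2 offline costs 5 percent of the 20-aircraft fleet.
print(share_unavailable(20, 1))   # 0.05

# The same single aircraft costs a 76-aircraft fleet (the B-52 fleet
# size discussed in appendix IV) only about 1.3 percent of capacity.
print(share_unavailable(76, 1))   # ~0.013
```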
The B-52 fleet has experienced a unique challenge because it has recently been used extensively in conventional operations. According to Air Force officials, it takes time to change a B-52 configuration from conventional to nuclear to ready the aircraft for a nuclear mission, which may affect aircraft availability. According to officials from the Office of the Deputy Assistant Secretary of Defense for Nuclear Matters, reduced availability also negatively affects readiness through the reduction of training opportunities. Maintenance Workforce Challenges Security-clearance backlogs for the maintenance workforce are a challenge with respect to certain nuclear weapon systems. Without at least a secret security clearance, maintainers may be limited in the activities they can perform on a nuclear system. For example, an Air Force official explained that without a clearance maintainers are not only limited in the activities they can perform on the B-2, but they also cannot complete some of the training they need. To mitigate this challenge, the Air Force sometimes chooses to issue interim clearances, but in doing so unit commanders must accept additional risk. Specifically, since background investigations may not be complete at the time these interim clearances are issued, it is possible that someone who has been issued an interim clearance will ultimately be found ineligible for that security clearance because of information discovered during the background investigation. Similarly, there is a backlog of top secret clearances for missile-wing personnel working with the Minuteman III, including maintainers. Again, the services sometimes choose to issue interim clearances, but leadership must accept that risk, and interim clearances may have limitations. For example, according to officials from one of the missile wings we spoke with, a missileer in that wing with an interim top secret clearance can complete training for the Minuteman III but cannot be certified to be on a two-person alert team. The 2014 nuclear enterprise reviews included several recommendations to improve various issues related to workforce, including the examples shown in table 4. We have previously found that security-clearance backlogs, and the resulting delays in determining clearance eligibility and issuing initial clearances, can result in millions of dollars of additional costs to the federal government. We have also found that the backlogs can result in longer periods needed to complete national security–related contracts and lost opportunity costs if prospective employees decide to work elsewhere rather than wait to get a clearance. Further, we have found that the backlogs can diminish the quality of work, because industrial contractors may be performing government contracts with personnel who hold the necessary security clearances but are not the most experienced or best-qualified personnel for the positions involved. Additionally, we identified the personnel security-clearance process as a high-risk area in March 2019, and we continue to monitor progress in addressing the weaknesses in this area. Sustainment and Maintenance Initiatives Parts Availability and Obsolescence The services have taken steps to ease the effects of parts availability issues and obsolescence.
For example, partly in response to nuclear enterprise review recommendations, the Air Force has broadened the definition of the Minuteman III weapon system—a process the Air Force refers to as demarcation—and instituted programmed depot maintenance for the weapon system. The Air Force’s demarcation effort centralized parts funding and inventory management for all of the essential components of the Minuteman III and integrated the entire weapon system into a standard Air Force supply process. According to Air Force officials, the Air Force is also working with the Defense Logistics Agency to identify and catalog parts that previously had no identification numbers associated with them. Officials said that programmed depot maintenance is expected to result in a steady, predictable demand level for parts, which will help the Air Force ensure that parts are available and incentivize vendors to manufacture parts, including previously obsolete parts for which there was no steady source of supply. Additionally, both of these efforts are expected to reduce the likelihood that parts will be unavailable when needed. Navy officials explained that for the Ohio-class SSBN, when an industrial base supplier is not able to meet the need for certain obsolete parts, the Navy purchases enough parts to “stock the shelf” by including in one contract a quantity of the part sufficient to last for the life of the SSBNs. Additionally, the Navy has developed programs such as the Trident Planned Equipment Replacement Program, which has identified over 300 critical parts and has them manufactured and ready to be used for replacement when SSBNs are undergoing planned maintenance. The Defense Logistics Agency has increased its support to the nuclear enterprise to help ensure that parts are available when they are needed. In 2015, the Defense Logistics Agency established a Nuclear Support Office from its headquarters staff to synchronize resources to ensure responsive support to the DOD nuclear enterprise. According to Defense Logistics Agency officials, the office has 13 people, three of whom are embedded at U.S. Strategic Command, Air Force Space Command, and Air Force Global Strike Command. In the Defense Logistics Agency’s 2018–2026 strategic plan, supporting the nuclear enterprise is the top objective. According to Defense Logistics Agency officials, they also have a series of new initiatives to increase materiel availability and accomplish activities such as paying for reverse engineering to fill gaps in the technical data for nuclear enterprise systems; working in additive manufacturing to set the standard for 3D printing and polymers across DOD and subsequently printing parts on demand; and identifying weaknesses in the industrial base and focusing investments in those areas. The materiel availability effort currently focuses on determining how to help the services when they cannot find a part and on addressing the shortfall through one of these initiatives. Depot-Level Maintenance Processes The Air Force and Navy have taken steps to improve depot-level maintenance across the nuclear enterprise. For example, the Air Force introduced programmed depot maintenance for the Minuteman III weapon system in 2014 and transformed ICBM weapon system sustainment processes into a standardized, integrated planning and support model.
For the E-4B, according to E-4B program officials, the Air Force has initiated incentivized programmed depot maintenance gates that provide contractors additional financial incentive to complete increments of depot maintenance, as well as the entire depot maintenance process, on time or early. The E-4B program office is implementing this incentive structure in an effort to decrease the E-4B’s time spent in depot maintenance. Additionally, the Air Force has several initiatives under way to mitigate B-2 sustainment and maintenance challenges, including increasing the intervals between depot-level maintenance and merging modernization and depot maintenance efforts so that aircraft spend less time down and more time available. In addition, there are multiple ongoing initiatives to improve the B-2’s supply chain, including using predictive analysis and forecasting tools to help determine how many spare parts to keep in stock. To sustain the Ohio-class SSBN fleet, the Navy has conducted engineered refueling overhauls on all SSBNs except for the USS Wyoming and USS Louisiana, the last two SSBNs to enter service. This major maintenance is intended to help sustain the Ohio-class SSBN fleet until its service life reaches 42 years and it is replaced by the Columbia-class SSBN. These engineered refueling overhauls have taken longer than originally anticipated. Navy officials attribute these delays to the submarines requiring more maintenance work than expected, as well as to some delays in acquiring parts. Increased Tracking of Sustainment and Maintenance Issues Over the past several years, DOD and the services have increased their attention to and tracking of nuclear weapon systems maintenance and sustainment issues. As we have previously found, DOD and the military services have taken steps to improve oversight of the nuclear enterprise in response to the 2014 reviews. For example, DOD has established or participated in a number of oversight organizations that aid in the management of the defense nuclear enterprise. These include the NDERG, which the Secretary of Defense established in 2014 to ensure the long-term health of the nuclear enterprise by addressing issues identified in the 2014 nuclear enterprise reviews, among them sustainment and maintenance-related issues. The Air Force and Navy have also taken actions to improve oversight of sustainment and maintenance. For example, the Air Force, through its Nuclear Mission Assessment effort, uses independent analyses of various data sources to recognize challenges within the Air Force nuclear enterprise, including sustainment and maintenance problems. Additionally, the Air Force implemented the Nuclear Weapon System Enterprise Review, which was developed in 2016 by the Air Force Nuclear Weapons Center with support from Air Force Materiel Command. According to Air Force documentation, the review provided timely insight into the comprehensive health of individual nuclear weapon systems and an assessment of how well the enterprise is performing. Nuclear weapon systems that were specifically reported on in the Nuclear Weapon System Enterprise Review included ALCM, Minuteman III, and NC3 systems. The Air Force modeled its Nuclear Weapon System Enterprise Review in part on assessment and reporting already completed for all aircraft, including the B-2 and B-52 bombers, through its Weapon System Enterprise Review briefings.
Weapon System Enterprise Review metrics are tailored to each weapon system and include data on cost, schedule, performance, and funding. These data are compiled into a quarterly briefing report for Air Force major commands and Air Force headquarters. According to Air Force officials, information included in the Nuclear Weapon System Enterprise Review was related to 10 recommendations from the 2014 nuclear enterprise reviews and the 2015 NC3 report. Tracking this information helped the Air Force to close out the recommendations assigned to Air Force Materiel Command. According to Air Force officials, as of July 2019 the Air Force had discontinued the use of the Nuclear Weapon System Enterprise Review. The officials said that Air Force Nuclear Weapons Center and Air Force Global Strike Command are currently collaborating on a replacement presentation focused on weapon system availability; however, this effort has not been finalized. The officials further stated that the Air Force has transitioned to an Aircraft Availability Improvement Program construct with an aircraft readiness focus and is working to establish an equivalent for the nonflying weapon systems (i.e., Minuteman III and NC3). The Navy oversees its leg of the nuclear triad using the Navy Nuclear Deterrent Mission Oversight Council. The council is a senior Department of the Navy forum that is responsible for coordinating the Navy’s nuclear weapon activities (safety, security, reliability, and nuclear weapons incident response), operations, personnel, policy, materiel support, and oversight functions. According to Navy officials, the Navy Nuclear Deterrent Mission Oversight Council addresses long-term issues affecting the Navy’s nuclear enterprise and identifies and monitors risks associated with those issues, including the actions taken in response to sustainment and maintenance-related recommendations from the 2014 nuclear enterprise reviews. According to Navy officials, the Navy has also established an SSBN Sustainment Working Group and a Trident Planned Equipment Replacement Program Working Group to address Ohio-class sustainment and maintenance-related issues. Conclusions DOD and the military services have made progress in addressing the recommendations from the 2014 nuclear enterprise reviews and the 2015 NC3 report. They have done so in part by establishing and improving a number of processes to aid in the sustainment of defense nuclear enterprise systems. The department is shifting the NDERG’s focus from monitoring the status of the 2014 recommendations to monitoring the long-term health of the enterprise. This shift should position the NDERG to better perform its oversight functions as the principal integrated civilian–military governance body for the defense nuclear enterprise. This is important because many of the recommendations that remain open are focused on long-term sustainment of the enterprise or are designed to be closed only after progress in addressing the issues can be meaningfully evaluated. It is important that the department and the military services continue to use the successful tools they have created to monitor these efforts and leverage those tools (and the premises behind them) as they create new mechanisms to maintain senior-leader visibility of the defense nuclear enterprise.
Providing current, complete, and relevant information on the status of service and DOD component actions to address recommendations, along with an understanding of metrics, milestones, and risks, will allow senior leadership to maintain oversight of the department’s progress. In particular, such visibility will help senior leaders track efforts to address past failings, determine whether those efforts are having the intended effects and addressing root problems, and achieve the desired end state of a healthy defense nuclear enterprise. These existing processes can help inform additional processes the department develops to monitor the health of the nuclear enterprise. The collection and assessment of information to maintain the currency and completeness of information in existing tools may also allow the department to identify potential emerging issues that could negatively affect the vital programs, infrastructure, and personnel essential to the maintenance of an effective nuclear deterrent. Recommendations for Executive Action We are making the following two recommendations to DOD: The Secretary of Defense should ensure that the Director of CAPE, in coordination with the Deputy Assistant Secretary of Defense for Nuclear Matters, the Deputy Assistant Secretary of Defense for Nuclear and Missile Defense Policy, and the Joint Staff Deputy Director for Strategic Stability, as co-chairs of the Nuclear Deterrent Senior Oversight Group, update the applicable guidance for methods of tracking and evaluating progress on implementation of the recommendations from the 2014 nuclear enterprise reviews, requiring DOD components to keep information—including any revised time frames—current. (Recommendation 1) The Secretary of Defense should ensure that the Under Secretary of Defense for Acquisition and Sustainment updates the applicable guidance for methods of tracking and evaluating progress on implementation of the recommendations of the 2015 NC3 report, requiring DOD components to keep information—including metrics for measuring progress and outcomes as well as any revised time frames that may extend out more than 1 year—complete and current. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of the classified report to DOD for review and comment. The department’s comments on the classified report are reprinted in appendix VII. In its comments, DOD concurred with both of our recommendations. DOD also provided technical comments on the classified report, which we incorporated as appropriate. In concurring with our first recommendation, DOD stated that the Nuclear Deterrent Senior Oversight Group co-chairs or, as necessary, the Deputy Secretary of Defense as the chair of the NDERG, will update the applicable guidance to ensure that time frames and other information associated with planned actions are kept up to date. In concurring with our second recommendation, DOD stated that the DOD CIO and, as appropriate, the Under Secretary of Defense for Acquisition and Sustainment as the NC3 capability portfolio manager, will update the applicable guidance to ensure that metrics, time frames, and other information associated with planned actions are kept up to date and complete. We are encouraged that DOD is planning to take these actions to address our two recommendations.
We believe that providing current, complete, and relevant information on the status of service and other DOD component actions to address recommendations and an understanding of metrics, milestones, and risks will allow senior leadership to maintain oversight of the department’s progress. This may also allow DOD to identify potential emerging issues that may negatively affect the vital programs, infrastructure, and personnel essential to the maintenance of an effective nuclear deterrent. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, of the Navy, and of the Air Force; the Commander, U.S. Strategic Command; the Department of Defense Chief Information Officer; and the Director of the Office of Cost Assessment and Program Evaluation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or kirschbaumj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Offices That We Contacted Appendix II: Challenges for the Sustainment and Maintenance of the Minuteman III through the End of Its Service Life Minuteman III Overview Minuteman III is a strategic intercontinental ballistic missile (ICBM) weapon system that represents one leg of the nation’s nuclear triad. First deployed in 1970 with a planned service life of 10 years, the Minuteman III weapon system consists of missiles as well as 450 launch facilities and 45 launch control centers. The Minuteman III service life has been extended since its deployment through various service-life extension programs. Launch facilities are connected to underground launch control centers through a system of hardened cables. A launch facility is an unmanned site that houses the missile and all equipment required to maintain the missile in a launch-ready configuration. These underground facilities have been considered part of the Minuteman III weapon system since 2014. Missile alert facilities are manned compounds that encompass the launch control center, a launch control support building, and a launch control equipment building. Missile alert facilities are crewed by security personnel, a cook, a facilities manager, and a launch crew. Launch crews, consisting of two officers, perform around-the-clock alert duty in the underground launch control center. See figure 5 for components of the Minuteman III weapon system. Nuclear command, control, and communications (NC3) systems and related procedures ensure that launch crews in the launch control centers can receive and authenticate the President’s authorization for the use of nuclear weapons. In the event that connectivity is lost between a launch control center and an associated launch facility, other NC3 capabilities are available to carry out the direction of the President. For example, launch control centers other than the one that lost connectivity can communicate with that launch facility as well as with numerous other launch facilities. Further, an E-6B aircraft configured as an Airborne Command Post can transmit a launch command to the ICBM force through the Airborne Launch Control System capability.
Minuteman III has undergone many life extension sustainment efforts to maintain its warfighting capabilities. The Air Force plans to sustain Minuteman III through 2030—50 years past its initial planned service life—and gradually draw down the weapon system before it is finally retired in 2036, as it is replaced by the Ground Based Strategic Deterrent ICBM weapon system. The Ground Based Strategic Deterrent has a planned initial operating capability date of 2029 and is to be fully deployed by 2036. Figure 6 provides a timeline of the expected service life of the Minuteman III ICBM weapon system. Minuteman III Challenges According to Air Force officials, Minuteman III is experiencing challenges related to aging facilities, aging infrastructure, and parts obsolescence. Aging facilities and infrastructure continue to affect the weapon system. According to Air Force officials, most of the real property installed equipment in use today is the original infrastructure that was fielded with the Minuteman I weapon system in 1960 and achieved operational capability in 1962, and only slight modifications have been made over the years. Additionally, challenges with critical subsystems also exist, and while there are short-term mitigation strategies for each subsystem, no long-term replacements are planned for the Minuteman III weapon system other than the fielding of its replacement program, the Ground Based Strategic Deterrent. Examples of facilities and infrastructure challenges include corrosion, water intrusion, collapsed conduits, misaligned doors, and bulging walls. According to Air Force officials, even attempting to replace small items can be difficult, because multiple subsystems must be replaced to support the modification. Diminishing manufacturing sources, material shortages, and obsolescence issues are additional contributing factors, because they make it difficult to maintain a credible supply chain for Minuteman III parts. Additionally, officials said that depot maintenance, interim maintenance, and organizational maintenance have all been affected by parts obsolescence, diminishing manufacturing sources, and material shortages, as has NC3 equipment. The Minuteman III weapon system is also facing continued asset attrition. According to Air Force officials, as a result of the expected attrition of current field assets, the Minuteman III weapon system will be unable to meet full mission requirements after 2026 should full deployment be required. The Air Force expends four Minuteman III ICBMs per year on testing. According to the officials, continued asset attrition is also affecting the Minuteman III retirement schedule. Additionally, the Air Force Minuteman III program has experienced personnel challenges. According to Air Force officials, the Air Force has a backlog of top secret clearances for missile wing personnel, including maintainers, and it can take up to 2 years of a missileer’s 5-year commitment for a top secret clearance to come through. The officials told us a missileer can complete training with an interim top secret clearance but cannot be certified under the Personnel Reliability Program and therefore cannot be assigned to a two-person alert team. This makes it a challenge for missileers with interim clearances to keep up with their peers, and because commanders cannot assign them to alert duty, it places an additional burden on cleared missileers, who must perform more alert-duty assignments.
According to Air Force officials, the Air Force has also identified challenges associated with scheduling maintenance activities, including the need to balance longer working days against the additional risks that maintainers face as a result of those longer days. Officials also said that because launch control centers, launch facilities, and other elements of the Minuteman III weapon system are dispersed over the large areas that make up the missile fields, maintainers may need to travel several hours from their base to reach the location of a maintenance activity. These travel times have resulted in extended workdays for maintainers and security forces or the need to split maintenance jobs between two shifts, which decreases the number of personnel available to work at other locations. Minuteman III Challenge-Mitigation Efforts To mitigate challenges associated with the Minuteman III weapon system—including limitations in the availability of parts—the Air Force has broadened the definition of the Minuteman III weapon system, a process the Air Force calls demarcation, to include some additional related facilities, and has instituted programmed depot maintenance for the weapon system. According to Air Force officials, demarcation centralized parts funding and inventory management for all of the essential parts of the Minuteman III and integrated the entire weapon system into the standard Air Force supply process. Additionally, according to the officials, the ICBM System Directorate has established a Weapon System Supply Chain Management office to oversee the commodity and organic support required to meet the daily needs of the warfighter and to sustain Minuteman III throughout Ground Based Strategic Deterrent deployment. The officials said the Weapon System Supply Chain Management office conducts predictive forecasting of the demand for parts, tracking potential demand and parts supportability as an ongoing analysis process. It does this through an analysis tool that draws on information from multiple supply databases to identify rising request levels in maintenance data systems and mission-capable conditions reported from the field, and it uses these data to identify the parts that will be needed. Additionally, Air Force Global Strike Command conducted an end-to-end review of Minuteman III weapon system maintenance to determine whether ICBM maintenance organizations are organized, trained, and equipped to meet the current and future needs of the weapon system. The review noted that a questionable manpower standard, aging resources and equipment, and organizational inefficiencies have reduced the effectiveness of maintenance and the health of the Minuteman III. Subject-matter experts from various Air Force organizations and the Navy assessed maintenance and provided recommendations on methods, training, resources (supply and equipment), infrastructure, manpower, support, culture, and leadership. For example, the review observed that parts and equipment availability challenges continue to affect the mission. From this observation the review offered several recommendations, including that the Air Force Nuclear Weapons Center set aside all parts for weapon system testing so that they are available when the tests occur, every 5 years. This is intended to ensure that the parts that are set aside are not used at the missile wings.
The review also recommended a number of efforts to improve the management of maintenance schedules, including increased coordination and advance planning of those schedules. According to Air Force officials, this allows maintenance commanders to make informed decisions, in advance, regarding when longer working days are appropriate. A number of service-life extension programs are under way to sustain the Minuteman III until the Ground Based Strategic Deterrent arrives. Additionally, ICBM programmed depot maintenance was introduced in 2014 and transformed processes for ICBM weapon system sustainment into a standardized, integrated planning and support model that performs maintenance to refurbish portions of the weapon system. According to Air Force officials, the idea was to have the Minuteman III weapon system undergo depot maintenance similar to the periodic depot maintenance that aircraft undergo. However, the depot team must conduct portions of the maintenance in the missile fields instead of bringing the weapon system to a depot. This new programmed depot maintenance takes individual Minuteman III launch facilities offline to conduct major maintenance. Air Force Nuclear Weapons Center works with the Defense Logistics Agency to procure parts as part of programmed depot maintenance planning. According to Air Force officials, the plan is to have 57 launch facilities go through the programmed depot maintenance process each year and to refurbish all launch facilities over an 8-year period. Additionally, the current programmed depot maintenance efforts are implementing a standard set of maintenance activities across all facilities, with some additional issues addressed on a case-by-case basis. To track the health of the Minuteman III, the Air Force Nuclear Weapons Center assigns predictive health measures to the systems. These predictive health measures estimate when a specific maintenance activity will be needed for each weapon system part—for example, when a part will likely fail and need to be replaced—based on assessments of historical data and engineering analysis. This approach emphasizes ICBM sustainment through reliability-centered maintenance, which allows for the continuous evaluation of system performance. Additionally, the predictive health measures, based on data from Air Force maintenance data-collection systems, are analyzed monthly for all launch facilities and launch control centers across the three missile wings. According to Air Force officials, predictive health measures enable the Air Force to identify early indications of when systems may need additional maintenance and to analyze health trends to identify issues—such as parts failures—across the entire Minuteman III force. The use of predictive health measures and reliability-centered maintenance also allows the Air Force to better plan for when maintenance activities, and the related resources, will be needed, so that issues can be addressed before they arise. According to Air Force officials, Air Force Global Strike Command also collects and reports on metrics monthly, based on Integrated Maintenance Data System write-ups and predictive health metrics. Officials told us that the Integrated Maintenance Data System is difficult to learn and that no formal training on the system is available; as a result, the quality of the data in the system depends heavily on the expertise of the individual entering them.
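The report does not describe the algorithms behind these predictive health measures. The sketch below is a minimal, hypothetical illustration of the general idea: estimate a part's expected life from historical failure intervals and flag it for planned replacement before that estimate is reached. The function names, the 80 percent margin, and the data are ours, not the Air Force's.

```python
from statistics import mean

def mean_time_between_failures(failure_intervals_hours: list[float]) -> float:
    """Estimate mean time between failures from observed intervals
    between a part's historical failures."""
    return mean(failure_intervals_hours)

def flag_for_replacement(part_age_hours: float,
                         failure_intervals_hours: list[float],
                         margin: float = 0.8) -> bool:
    """Flag a part once its age exceeds `margin` of its estimated mean
    time between failures, so replacement can be planned before failure."""
    return part_age_hours >= margin * mean_time_between_failures(failure_intervals_hours)

# Hypothetical history for one launch-facility component: hours between
# past failures, as recorded in a maintenance data-collection system.
history = [8200.0, 7900.0, 8600.0, 8100.0]
print(mean_time_between_failures(history))    # 8200.0 hours
print(flag_for_replacement(7000.0, history))  # True: schedule replacement
print(flag_for_replacement(4000.0, history))  # False: within expected life
```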
Appendix III: B-2 Bomber Faces Challenges Associated with Its Small Fleet Size and Parts Obsolescence Issues B-2 Overview The B-2 Spirit is a multirole, dual-capable heavy bomber. The B-2 is the only U.S. aircraft that combines a long-range capability, a large payload, and stealth into a single platform, giving it the ability to project air power globally. The B-2 became operational in 1997, and the current B-2 operational fleet consists of a total of 20 aircraft. The 509th Bomb Wing, located at Whiteman Air Force Base, Missouri, is the sole operational unit for the B-2. The 509th Bomb Wing usually maintains 15 operationally available B-2s. At any one time, two aircraft are undergoing sustainment and modernization upgrades, two are in programmed depot maintenance, and one is designated as a test aircraft. The Air Force plans to sustain the B-2 into the 2030s (see fig. 7). The B-2 will eventually be replaced by the B-21, which will assume the penetrating strike role of the B-2. The B-21 is expected to become operational in the mid-2020s, but no replacement schedule for the B-2 has been identified. The B-2 is undergoing multiple modernization programs while maintaining existing capabilities through form, fit, and function replacements for components that are obsolete or no longer supportable. B-2 modernization efforts are ongoing for communications, navigation, defensive management, weapons, and the airframe. B-2 Challenges Because the B-2 is aging and the fleet is small, parts obsolescence is a challenge. A unique sustainment aspect of the B-2 is the focus on managing its low-observable stealth capability. The B-2 Low Observable Integrated Product Team manages the Low Observable Signature and Supportability Modifications portfolio of projects, which is aimed at maintaining the stealth capability of the B-2 by monitoring, maintaining, and enhancing the radar cross section (or “signature”) of the aircraft. In addition to specific efforts to sustain the low-observable stealth capability, every other sustainment and modernization activity for the B-2 must be assessed early in the planning stages for any effects on this capability. According to Air Force officials, in addition to maintaining readiness for its nuclear mission, the B-2 platform is also in high demand to support conventional bomber missions. However, the Air Force has a limited number of aircraft to meet this demand. Consequently, the Air Force’s B-2 Division, along with Air Force Global Strike Command and the 509th Bomb Wing, must carefully manage the timing of maintenance activities, aircraft modifications, programmed depot maintenance, assignment of a flight test aircraft, and the flying-hour program. This requires an intricate schedule of aircraft availability for each effort while trying to maintain overall operational availability for the B-2 fleet. According to Air Force officials, small-fleet dynamics have led to high costs, diminishing vendor and parts availability, and readiness concerns. B-2 Challenge-Mitigation Efforts Various initiatives are under way to improve the availability of B-2s; cumulatively, they are expected to make one additional aircraft available for operations by fiscal year 2022. Several of these initiatives are directly related to improving B-2 sustainment and maintenance processes and procedures.
Examples of sustainment and maintenance-related initiatives include the following: The B-2 Programmed Depot Maintenance Process Improvement initiative is a collaborative effort between the B-2 program office and Northrop Grumman to increase capacity during the depot maintenance process in order to incorporate modifications during depot maintenance. This initiative is expected to reduce downtime at the 509th Bomb Wing by allowing modifications that would normally occur at the wing—making an aircraft unavailable for operations—to occur during planned depot maintenance. The B-2 program office increased the interval between programmed depot maintenance periods from 7 years to 9 years. The original 7-year interval was driven by the expected life of low-observable coatings; according to B-2 program officials, they have since determined that the expected life of these coatings is 9 years. Additionally, the Air Force’s B-2 Division established the B-2 Obsolescence Integrated Product Team in 2018 to provide management oversight of obsolescence. The team convenes monthly to develop a strategic plan to enhance processes, communications, and consolidation of obsolescence issues affecting B-2 modernization and sustainment. A list of obsolete parts, currently totaling over 100, as well as planned mitigation strategies, is consolidated and reviewed quarterly. The integrated product team is also developing a Diminishing Manufacturing and Materiel Shortages Management Plan to define the structure, process, management, and oversight of obsolescence for the life cycle of the B-2. Further, according to Air Force documentation, for each B-2 sustainment and modernization program, the government and prime contractor establish a joint Obsolescence Working Group that is responsible for reviewing the program’s strategy to mitigate diminishing manufacturing and materiel shortages. Appendix IV: B-52 Bomber Faces Maintenance Challenges through 2050 B-52 Overview The B-52 Stratofortress is a dual-capable heavy bomber used to meet the United States’ airborne strategic nuclear deterrence and global precision attack missions and objectives. The B-52 began operations in 1952. Eight models were produced, with a total production quantity of 742. The final version of the B-52, the “H” model, was the last model produced and became operational in 1961. The current B-52 operational fleet consists of a total of 76 aircraft, 46 of which are designated as nuclear capable. B-52 operational units consist of the 2nd Bomb Wing, located at Barksdale Air Force Base, Louisiana, and the 5th Bomb Wing, located at Minot Air Force Base, North Dakota. The B-52 originally had a planned service life of approximately 20 years. However, the Air Force now plans to sustain the B-52 until at least 2050 (see fig. 8). An eventual replacement for the B-52 has not yet been identified. The B-52 is undergoing several modernization programs planned for completion in the 2020s. The B-52 Commercial Engine Replacement Program will replace the aging TF33-PW-103 engine with new commercial off-the-shelf engines capable of meeting the needs of the B-52 platform and keeping the B-52 viable until 2050 and beyond. The engine replacement program was scheduled to begin in fiscal year 2019 and to be completed in fiscal year 2023. Additional modernization programs include installation of a Global Positioning System Interface Unit and a radar modernization program.
B-52 Challenges According to B-52 maintainers, the biggest maintenance limitation they are experiencing is with the engine. In 2017, an engine fan disk failure on one of the eight engines on a B-52 caused the engine to detach from the aircraft while in flight. The Air Force has identified the resulting fan disk inspection and replacement as a serious risk because of the time it will take to complete, and it expects the inspection, removal, and replacement to affect the fleet into the 2020s. Further, the current TF33 engines are unsupportable beyond 2030. According to Air Force officials, the engine replacement program is expected to negatively affect aircraft availability rates until it is completed in 2023. Air Force officials also expressed concern that, because the new commercial engines have many digital components, their installation could increase the B-52’s cybersecurity risk. At 60 years old, the B-52 is experiencing structural issues typical of aging aircraft. The extension of the B-52’s service life into the 2050s will likely impose additional, as yet unforeseen, sustainment and modernization challenges. The aging airframe has required increased depot-level maintenance to correct, for example, problems related to stress corrosion and cracking on the airframe. Further, industry is no longer able to support some of these aging systems, which have experienced declining performance and failures. According to Air Force officials, it is difficult to find and retain suppliers who will produce the necessary parts for such an old airframe. According to officials at both B-52 wings, a security-clearance backlog limits the number of trained and available B-52 maintainers. Both B-52 wings also have shortages of experienced maintainers. Additionally, the demands of the B-52’s conventional mission create challenges to ensuring that aircraft are available for the nuclear mission. The B-52 has been used in operations against the Islamic State in Syria. According to officials at both B-52 wings, the conventional mission is the day-to-day focus of most B-52 operators and maintainers. These officials said that it is sometimes challenging to shift their collective mindset to focus on the nuclear mission. Further, the B-52 requires different configurations for its conventional and nuclear missions. According to B-52 maintenance officials, the time it takes to change configurations affects how quickly the aircraft can be ready for a nuclear mission. An official from one B-52 operations group expressed concern that if the B-52 continues to be used heavily in conventional operations, it will begin to experience airframe and personnel problems similar to those that have affected the B-1, which has been used extensively in recent conventional bombing operations. B-52 Challenge-Mitigation Efforts The B-52 engine replacement program is expected to allow the engines to be sustained until the 2050s, when the B-52 is expected to retire. In addition, the modern engines being installed will increase the B-52’s range by approximately 30 percent, significantly decrease maintenance costs and downtime, provide the additional electrical power required for follow-on systems, and decrease the B-52’s dependence on refueling tankers for both conventional and nuclear long-range strike sorties, because the aircraft will be able to fly longer without being refueled. The B-52 program office is leading a B-52 Aircraft Availability Improvement Plan, an enterprise-wide effort to increase the number of B-52s available to operational units.
According to officials, the program office is leading an initiative to reduce the number of aircraft at the depot at any given time from 11 to 9, which would increase the availability of aircraft to meet operational requirements. This effort is in the early implementation stages, and the program office has not yet evaluated the results. The B-52 program office mitigates parts obsolescence issues through active vendor management, selection of vendors who use an open systems approach, use of predictive database tools to identify diminishing manufacturing and materiel shortages, and leveraging of industry and government reporting systems that track such shortages. Appendix V: Air-Launched Cruise Missile Is Experiencing Sustainment Challenges as a Result of Age and Attrition Air-Launched Cruise Missile (ALCM) Overview The AGM-86B ALCM is a long-range, self-guided missile with a nuclear warhead that is carried by the B-52H Stratofortress bomber. ALCM complements the B-52 heavy bomber in its strategic mission; its primary missions are strategic attack, interdiction, and suppression of enemy air defenses. It is designed to be carried on the internal B-52 common strategic rotary launcher or externally on pylons located underneath each wing (see fig. 9). The ALCM air vehicle is powered by a low-thrust turbofan engine and flies at subsonic speeds. After release from the carrier aircraft, the ALCM proceeds autonomously to its target. ALCM Challenges ALCM became operational in 1982 and, according to Air Force officials, had an original planned service life of 10 years; the fleet is on average 25 years beyond its planned service life (see fig. 10). Additionally, ALCM has experienced aging issues with multiple subsystems. For example, the officials told us the Bomber Weapons Integration Equipment, pylons, launcher, common support equipment, ALCM-peculiar support equipment, and automated test equipment all have aging and supportability issues that require assessment and corrective action going forward. Air Force officials stated that because of ALCM’s age, diminishing manufacturing sources and material shortage issues occasionally arise that have required requalification of a product line or qualification of a new source. Additionally, they said that ALCM maintenance and analysis trends show that electrical components and bearings are wearing out. According to Air Force officials, the ALCM fleet, made up of approximately 535 missiles in active inventory as of May 2019, is affected by attrition resulting from testing. The ALCM is operationally tested with six force development evaluations and two functional ground tests each year. According to Air Force officials, this testing consumes fleet inventory missiles during live launch and destructive testing, reducing the fleet by eight missiles per year (a simplified projection of this attrition appears in the sketch below). The officials noted that the fleet would be sustainable longer if the decision were made to stop testing. However, this would mean that fewer data—collected during the annual tests—would be available to predict the life of the missile, and the Air Force would lose full confidence that it could execute ALCM’s mission. ALCM Challenge-Mitigation Efforts According to Air Force officials, the ALCM will be sustained through 2030. Service-life extension programs have been implemented to sustain the weapon system, and maintenance is performed every 6 years to exchange the missile’s engine.
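To make the attrition arithmetic above concrete, the following back-of-the-envelope projection applies the eight-missile annual test expenditure to the roughly 535-missile inventory cited for May 2019. This is our illustrative calculation, not an Air Force planning figure; it assumes the test rate stays constant and ignores any other losses or retirements.

```python
def project_inventory(start: int, consumed_per_year: int, years: int) -> list[int]:
    """Project fleet inventory under a constant annual test expenditure."""
    return [start - consumed_per_year * n for n in range(years + 1)]

# Roughly 535 missiles in active inventory as of May 2019, with 8
# consumed per year (6 flight-test launches + 2 destructive ground tests).
for year, count in zip(range(2019, 2031), project_inventory(535, 8, 11)):
    print(year, count)
# By 2030, the planned end of ALCM sustainment, the active inventory
# would fall to about 535 - 8 * 11 = 447 missiles under this simple model.
```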
In order to extend the ALCM’s service life until a replacement system is fielded, service-life extension programs were developed on the basis of surveillance, studies, and analysis that identified numerous components for replacement as a result of aging and obsolescence issues. Officials said these programs address replacement of aged, brittle components; bearings; and circuitry and electronic components within navigation and guidance systems. According to Air Force officials, maintainers are being proactive in identifying parts of the ALCM system that will experience issues in the future. Additionally, continued monitoring through flight tests and aging surveillance programs will enable them to identify new aging issues, which may drive additional service-life extension efforts. To mitigate challenges that arise, the ALCM and Long-Range Stand Off program offices are coordinating on plans to retire ALCMs as Long-Range Stand Off production proceeds through full operational capability and complete deployment. To mitigate challenges with support equipment, supportability trades are being conducted for the launcher and pylon service-life extension, and a gap analysis is being conducted to identify components, processes, and procedures that need to be modified to ensure service life through 2030. According to Air Force officials, maintainers are looking for ways to be proactive in maintaining support equipment and identifying future issues before parts break, as they are doing for the missile itself. The Electronic System Test Set is also encountering aging and supportability issues, which are being addressed through multiyear technical insertion projects managed by the Automatic Test Systems program office. Additionally, predicting new effects of aging on service life grows increasingly challenging as 2030 approaches. Appendix VI: The Navy Plans to Sustain the Ohio-Class Ballistic Missile Submarine until It Is Replaced by the Columbia-Class Ohio-Class Ballistic Missile Submarine (SSBN) Overview The Ohio-class SSBNs constitute the sea-based leg of the strategic triad. Each SSBN is capable of carrying and launching 20 D-5 Trident submarine-launched ballistic missiles, which can deliver multiple nuclear warheads. The first Ohio-class SSBN, the USS Ohio, entered service in 1981. The last Ohio-class SSBN, the USS Louisiana, entered service in 1997. The Navy maintains a fleet of 14 Ohio-class SSBNs. Eight of the SSBNs are deployed in the Pacific Ocean, homeported in Bangor, Washington, and six are deployed in the Atlantic, homeported in Kings Bay, Georgia. According to a DOD Inspector General report, in a 1998 memorandum from the Commander of the Naval Sea Systems Command to the Chief of Naval Operations, the Navy documented its decision to extend the original 30-year service life of the Ohio-class SSBNs to 42 years. The report noted that this decision was supported by a Navy-directed study led by the manufacturer of the Ohio-class, General Dynamics Electric Boat Division, which determined that extending the service life of the Ohio-class SSBNs to 42 years was technically feasible. Subsequently, in a 2017 memorandum from the Commander of the Naval Sea Systems Command to the Program Executive Office for Submarines, the Commander stated that extensions beyond 2042 were not technically feasible. However, Navy officials said that they are beginning to consider options in case the replacement program, the Columbia-class SSBN, is delayed.
As we previously reported, Navy officials noted that the service has never operated a nuclear-powered submarine for as long as 42 years. The Navy plans to replace the 14 Ohio-class SSBNs with 12 Columbia-class SSBNs. The first of the Ohio-class SSBNs is scheduled to be retired from active service in 2027. The remaining Ohio-class SSBNs will be retired at a rate of one per year, with the last one exiting service in 2040 (see fig. 11). According to Navy officials, they do not have a contingency plan in case the Columbia-class SSBN acquisition dates are delayed. However, they said that because 14 Ohio-class SSBNs are being replaced by 12 Columbia-class SSBNs, there is some schedule margin in case the Columbia is delayed; specifically, there will be an estimated 2 years between when the last Columbia-class SSBN is delivered and when the last Ohio-class SSBN is retired. Navy officials also said that they are gathering the necessary data to lay the groundwork now for engineering decisions, 10 years from now, about the feasibility of further sustaining the Ohio-class SSBNs in the event that the Columbia-class is delayed. SSBN Challenges The Navy is experiencing challenges in sustaining the Ohio-class SSBN through its planned 42-year service life. According to Navy officials, because the Ohio class will be in service longer than expected, the Navy is encountering parts that were not originally intended to be replaced but that now need replacement, and there is no industrial base of suppliers to support the replacement of some of these parts. In addition, the overall amount of maintenance required for the SSBNs increases as they age. According to Navy officials, both of these issues contribute to diminishing manufacturing sources and material shortages for the Ohio-class SSBNs. According to May 2019 congressional testimony by the Director of the Navy’s Strategic Systems Programs, the D-5 Trident submarine-launched ballistic missile has also been deployed for longer than its original planned service life. Specifically, it has been deployed for over 25 years, and the Navy now plans to operate the D-5 for over 50 years total. It has undergone service-life extension programs and is operating on new rocket motors. However, according to the Director’s testimony, this will be more than double the historical service life of any previous sea-based strategic deterrent system. Engineered refueling overhauls—major maintenance periods that occur once during an SSBN’s life—have been completed for all except the last two Ohio-class SSBNs to enter service, the USS Wyoming and the USS Louisiana. The USS Wyoming is currently undergoing its overhaul and is scheduled to complete it in July 2020. The USS Louisiana was scheduled to begin its overhaul in September 2019 and to complete it in April 2022. According to Navy officials, in the past, SSBNs completing refueling overhauls have cannibalized parts from SSBNs that are beginning to be overhauled. The final Ohio-class SSBN to undergo an overhaul, the USS Louisiana, will not have that option, because there will be no other SSBNs from which to cannibalize parts. However, these officials noted that they have not encountered any insurmountable issues thus far in planning the Louisiana’s overhaul. The DOD Inspector General reported in June 2018 that the Navy did not have a contingency plan in the event that the Columbia-class is delivered late.
The Navy has a number of efforts under way to reduce risks in both the maintenance of the current Ohio-class SSBN and the acquisition schedule of the Columbia-class SSBN. However, as we reported in December 2017 and again in March 2019, the Columbia-class program faces more schedule risk than its predecessors because its schedule is aggressive and concurrent, driven by the continued and pressing need for the program to meet the Navy’s nuclear deterrent requirements. The first Ohio-class SSBN is scheduled to be retired in 2027, and another is to follow each year until 2040. The first Columbia-class SSBN is scheduled to enter service in fiscal year 2031, and another is to follow each year thereafter. SSBN Challenge-Mitigation Efforts We have previously reported that the Navy also plans to increase investment in its SSBN maintenance facilities, equipment, and workforce to improve the execution of SSBN maintenance. According to Navy officials, they have several strategies to combat diminishing manufacturing sources and material shortages. For example, the Ohio program office has made “life of type” purchases for some parts for which the industrial base cannot meet the demand. In other words, according to program officials, the program office purchases in one contract enough of that part to last for the entire life of the SSBN—a large enough order to make it worth the time and cost for a manufacturer to produce the parts. According to the officials, another solution is to retrofit the pieces being used to build the Columbia-class SSBNs to support the needs of the Ohio-class SSBNs. For example, the Navigation Process Unit was retrofitted from the Columbia for use on the Ohio. This allows the Navy to purchase these components from manufacturers that will already be making them for the Columbia. The Navy has initiated major modernizations of a number of systems on the Ohio to upgrade those systems with new capabilities. According to Navy officials, modernization efforts are being planned for navigation, radio, and electronic communications systems, among others. The Navy has also initiated a program to refurbish and extend the service lives of D-5 Trident submarine-launched ballistic missiles to about 2040. As Columbia-class SSBNs begin to replace Ohio-class SSBNs, refurbished D-5s carried by retiring Ohio-class SSBNs will be transferred to new Columbia-class SSBNs. Columbia-class SSBNs will continue to be armed with these refurbished D-5s until about 2040, at which time the D-5s are to be replaced by a successor submarine-launched ballistic missile. According to Navy officials, maintaining one strategic weapon system configuration during the transition to the Columbia is beneficial from a cost, performance, and risk-reduction standpoint. In 2018, the DOD Office of Inspector General reported that the Secretary of the Navy and the Chief of Naval Operations have formally designated strategic nuclear deterrence as the Navy’s top priority. According to the report, as a result, the Navy has reduced the time required for engineered refueling overhauls of SSBNs, increased workforce size at shipyards, accelerated and improved shipyard workforce training, and improved SSBN maintenance procedures and schedules. However, while the Navy was able to reduce the time required for its last two engineered refueling overhauls, it has not met the 27-month target since 2010.
In addition, according to officials, the Navy has created two working groups—the SSBN/Guided Missile Nuclear Submarine Working Group and the Trident Coordination Group—to monitor and mitigate Ohio-class sustainment and maintenance challenges.

Appendix VII: Comments from the Department of Defense

Appendix VIII: GAO Contact and Staff Acknowledgments

GAO Contact

Joseph W. Kirschbaum, (202) 512-9971 or kirschbaumj@gao.gov

Staff Acknowledgments

In addition to the contact named above, key contributors to this report were Penney Harwell Caramia, Assistant Director; R. Scott Fletcher; Jonathan Gill; Susannah Hawthorne; Brent Helt; Joanne Landesman; Amie Lesser; K. Ryan Lester; Ned Malone; Gabrielle Matuzsan; and Michael Shaughnessy.

Related GAO Products

Nuclear Weapons Sustainment: Fiscal Year 2018 Nuclear Forces Budget Estimates. GAO-19-127R. Washington, D.C.: November 2, 2018.

Defense Nuclear Enterprise: DOD Continues to Address Challenges but Needs to Better Define Roles and Responsibilities and Approaches to Collaboration. GAO-19-29. Washington, D.C.: November 1, 2018.

Defense Nuclear Enterprise: Processes to Monitor Progress on Implementing Recommendations and Managing Risks Could Be Improved. GAO-18-144. Washington, D.C.: October 5, 2017.

Nuclear Weapons: DOD Assessed the Need for Each Leg of the Strategic Triad and Considered Other Reductions to Nuclear Force. GAO-16-740. Washington, D.C.: September 22, 2016.

Defense Nuclear Enterprise: DOD Has Established Processes for Implementing and Tracking Recommendations to Improve Leadership, Morale, and Operations. GAO-16-597R. Washington, D.C.: July 14, 2016.

Nuclear Weapons Council: Enhancing Interagency Collaboration Could Help with Implementation of Expanded Responsibilities. GAO-15-446. Washington, D.C.: May 21, 2015.
Why GAO Did This Study

In 2014, the Secretary of Defense directed two reviews of DOD's nuclear enterprise. These reviews made recommendations to address problems with leadership, organization, investment, morale, policy, and procedures, as well as other shortcomings that adversely affected the nuclear deterrence mission. In 2015, DOD conducted a review focused on NC3 systems, which resulted in additional recommendations to improve NC3. The National Defense Authorization Act for Fiscal Year 2017 includes a provision for GAO to review DOD's processes for addressing these recommendations. This report addresses the extent to which DOD has made progress in (1) the implementation and tracking of the recommendations from the 2014 and 2015 nuclear enterprise reviews and (2) addressing sustainment and maintenance-related challenges and planning for the continued sustainment and maintenance of existing defense nuclear enterprise systems. GAO reviewed documents and interviewed DOD officials. This is a public version of a classified report that GAO issued in October 2019. Information that DOD deemed classified has been omitted.

What GAO Found

The Department of Defense (DOD) continues to make progress in implementing recommendations to improve the nuclear enterprise. These recommendations stemmed from DOD's 2014 internal and independent nuclear enterprise reviews, a U.S. Strategic Command 2014 memorandum, and an internal DOD 2015 report on nuclear command, control, and communications (NC3). Since GAO last reported—in November 2018—an additional five of the 247 sub-recommendations from the 2014 reviews have been closed; 91 remain open. In that time, DOD has also closed two more of the 13 recommendations from the 2015 review; six remain open. However, the key tracking tools DOD uses to provide visibility on the status of the recommendations do not provide current and complete information. For example, for those items that are behind schedule, many of the expected completion dates have not been updated to reflect when the items are now expected to be completed. The current DOD guidance for tracking the recommendations' status does not include a specific requirement to keep the information current in the tracking tools. Until DOD addresses these issues, it will not have a complete and accurate picture of when tasks are expected to be finished, whether progress is being made, whether efforts have stalled, or if there are other challenges. Ensuring that there is current and complete information regarding enduring recommendations would also help inform DOD's effort to monitor the health of the defense nuclear enterprise. DOD and the military services are experiencing challenges related to sustainment and maintenance of nuclear weapon systems and have ongoing and planned initiatives intended to mitigate these challenges. All of the systems we reviewed have been operational since before 1998, making these systems at least 22 years old (see figure). The age of the systems has resulted in maintenance and supply issues. For example, the Ohio-class submarine has experienced the failure of parts that were not originally intended to be replaced. DOD and the services have ongoing and planned efforts to mitigate these challenges, such as improving maintenance processes and sources of supply.

What GAO Recommends

GAO is making two recommendations for DOD to update guidance to require DOD components to keep information on recommendations current and complete.
In written comments on the classified report, DOD concurred with these recommendations.
Background

Effective communication is vital for first responders' ability to respond to emergencies, ensure the safety of both their personnel and the public, and protect public and private property. For example, first responders use public safety communications systems to gather information, coordinate a response, and, if needed, request resources and assistance from neighboring jurisdictions and the federal government. First responders use several types of communications systems, such as LMR systems, commercial wireless services, and the FirstNet network.

LMR systems. These systems are the primary means for first responders to use voice communications to gather and share information while conducting their daily operations and coordinating their emergency response efforts. LMR systems are intended to provide secure, reliable voice communications in a variety of environments, scenarios, and emergencies.

Commercial wireless services. Public safety entities often pay for commercial wireless services to send data transmissions such as location information, images, and video.

FirstNet network. FirstNet is working to establish a nationwide, dedicated broadband network for public safety use that is intended to foster greater interoperability among first responders, support important voice and data transmissions, and meet public safety officials' reliability needs on a priority basis, including call "preemption." FirstNet's network is intended to complement LMR systems with broadband capabilities and does not serve as a substitute for mission-critical voice needs.

Communications systems must work together, or be interoperable, to ensure effective communication. Emergency communications interoperability refers to the ability of first responders and public safety officials to use their radios and other equipment to communicate with each other across agencies and jurisdictions when needed and as authorized. First responders' LMR systems operate by transmitting voice communications through radio waves at specific frequencies and channels within the electromagnetic spectrum. FCC is responsible for allocating spectrum for various purposes and assigning spectrum licenses in a specific area and to a specific entity such as a police department or a telecommunications company. As previously noted, an auction is one mechanism that FCC may use to assign spectrum licenses. According to FCC officials, due to certain restrictions in the Communications Act, FCC has used administrative procedures, not auctions, to assign licenses for public safety and non-commercial educational broadcast stations. Over the years, spectrum for public safety has expanded to new frequency bands, as previously available frequencies became congested and public safety needs for spectrum increased. As we have previously reported, congestion results from growth in the overall number of users and demand for spectrum-dependent technologies and services. Because of the increased demand for spectrum, in 1971 FCC authorized public safety and business-industrial users to share a portion of the T-Band spectrum (470 to 512 megahertz) with television broadcast stations in 11 metropolitan areas. The 11 metropolitan areas, which are identified in figure 1, include almost all the most populous metropolitan areas in the United States. The entire T-Band is not available for public safety and business users in these 11 metropolitan areas to build and operate LMR systems, and the amount of spectrum varies in each area.
FCC rules allow "base station transmitters"—the equipment that emits radio signals to communicate with mobile units—to be located within 50 miles of the geographic center of each metropolitan area, as shown in figure 1. In 2012, as part of the Middle Class Tax Relief and Job Creation Act of 2012 (the Act), FCC was required by statute to reallocate the T-Band spectrum currently used by public safety and commence the process for an auction by February 22, 2021. As part of the reallocation of the T-Band for the 11 metropolitan areas listed above, the proceeds from the required auction are to be made available to NTIA to fund grants that help cover public safety entities' costs of relocating off the band. According to FCC officials, the Act does not address the hundreds of business-industrial users also using the T-Band and does not set aside or identify replacement spectrum for public safety users. DHS officials told us that the Act does not provide a formal role for DHS in the T-Band spectrum auction or relocation of public safety users. While one purpose of spectrum auctions is to recover the public portion of the value of spectrum, FCC officials told us that the Act and its legislative history do not explain the purpose of the T-Band auction and relocation, and we confirmed the absence of legislative history for the auction mandate. According to FCC officials, there are approximately 925 public safety entities with licenses in the T-Band. Each of these entities holds at least one license, but in some cases may hold many licenses. For example, the State of Texas holds one public safety license in the T-Band in the Houston metropolitan area, while the New York City Police Department has 180 licenses in the New York City metropolitan area. The number of licenses held by each entity depends on the demand for the spectrum for LMR systems and the availability of spectrum in other bands allocated for public safety use. FCC estimates that public safety entities have approximately 3,000 stations within the T-Band. Additionally, FCC said that the T-Band also contains approximately 700 business-industrial users that occupy about 1,700 stations.

T-Band Relocation Poses Significant Challenges, Including Uncertainty of Available Spectrum, High Cost, and Interoperability Concerns

Lack of Available Alternative Spectrum in Major Metropolitan Areas

Public safety officials in three of our four selected metropolitan areas—Boston, Los Angeles, and New York City—told us that they have not been able to identify alternative spectrum to relocate from the T-Band, a situation that raises questions about the feasibility of the auction and relocation. For example, all of the officials we interviewed from New York City police, fire, and emergency management departments said there is no spectrum available for them to relocate to. The officials noted that the New York City Police Department is the largest municipal police department in the country and that it relies on the T-Band to dispatch police for 911 calls. Additionally, an official from Pasadena in the Los Angeles metropolitan area said that the spectrum allocated for public safety in the region is already crowded and that officials are unsure of where to relocate their emergency communication operations.
Public safety officials from the Boston, Los Angeles, and New York City metropolitan areas also said that FCC has not provided a plan or identified alternative spectrum for relocation. In 2013, in anticipation of the mandatory T-Band auction, FCC published a notice and solicited public comment to gather information on when, how, and under what circumstances to relocate public safety and business-industrial users of the T-Band. At that time, FCC asked commenters what alternative spectrum bands were potentially available for relocation of the T-Band's public safety users, and whether these users could relocate to other public safety bands including the 700 and 800 MHz bands. In response to FCC's request for comment, NPSTC conducted an analysis and reported in 2013 that the 11 different metropolitan areas would face different likelihoods of relocating to alternative spectrum. NPSTC analyzed FCC data on T-Band licenses to determine the number of public safety licenses that would need to be relocated, and then compared the need for licenses to the available licenses in other spectrum bands that FCC has allocated for public safety use. Based on that analysis, NPSTC concluded the following.

In five of the 11 metropolitan areas, relocating public safety users from the T-Band would not be possible. Specifically, in addition to identifying the three metropolitan areas we discuss above (Boston, Los Angeles, and New York City), NPSTC concluded that at least two other metropolitan areas (Chicago and Philadelphia) lacked sufficient spectrum in any band to relocate public safety's existing T-Band operations.

For the other six metropolitan areas (Pittsburgh, San Francisco, Washington, D.C., Dallas-Fort Worth, Houston, and Miami), NPSTC's analysis found that these areas might have sufficient spectrum to relocate T-Band users, with the 700 MHz narrowband offering the greatest potential. These metropolitan areas have fewer public safety T-Band licensees needing to relocate. Representatives from a trade organization that represents business-industrial users of the T-Band told us that in five of these six metropolitan areas, business-industrial users hold more than half of T-Band licenses. Specifically, the representatives noted that approximately 95 percent of T-Band users in the Houston metropolitan area are business-industrial users and that in the Pittsburgh, Washington, D.C., Dallas-Fort Worth, and Miami metropolitan areas more than 50 percent of the T-Band users are business-industrial users.

Our interviews with selected local officials confirmed that public safety users in Dallas-Fort Worth (our fourth selected metropolitan area) have had success transitioning off the T-Band. Two of the three public safety licensees we talked with told us they had already transitioned off the T-Band and noted that it was unrelated to the required T-Band auction. For example, an official from the City of Dallas, which holds one public safety license in the T-Band, told us that in 2012 the city began replacing existing radios with new radios that did not operate on the T-Band. The official said the city stopped operating on the T-Band in 2013 and relocated operations onto another spectrum band where most of the city's public safety communications operated. Another T-Band public safety licensee from the Dallas-Fort Worth metropolitan area told us that although it has active licenses, it was unaware of the required auction or need to relocate from the T-Band.
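NPSTC's feasibility test, as described above, amounts to comparing the number of licenses that must move against the licenses available in other public safety bands. The sketch below illustrates that comparison with invented placeholder counts; it is not NPSTC's actual license data.

```python
# Schematic illustration of NPSTC's comparison (hypothetical counts only):
# an area is flagged infeasible when licenses needing relocation exceed
# licenses available in other public safety bands.

licenses_needed = {"New York City": 300, "Chicago": 150, "Houston": 40}
licenses_available = {"New York City": 90, "Chicago": 60, "Houston": 75}

for area, needed in licenses_needed.items():
    verdict = "may be feasible" if licenses_available[area] >= needed else "not possible"
    print(f"{area}: relocation {verdict}")
```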
FCC and DHS officials told us the analysis conducted by NPSTC was a good source of information about the potential negative effects of the T-Band auction on public safety users, including numbers related to licensing and potential cost. DHS officials told us that NPSTC has broad expertise in emergency communications, noting that it is a member of two federally supported organizations that promote the interoperability of emergency communications—the Public Safety Advisory Committee and SAFECOM. Additionally, SAFECOM worked with another federally supported emergency communications advisory group—the National Council of Statewide Interoperability Coordinators—to create a publicly available document on the T-Band auction and the potential effects on public safety, and it cited NPSTC's report in the assessment. The document notes that insufficient spectrum alternatives leave few options for identifying replacement spectrum in several major metropolitan areas. Selected representatives from industry groups whose members are business-industrial T-Band users in the 11 T-Band metropolitan areas, such as the American Petroleum Institute and the Utilities Technology Council, also said they anticipate that there would not be alternative spectrum available if required to relocate. For example, representatives with the American Petroleum Institute said that there are staff at major refineries that use the T-Band on a daily basis for all plant operations including emergency response (firefighters and hazardous materials), control room, engineering, and maintenance, and that relocating to new spectrum would be challenging given the lack of available spectrum. These representatives noted that most of the refineries that use the T-Band are located in Houston, but there are also some facilities in the San Francisco, Los Angeles, and Philadelphia metropolitan areas.

Relocation Costs Could Be in the Billions of Dollars

Public safety officials in Boston, Los Angeles, and New York City agreed that relocating LMR operations from one spectrum band to another can be costly, complicated, and time intensive given infrastructure and equipment needs. These officials told us that transitioning from the T-Band requires identifying and acquiring new sites to build towers, purchasing new radios, testing new systems, building other infrastructure, and training personnel on the new systems. NPSTC calculated in its 2013 report that the cost to relocate public safety operations in the 11 metropolitan areas would be approximately $5.9 billion. Its calculation includes the costs for the total estimated number of new towers, cables, antennas, and mobile, portable, and vehicular radios. In 2016, NPSTC updated its analysis, and its second report confirmed that the conclusions from the 2013 report remain valid. According to FCC officials, in early 2019 they analyzed the costs for relocating public safety users from the T-Band and estimated the total cost would be between $5 and $6 billion. Officials from nearly all of the public safety entities we interviewed in the Boston and New York City metropolitan areas cited the NPSTC reports as the best source of publicly available cost calculations for relocating public safety users from the T-Band. Officials from nearly all of the public safety entities we interviewed in Boston, Los Angeles, and New York City told us that estimating relocation costs is and will remain difficult until alternative spectrum is identified.
However, a few selected public safety users provided us with high-level cost estimates for replacing LMR system components. For example, an official in Pasadena said a conservative estimate for those components would be $13 to $14 million, while public safety officials in New York City estimated component costs would be at least $1.8 billion. According to public safety officials in Morris County, New Jersey, and Yonkers, New York, the financial burden may be greater for less populated areas, despite the higher anticipated actual cost for more populated areas. For instance, public safety officials in Morris County, New Jersey, told us they estimated $30 million in relocation costs, which exceeds the county's total annual capital project budgets (approximately $20 million to $25 million). According to public safety users in the Boston, Los Angeles, and New York City metropolitan areas, costs for relocating LMR systems from the T-Band depend on a variety of factors including (1) equipment, (2) infrastructure, and (3) real estate.

1. Equipment. Transitioning to another spectrum band could require public safety users to purchase new equipment such as radios. Some radios can only operate on one spectrum band, so moving to a new band requires purchasing new radios that can operate on that band. Alternatively, users could purchase multi-band radios, which can operate on more than one radio frequency band. According to public safety officials we spoke with, multi-band radios might be the best option since it is not clear which frequencies they will ultimately be relocated to. However, they also noted that multi-band radios are substantially more expensive than single-band radios. For example, officials with the Boston Fire Department told us a regular radio costs approximately $5,000 each while multi-band radios cost up to $8,000. These officials told us that relocating from the T-Band would mean replacing approximately 1,800 radios with multi-band units, meaning that just replacing the Boston Fire Department's handheld and portable radios could cost more than $14 million (see the illustrative arithmetic following this discussion). Additionally, public safety officials in Boston and New York City added that local building codes in those areas require buildings of a certain size to install equipment that amplifies wireless signals throughout a building and improves coverage. These systems help first responders, such as police and firefighters, communicate with each other in large buildings.

2. Infrastructure. Infrastructure costs could include new radio towers and antennas and fiber-optic cable systems. Because different radio frequencies have different characteristics and can cover different distances, depending on which spectrum band public safety users are relocated to, circumstances may require more radio towers and antennas. For example, officials with the Boston Fire Department told us that if space were available and they were to relocate from the T-Band to the 800 MHz public safety band, they would need additional radio towers. Specifically, these officials said their current system consists of 42 receivers and five transmitting sites and estimated that a system in the 800 MHz band would likely require up to 60 receivers and five-to-nine transmit sites. FCC officials told us that based on the characteristics of other spectrum bands allocated to public safety, users may need to build between two and three times as much infrastructure to provide the same coverage. The officials noted this would substantially increase relocation costs.
Additionally, public safety officials in Boston and New York City told us they are able to use the T-Band to communicate in the tunnels beneath each city because of infrastructure investments like the T-Band-specific radiating cables, which allow first responders' radios to work underground. Officials from New York City's police, fire, and emergency-management departments and the mayor's office said that relocating to a new spectrum would require installing a new radiating cable system in hundreds of miles of subway, train, and vehicle tunnels. These officials estimated that replacing the radiating cable infrastructure alone would take at least a decade and cost over $1 billion. Officials added that replacing the infrastructure would involve closing subway lines for extended periods of time as the new cables are installed.

3. Real estate. Costs associated with buying or leasing new real-estate sites for towers and other radio equipment will also affect the cost estimate for public safety users. Officials from Boston, Los Angeles, and New York City told us that because of the characteristics of different spectrum bands, building a replacement system might require additional sites. Additionally, New York City officials told us that identifying locations and negotiating leases for radio towers and spaces for other equipment including radio cabinets would likely be difficult due to the scarcity and high cost of appropriate sites in New York City.

Public safety officials in Boston, Los Angeles, and New York City added that relocating from the T-Band would require building and operating parallel systems to avoid disrupting emergency communications. This project would require some duplication of investments—for example, radio towers, radio cabinets, and antennas, among other equipment and infrastructure—during the transition. For example, officials in New York City's police, fire, and emergency-management departments told us they would need to build a dual system that could require at least twice as much space for equipment. They also noted that the current sites are rent free because of existing arrangements, but they believe that it is unlikely that landlords will provide additional space rent free. These officials told us that even if FCC identified available spectrum for them to relocate to, they would be unable to build and test the systems in the 2-year time frame required by statute. For example, New York City officials estimated buildout and testing could take over a decade, which they indicated would also substantially increase the city's cost. Public safety stakeholders in the Boston, Los Angeles, and New York City metropolitan areas told us that it is difficult to estimate the time needed to build new LMR systems, but estimates ranged from 2 to more than 10 years from the time that alternative spectrum was identified. They noted that these time frames would also depend on the availability of funding and on the complexity of the new systems to be designed, built, and tested. FCC officials also told us that the time and expense of relocating hundreds of licensees at thousands of sites is difficult to predict due to many local factors. For instance, FCC officials cited their ongoing experience relocating public safety licensees within the 800 MHz band, which was originally estimated to take 3 years. However, based on certain factors such as the geographic location and interdependencies of communications systems, this relocation effort remains incomplete after 14 years.
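The radio-replacement figures cited in the equipment discussion above reduce to simple multiplication. The sketch below is only a back-of-the-envelope illustration of those reported numbers, not GAO's or the fire department's cost methodology.

```python
# Back-of-the-envelope check of the Boston Fire Department figures cited
# above (simple multiplication; not an official cost model).

radios = 1_800                 # handheld and portable radios to replace
single_band_cost = 5_000       # dollars per regular radio, per officials
multi_band_cost = 8_000        # dollars per multi-band radio (upper estimate)

print(f"Multi-band replacement:  ${radios * multi_band_cost:,}")    # $14,400,000
print(f"Single-band equivalent:  ${radios * single_band_cost:,}")   # $9,000,000
print(f"Premium for multi-band:  ${radios * (multi_band_cost - single_band_cost):,}")
```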
Potential Difficulties in Maintaining Interoperability and Reliability of Emergency Communications on Alternative Spectrum

Public safety stakeholders we talked to told us that the T-Band is important for the interoperability of public safety equipment and said that maintaining interoperability on alternative spectrum would be a challenge. Boston officials told us interoperability is vital for public safety and the T-Band is the key to their interoperability capabilities. For example, these officials said the LMR systems that allow almost 170 local, county, state, and federal law enforcement agencies to communicate with each other use the T-Band. The officials said this network of LMR systems is the only way for all these entities to communicate on a daily basis and is also used for command and control for crisis response at major events such as the Boston Marathon. These officials credited this system on the T-Band for the successful response to the 2013 Boston Marathon bombing. Officials said the LMR system allowed first responders in neighboring jurisdictions to provide additional communication equipment and personnel during the ensuing manhunt. Similarly, officials from New York City told us the T-Band now provides the foundation for all first responder communications in the area. Officials said the September 11, 2001, terrorist attacks demonstrated the loss of life that can occur when first responders are unable to communicate with each other because there was no system in place to allow police, fire, and emergency medical services to easily communicate. As a result, officials said New York City has spent countless hours and millions of dollars to improve interoperability, and that the interoperable system currently in place is based on the T-Band. In December 2018, we reported that it is vital for first responders—such as police officers and firefighters—to have (1) timely communications; (2) sufficient capacity to handle the communications; and (3) interoperable communications systems that enable first responders to connect with their counterparts in other agencies and jurisdictions, even if their counterparts' systems or equipment vendors differ. As noted previously, public safety users rely on LMR systems as their primary means to gather and share information. For public safety users that rely on the T-Band for interoperable communications and that lack alternative spectrum to build new interoperable systems, losing access to the T-Band would mean public safety officials in multiple large metropolitan areas would be unable to communicate with first responders within their community, neighboring jurisdictions, and the federal government. Public safety officials in Boston, Los Angeles, and New York City told us that the characteristics of the T-Band spectrum are ideal for reliable emergency communications and that moving to another spectrum band may present a challenge to reliability. Since different frequencies of radio waves have different characteristics, jurisdictions typically use the spectrum that is best suited for their particular location. The officials told us that the T-Band's characteristics allow radio signals to penetrate buildings and travel across varied terrain, and require less infrastructure investment, such as radio towers, than other frequency bands assigned for public safety use. Los Angeles County officials cited the characteristics of the T-Band as the primary advantage the current radio system has over other systems operating on other spectrum bands.
They explained that these characteristics make the T-Band more suitable for the county's challenging forested, mountainous, and coastal terrain than similarly equipped radio systems operating in other frequency bands.

FCC Has Taken Limited Actions to Help Facilitate the Mandated Spectrum Auction and Address Relocation Challenges; NTIA Is Awaiting FCC Action before Designing a Grant Program

FCC Has Taken Some Preliminary Steps to Prepare for the Auction but Has Not Taken Additional Action

FCC has taken some preliminary steps to help facilitate the mandated relocation of public safety users from the T-Band, such as imposing a T-Band license freeze, requesting public comments, and creating a fact sheet to notify stakeholders of the spectrum auction and prepare for the auction. In April 2012, FCC froze the processing of applications for new or expanded T-Band radio operations in an effort to avoid adding to the cost and complexity of the mandated public safety relocation. Affected applications included those seeking: (1) new T-Band licenses; (2) modifications to existing licenses by adding or changing frequencies or locations within the T-Band; (3) modifications to existing licenses by changing technical parameters—such as increases in bandwidth, power level, antenna height, or area of operation—in a manner that expands the station's spectral or geographic footprint; and (4) any other modification that could increase the degree to which the 470–512 MHz band is currently licensed. Both public safety and business-industrial users we interviewed expressed concerns about the license freeze and said it has caused some uncertainty and, in limited cases, has affected their ability to maintain existing systems. For example, public safety officials from one department we interviewed in the Boston metropolitan area said the freeze has affected users' ability to replace aging equipment, which has led to poor communications in the area. Additionally, representatives from one business-industrial user told us that Hurricane Harvey destroyed one of its LMR sites and that the entity was having trouble rebuilding a site elsewhere since FCC considers this action a major change and thus subject to the license freeze. FCC staff told us that the public notice announcing the license freeze specifically advised affected parties that they could request a waiver in unusual circumstances where the public interest so warrants, and that no such request appears to have been filed in this instance. In addition, as discussed earlier, FCC sought public comment in February 2013 to gather information and specific proposals for reallocating and auctioning the T-Band. FCC officials said they continue to evaluate auction proposals from these comments. In October 2014, FCC released a report and order making 24 channels in the 700 MHz narrowband, previously held in reserve, available for public safety users. FCC concluded that given the significant increase in demand for 700 MHz narrowband spectrum, particularly in urban areas, these channels should be made available for use. Public safety users of the T-Band were given priority to these new channels if they committed to return an equal amount of T-Band channels and obtained the concurrence of the relevant regional-planning committees. According to NPSTC's 2016 report, these 24 additional channels are beneficial but insufficient to relocate all current users of the T-Band.
The report notes that channel insufficiency is particularly challenging in the five metropolitan areas where T-Band usage is the highest—Boston, Chicago, Los Angeles, New York City, and Philadelphia. Furthermore, one public safety official in the Los Angeles metropolitan area raised concerns about potential radio interference if relocated to another frequency. The official said that because the T-Band is not used by neighboring jurisdictions, the city does not currently have to worry about frequency interference. By contrast, the 700 and 800 MHz bands are currently occupied by public safety in neighboring Riverside and San Diego Counties. This means, according to the official, that building a new system operating in the 700-800 MHz band could potentially introduce interference issues. FCC also created a fact sheet in July 2016 with basic information on the statutory relocation requirement. The T-Band fact sheet states that the relocation shall be completed within 2 years of the auction's completion date; the exact timing of the relocation deadline will depend on when the auction concludes. FCC officials told us the T-Band fact sheet is the only formal T-Band auction guidance that they have provided. However, officials said that they have also met with several licensees to discuss T-Band issues. For example, according to officials, FCC has met with public safety entities from areas such as Los Angeles, Chicago, Boston, and New York City. DHS officials told us that while they have no formal role in the T-Band auction and relocation of public safety users, they provide this fact sheet when they are asked for details about the T-Band auction as a way to help raise awareness about the auction and relocation requirements. Although FCC has made efforts to provide guidance and information to T-Band users regarding the mandated auction, as we discuss earlier in the report, we found that not all T-Band users we interviewed are aware of the upcoming auction or the need to relocate from the T-Band. FCC has not set a timeline for initiating the auction but has stated that it is committed under any scenario to ensure the continuity of T-Band licensees' public safety mission-critical communications. According to FCC officials, as of March 2019, almost all T-Band licensees continue to operate on the T-Band spectrum, and FCC officials cited multiple factors for the limited progress in preparing for the T-Band auction:

FCC has not determined how to address challenges stakeholders identified in response to FCC's 2013 request for public comment, including the lack of available spectrum to relocate and the cost. For example, officials told us that they are taking a wait-and-see approach regarding how many T-Band licensees relocate prior to the auction. However, as noted previously, FCC officials told us their analysis of other spectrum bands shows insufficient spectrum for relocating public safety entities from the T-Band. The officials told us that public safety operates on the T-Band in large metropolitan areas where other public safety spectrum is heavily used and that this is why the T-Band was allocated for LMR in these areas in the first place.

The T-Band auction has raised complicated relocation questions.
For example, select industry groups we spoke to whose members are business-industrial T-Band users expressed concern about the uncertainty of the spectrum auction requirements, since the Act was silent on business-industrial users, but they are constrained by the license freeze from replacing aging equipment. FCC previously told us that it had not determined whether business-industrial users would be required to relocate. However, in April 2019, FCC officials told us that the commission intends to implement the auction following the statute's language. FCC officials stated that the Act does not expressly require it to auction spectrum licensed to business-industrial users, but officials also stated that FCC may decide that it has the authority to auction that spectrum under a different statutory provision. Before conducting the auction, FCC must issue a notice, which includes a public comment period, to determine the auction procedures and requirements. FCC officials told us they have not progressed beyond the preliminary conceptual stages and do not have a precise timeline for the pre-auction process or auction. The officials explained that if business-industrial users relocate, they would face similar relocation challenges to those of public safety users, and the Act does not mention them as eligible for relocation grants. According to FCC officials, licenses for business-industrial users outnumber those of public safety users on the T-Band in some areas.

According to FCC officials and a FirstNet official, public safety users on the T-Band may subscribe to services on FirstNet's nationwide public safety broadband network, which offers some voice functionality. However, officials said the network currently does not accommodate the need of public safety users for mission-critical voice functionality. For example, FCC officials told us that FirstNet's network is not a substitute for mission-critical voice systems operated by public safety licensees in the T-Band because the network does not support such capabilities and because there is no plan or schedule in place for the network to begin offering such services. According to an official at FirstNet, this network is intended to complement LMR systems with broadband capabilities, not replace LMR systems in the near future. In the interim, public safety users electing to use FirstNet's broadband network will need to continue to use LMR networks for their mission-critical voice needs while evaluating whether their future voice needs require continued maintenance of their LMR networks or whether FirstNet broadband services could fulfill their wireless communications requirements.

FCC Officials Said That T-Band Spectrum Has Potentially Low Auction Value; NTIA Is Awaiting FCC Action

The amount of proceeds that may be generated from the T-Band auction—which is, according to FCC, expected to be the sole source of federal funding to help cover the relocation costs incurred by public safety entities—is likely to be less than the total relocation costs. FCC officials told us the T-Band has potentially low value because of limited demand by potential bidders in the auction. For example, FCC officials estimated that revenue for the entire T-Band would not exceed $2 billion. To reach this amount would require public safety and business-industrial users to relocate from the T-Band, which according to FCC estimates could cost between $9 and $10 billion.
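Set against these cost figures, the funding gap is straightforward arithmetic. The sketch below simply subtracts FCC's estimated proceeds ceiling from the estimated relocation cost ranges; the subtraction is ours, not an FCC revenue or cost model.

```python
# Rough illustration of the funding gap implied by the estimates above
# (simple subtraction of ranges; not an FCC model).

proceeds_ceiling = 2.0                      # $ billions, FCC's upper revenue estimate
cost_ranges = {
    "public safety only": (5.0, 6.0),       # $ billions (FCC and NPSTC estimates)
    "all T-Band users":   (9.0, 10.0),      # $ billions, per FCC officials
}

for label, (low, high) in cost_ranges.items():
    print(f"Shortfall, {label}: ${low - proceeds_ceiling:.0f}-${high - proceeds_ceiling:.0f} billion")
```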
As discussed previously, representatives from a trade organization told us that in five of the 11 metropolitan areas where public safety uses the T-Band, business-industrial users hold more than half of T-Band licenses. Because of the high numbers of business-industrial users in the T-Band, there may be less spectrum to auction than perhaps initially contemplated when the Act was passed, which would ultimately affect auction proceeds. If FCC were to decide that it has the authority to auction spectrum utilized by business-industrial users under a different statutory provision, as explained above, proceeds would be higher. As discussed above, NTIA is to make grants to cover the relocation costs of public safety entities in accordance with the Middle Class Tax Relief Act. However, NTIA officials told us that the agency has no dedicated funding to administer such a program and must wait for auction proceeds to establish one. The officials also said that only when the auction concludes will NTIA know the total amount available and how best to disburse those funds for relocating agencies. Thus, designing a grant program, notifying eligible parties of available grants, evaluating applications, and issuing awards must all take place during the statutory 2-year relocation period. If agencies require the funds before they can move to other frequencies, it is unlikely that this migration can meet the 2-year deadline. NTIA officials also stated that until they design the grant program, they do not have any relevant information to provide public safety stakeholders. NTIA officials said they would provide information on the grant program and begin making grants as soon as possible given the statutory requirement for public safety users to relocate within 2 years of the auction's conclusion. According to NTIA officials, because the requirements for NTIA's grant program for public safety relocation costs have not yet been specified, it is unclear what expenses will be covered. As previously discussed, FCC and NPSTC each calculated the cost for relocating public safety users in the 11 metropolitan areas, and each arrived at an estimate between $5 and $6 billion. FCC officials said that because of the high relocation costs and the likely low value of the T-Band spectrum being auctioned, there is a strong likelihood auction proceeds would not cover public safety relocation costs. Although the Act stipulates that auction proceeds shall be made available through grants in such sums necessary to cover costs for the relocation of public safety entities from the T-Band spectrum, FCC officials said the Act did not address what would happen if the auction generated insufficient funds to cover relocation costs. Consequently, public safety stakeholders from Boston, Los Angeles, and New York City expressed concern about moving forward with relocating. These stakeholders identified the uncertainty of what spectrum would ultimately be auctioned as one of the main reasons they were concerned they would be unable to fully cover their relocation costs.

FCC Plans to Proceed with the T-Band Auction Unless There Is a Statutory Change

FCC officials stated that they recognize that the T-Band auction and relocation requirement present challenges for FCC and public safety entities—and potentially business-industrial users—particularly since spectrum for relocating all public safety users is limited or nonexistent.
However, these officials said they will design and conduct the spectrum auction, as required, unless the law is changed. FCC officials told us they provided Congress with information on the challenges associated with the auction, although FCC did not suggest changes to the law in this instance. Specifically, officials told us in March 2019 that they were in the process of briefing key congressional committees on the challenges associated with the T-Band auction, based on FCC analysis. According to this analysis, all T-Band auction scenarios would fail. FCC ran auction scenarios that looked at different options for relocating users and auctioning the T-Band used by public safety. These scenarios included (1) relocating only public safety users; (2) relocating both public safety and business-industrial users; and (3) relocating public safety users while reorganizing business-industrial users within the T-Band. In 2018, bills were introduced in both the House of Representatives and the Senate to repeal the requirement for FCC to reallocate and auction the T-Band. These bills were not enacted and expired at the end of the 115th Congress. However, in January 2019, a bill was introduced—and subsequently referred to a House subcommittee—to repeal the T-Band relocation and auction requirements. As of June 2019, no further action had taken place on the legislation. According to FCC's strategic plan, one of FCC's priorities is to protect public safety and, in particular, take steps to assist and safeguard the communications of our nation's law enforcement officers and first responders. However, auctioning the T-Band spectrum, as FCC has been mandated to do, could hamper its ability to safeguard these communications. As mentioned above, the Act and its legislative history do not discuss the purpose of the T-Band auction. Public safety stakeholders in Boston, Los Angeles, and New York City told us they believe that there may have been an assumption the FirstNet network could absorb public safety users, but at this time the network does not support the mission-critical voice capabilities first responders need. According to stakeholders in the Boston and New York City metropolitan areas, if the provision requiring the auction of public safety users' T-Band spectrum remains in effect and if the auction takes place, they could experience substantial harmful effects on their ability to maintain continuous and effective communications during an emergency. Officials representing seven public safety entities told us they favored Congress' repealing the required T-Band auction for this very reason. For example, public safety officials in New York City said they believe the T-Band auction would severely negatively affect their ability to respond to emergencies and could lead to the loss of lives. In addition, officials with the Boston police department told us the T-Band is the lifeblood of police communications and the only way for almost 170 law enforcement departments in the Boston metropolitan area to communicate with one another on a daily basis and during major events. These officials said that auctioning the T-Band and forcing them to relocate and build a new system over several years would disrupt critical public safety communications and be disastrous.

Conclusions

Since the passage of legislation requiring the relocation of public safety users from, and auction of, the T-Band radio spectrum, the potential consequences of these actions have become far more apparent.
If FCC conducts such an auction, it is unclear that all public safety users in the affected areas will be able to relocate. If alternative spectrum is not available, public safety would be jeopardized in some of the nation's largest metropolitan areas. Even if alternative spectrum can be found, public safety users are likely to bear significant costs associated with relocating and reestablishing interoperability. These costs could go well beyond the revenue produced by such an auction.

Matter for Congressional Consideration

Congress should consider legislation allowing public safety users continued use of the T-Band radio spectrum. (Matter for Consideration 1)

Agency Comments

We provided a draft of this report to the Department of Commerce, DHS, and FCC for review and comment. DHS and FCC provided technical comments, which we incorporated as appropriate. The Department of Commerce indicated that it did not have comments. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Commerce and Homeland Security, and the Chairman of FCC. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II.

Appendix I: List of Interviewees

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, David Sausville (Assistant Director); Aaron Kaminsky (Analyst in Charge); Camilo Flores; Ray Griffith; Delwen Jones; Josh Ormond; Kelly Rubin; and Jessica Walker made key contributions to this report.
Why GAO Did This Study

First responders and others in 11 large metropolitan areas use radio systems operating in the T-Band since spectrum is limited in other bands. In 2012, FCC was required by statute to begin an auction of this T-Band public safety spectrum by February 2021 and to make the proceeds available to the National Telecommunications and Information Administration (NTIA) to develop and administer a grant program to help cover costs associated with relocating public safety users' radio systems. GAO was asked to review issues related to the required T-Band auction. This report examines, among other things: (1) the challenges selected first responders and local governments anticipate facing in relocating public safety communications from the T-Band and (2) the actions FCC has taken both to help facilitate the required T-Band relocation and to address identified challenges. GAO reviewed FCC's March 2019 congressional briefing and analysis on T-Band spectrum and conducted case studies in four cities selected based on the number of public safety licenses in each area, among other things. GAO reviewed relevant statutes and regulations, FCC documents, and T-Band studies conducted by a public safety organization. GAO interviewed FCC officials and other stakeholders, including first responders in case study cities.

What GAO Found

Public safety officials, such as police and firefighters, in 11 metropolitan areas rely on radio systems that use the portion of spectrum known as the T-Band for mission-critical voice communications. Selected stakeholders GAO interviewed, including first responders and officials in three of four areas selected as case studies, anticipate significant challenges in relocating public safety communications from the T-Band. For example, stakeholders in Boston, Los Angeles, and New York said the Federal Communications Commission (FCC) has not identified sufficient alternative spectrum. Additionally, two studies conducted by a public safety organization concluded these three areas and others may also have insufficient alternative spectrum (see figure below). Moreover, a recent FCC analysis showed that relocation options for public safety users are limited or nonexistent. Further, costs for relocating public safety users from the T-Band were calculated by FCC to be $5 billion to $6 billion. Selected stakeholders said relocating their communication systems would require such things as new towers and radios as well as other infrastructure. FCC has taken limited actions to address challenges and assist public safety users of the T-Band with the mandatory relocation. For example, FCC has taken steps to notify stakeholders, but officials told GAO they have not begun planning the auction. FCC officials acknowledged challenges the auction and relocation requirements present. FCC officials explained that public safety entities were licensed to operate on the T-Band in large metropolitan areas because other public safety spectrum was already heavily used. In March 2019, FCC briefed Congress on the auction's challenges and concluded that all T-Band auction scenarios would fail. Nonetheless, FCC officials said the agency will conduct the auction unless the law is amended. While FCC provided information to Congress, it did not suggest changes to law in this instance. Stakeholders in two metropolitan areas said the auction could result in substantial harmful effects on their ability to maintain continuous and effective communications during an emergency.
What GAO Recommends

Congress should consider legislation allowing public safety users continued use of the T-Band spectrum.
Background

Medicare Coverage Options

Beneficiaries have several Medicare options from which to select, which can have important consequences for their out-of-pocket expenses and access to care. These decisions include the following:

What type of coverage? The first coverage decision faced by Medicare beneficiaries is choosing between original Medicare and MA. Original Medicare includes coverage for Medicare Part A services, such as inpatient hospital stays, and for Medicare Part B services, such as outpatient hospital care and physician office visits. Under MA—the private plan alternative to original Medicare—beneficiaries enroll in MA plans that generally must provide coverage for all the services included under original Medicare, and may also offer extra benefits. MA plans generally establish a network of health care providers to provide services to enrolled beneficiaries.

Add prescription drug coverage? Beneficiaries in original Medicare and those in certain MA plans may also choose whether to add prescription drug coverage (Medicare Part D). Prescription drug plans are administered by private insurance companies that contract with CMS. Beneficiaries in original Medicare obtain drug coverage by purchasing a separate prescription drug plan (PDP), while those in MA generally obtain coverage by selecting an MA plan that offers prescription drug benefits. MA prescription drug plans and separate PDPs vary in the amount beneficiaries need to pay and in the drugs that are covered.

Add supplemental coverage? Beneficiaries in original Medicare can also purchase Medicare supplemental insurance—known as Medigap plans—offered by private insurance companies. These plans help pay for Medicare's required cost sharing and some out-of-pocket costs not covered under original Medicare, such as emergency health care during international travel.

Figure 1 illustrates the decisions beneficiaries have to make when selecting their Medicare coverage options.

Medicare Cost and Access Considerations

Two research studies we reviewed indicate that cost is a key consideration for Medicare beneficiaries when selecting Medicare coverage. Beneficiaries may want to know what their likely out-of-pocket costs will be monthly, annually, or both. Beneficiaries may also want to know what their costs may be if they have a change in health status, such as by experiencing an illness. Beneficiaries may be responsible for several specific types of health care costs, including the following:

Premiums—Beneficiaries generally make monthly payments to purchase coverage. Medicare Part A generally does not require beneficiaries to pay a premium. Part B premiums are established by statutory formula and are means-tested so that beneficiaries with higher incomes pay higher premiums. The premiums charged by MA plans and Part D plans are established by each plan and can vary widely. Beneficiaries in original Medicare who opt to purchase Medigap will also pay a monthly premium for coverage, with the amount of the premium varying across the 10 standardized plans and by the different companies offering these plans.

Cost sharing—Beneficiaries are typically responsible for paying a portion of the costs for the services they receive as either a copayment or coinsurance. A copayment is a fixed dollar amount for each doctor visit, medical service, or medication. With coinsurance, a beneficiary pays a percentage of the allowed charge for each health care service or medication.
Deductibles—Beneficiaries must pay out-of-pocket a specified annual amount of expenses before Medicare will begin paying for approved services or medications.

MA plans establish out-of-pocket maximums or set limits on the amount a beneficiary will have to spend in a year. In contrast, original Medicare has no limit on beneficiary out-of-pocket costs. In 2019, two Medigap plans provide maximum out-of-pocket limits, and beneficiaries with these plans do not have to pay costs above the limits. The same two research studies identified access to particular health care providers as another key consideration for beneficiaries when selecting Medicare coverage. Beneficiaries in original Medicare may see any doctor or use any facility that accepts Medicare payment, and referrals are not needed to see specialists. In contrast, MA beneficiaries must typically use the MA plan's network of health care providers, including doctors, hospitals, and outpatient facilities, and referrals are generally needed to see specialists. Further, beneficiaries in MA plans that allow access to out-of-network providers may be required to pay more when receiving services from such providers. MA provider networks can change during the year and from year to year.

Medicare Plan Finder (MPF)

According to CMS officials, MPF was launched in 1998 in response to the Balanced Budget Act of 1997, which required the Department of Health and Human Services—the agency responsible for overseeing CMS—to maintain MA plan information on the internet, among other things. According to CMS, MPF is a primary CMS resource for beneficiaries to compare costs and coverage of different Medicare health and prescription drug coverage options in their area, including comparing original Medicare to MA plans, and Part D plans. As illustrated in figure 2, the MPF landing page—the first web page users see when accessing MPF—includes a section where beneficiaries start the process of searching for and comparing coverage options (see A in fig. 2), and a section providing links to additional decision support tools for beneficiaries (see B in fig. 2). Beneficiaries begin searching in MPF by entering their zip code and following a 4-step process that moves them through different MPF website pages.

Step 1—Basic search: Beneficiaries provide responses to requested information, including identifying whether they have Medicare coverage and whether they would like to add prescription drug coverage to their search.

Step 2—Enter drugs: Beneficiaries may add a list of prescription drugs, along with the dosage and dosing frequency, to identify which plans cover these drugs and the cost sharing amount under each plan.

Step 3—Select pharmacies: Beneficiaries select up to two pharmacies that they prefer for obtaining their medications.

Step 4—Refine plan results: Beneficiaries see a list of available coverage options—original Medicare, MA plans, and separate PDPs—based on the zip code they entered. Beneficiaries can filter these search results by variables such as monthly premium or deductible amounts, and then they can sort those results by variables such as lowest estimated annual costs or lowest plan deductible. Beneficiaries can then select up to three choices, view specific coverage and cost details for each, and do a detailed side-by-side comparison of each. The plan results page shows this comparison and includes beneficiaries' estimated annual out-of-pocket costs for each coverage option they choose to review.
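To make the filter, sort, and annual-cost steps concrete, the sketch below models them with hypothetical plan data. It is a simplified illustration of the kind of logic described above, not CMS's actual MPF implementation; the plan names, dollar figures, and field names are invented.

```python
# Simplified illustration (hypothetical data; not CMS's MPF code) of Step 4:
# estimate each plan's annual cost, filter by premium, and sort by the total.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    monthly_premium: float
    deductible: float
    expected_drug_copays: float   # annualized, from the entered drug list

    def estimated_annual_cost(self) -> float:
        return self.monthly_premium * 12 + self.deductible + self.expected_drug_copays

plans = [
    Plan("Plan A", 35.00, 445.00, 600.00),
    Plan("Plan B", 20.00, 250.00, 900.00),
    Plan("Plan C", 0.00, 480.00, 1_100.00),
]

# Filter by a monthly premium cap, then sort by lowest estimated annual cost.
affordable = [p for p in plans if p.monthly_premium <= 30.00]
for p in sorted(affordable, key=Plan.estimated_annual_cost):
    print(f"{p.name}: ${p.estimated_annual_cost():,.2f} estimated annual cost")
```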
The additional decision support tools available on the MPF landing page that beneficiaries may use to help select their Medicare health and drug coverage include the following:
1. Help with Your Medicare Choices, which uses filtering questions to help new beneficiaries understand their Medicare coverage choices;
2. Estimate Medicare Costs, which helps beneficiaries compare the average estimated costs of original Medicare options, such as original Medicare with a prescription drug plan and a Medigap plan, to the costs of MA with prescription drug coverage; and
3. Find and Compare Medigap Policies, which helps beneficiaries find information on the different standardized Medigap plans offered by zip code.
Since its inception, MPF has undergone many modifications as new parts were added to the Medicare program, such as the addition of Medicare Part D. According to CMS officials, the agency has also taken steps to make additional changes to improve the website, including technology updates to improve system stability and performance, such as page load times and error rates. In addition, CMS seeks feedback from stakeholders, such as the customer service representatives at the 1-800-MEDICARE help line, SHIP personnel, and others, which, according to agency officials, has resulted in additional changes. Changes have included allowing beneficiaries to log into their Medicare account to access some of their existing data, such as their prescription drugs, and the addition of a help feature that can connect beneficiaries to 1-800-MEDICARE customer service representatives for live help.
Stakeholders and Research Indicated Medicare Plan Finder Is Difficult to Use and Provides Incomplete Information, and CMS Is Redesigning It to Make Improvements
Challenges Navigating and Understanding Information in Medicare Plan Finder Make It Difficult for Beneficiaries to Use, According to Stakeholders and Research
Stakeholders, research studies, and SHIP directors responding to our survey generally indicated that MPF is difficult for beneficiaries to navigate and understand. All 13 stakeholder groups we interviewed reported that MPF is challenging for Medicare beneficiaries to use. Specifically, most stakeholders cited difficulty navigating as beneficiaries click through multiple complex pages in order to find and compare coverage options. For example, two stakeholders noted that beneficiaries must answer questions about their current Medicare health and drug coverage and then go through a series of pages and steps before they can view detailed information on their coverage options. One of these stakeholders also told us that MPF navigation is cumbersome because users cannot jump directly to certain pages or sections that address their needs, such as viewing the availability of preferred pharmacies. One of the stakeholders we interviewed also noted that the lack of prominent instructions on how to use MPF contributed to difficulties navigating the four steps. Finally, in our interviews, two stakeholders also noted that navigation is difficult because beneficiaries are uncertain of the information needed to make different comparisons or identify specific plans. For example, the ability to filter and sort plan information does not appear until later in the plan search process, when users are refining plan results. This makes it hard for users to narrow options specific to their needs because they first must go through all the options presented.
Specifically, beneficiaries will first see a list of plans available in their zip code—on average, 24 plans—and then must narrow down that list before they can compare up to three selected plans. A 2018 report issued jointly by two advocacy groups cited difficulties locating the filter and sort functions in MPF, which contributed to navigation problems. CMS user testing conducted on MPF found that, overall, beneficiaries are confused about how to find an MA plan on MPF. For example, this testing showed that some users had difficulties with the steps for refining plan results because they overlooked or ignored the filters. A 2017 CMS study noted that MPF navigation is difficult and that the website is better suited for specialist users who assist beneficiaries in determining their coverage options, such as 1-800-MEDICARE customer service representatives and SHIP counselors. Further, CMS officials said the study found that beneficiaries would benefit if navigation through the site were more tailored to the tasks they were undertaking. Our survey of SHIP directors, who provide assistance to Medicare beneficiaries and therefore are familiar with MPF usability, also found that it is difficult for beneficiaries to navigate and find information. Specifically, 73 percent (29 of 40) of the SHIP directors who responded to our survey reported that it is difficult or very difficult for beneficiaries to find information in MPF. While SHIP directors reported that it is easier for SHIP counselors to find information, they noted that some counselors also experience difficulty. Eighteen percent (7 of 40) of SHIP directors reported that it is difficult for SHIP counselors to find information in MPF. (See fig. 3.)
In addition to website navigation, it is also difficult for beneficiaries to understand the information in MPF, according to stakeholders, research studies, and SHIP directors responding to our survey. All seven beneficiary advocacy groups interviewed reported that beneficiaries find it challenging to understand information in MPF. For example, some stakeholders noted that beneficiaries do not always understand terminology, such as the differences between cost sharing, copayment, and out-of-pocket costs. Most stakeholders also noted that beneficiaries struggle to understand cost estimates and interpret how much they will have to pay. CMS user testing of MPF in 2018 found that beneficiaries were overwhelmed by the number and complexity of options from which they had to choose. According to a 2018 research study conducted by two advocacy groups, the website explains health coverage terminology poorly and does not use plain language. As a result, users with low health insurance literacy may not understand, for example, the cost differences between generic and brand-name drugs. Sixty-five percent (26 of 40) of the SHIP directors we surveyed reported that the information in MPF is difficult or very difficult for beneficiaries to understand, while 23 percent (9 of 40) reported that it is difficult for SHIP counselors to understand the information (see fig. 4). SHIP directors identified health coverage terminology as a challenge, with 38 percent (15 of 40) reporting that MPF does a poor or very poor job explaining health coverage terminology, such as non-network providers, drug formularies, and drug tiers, to beneficiaries.
According to Stakeholders, Medicare Plan Finder Provides Incomplete Information on Costs and Coverage, Making It Difficult to Compare Medicare Options
MPF provides incomplete estimates of beneficiaries’ costs under original Medicare, making it difficult to compare coverage options, according to stakeholders and SHIP directors responding to our survey. The cost estimates on the plan results pages are incomplete because they do not include the effect of Medigap—which helps cover beneficiaries’ cost sharing responsibilities under original Medicare. As a result, beneficiaries who want to use MPF to compare original Medicare with a Medigap plan to specific MA plans are unable to do so. Most—four of seven—beneficiary advocacy groups that we interviewed noted that beneficiaries must leave MPF to obtain information about Medigap plans, such as the specific benefits covered under those plans and their estimated costs. Six of seven beneficiary advocacy groups that we interviewed noted that MPF’s incomplete information on estimated beneficiary costs is a concern because beneficiaries need this information for understanding and comparing their Medicare options. CMS’s other coverage decision support tools—Help with Your Medicare Choices and Estimate Medicare Costs—provide general information intended to help beneficiaries understand and compare their Medicare options. However, these tools are separate links; their information is not included on the plan results pages in MPF. The SHIP directors we surveyed also noted lack of information as a concern, with 75 percent (30 of 40) reporting that the lack of Medigap information in MPF limits the ability of beneficiaries to compare original Medicare and MA plans. Further, SHIP directors surveyed reported more general concerns with MPF’s cost estimates, with 80 percent (32 of 40) reporting that improvements are needed to better estimate total annual beneficiary costs, and 63 percent (25 of 40) reporting that MPF does a poor or very poor job comparing the costs of original Medicare to MA.
Stakeholders and SHIP directors responding to our survey reported that MPF also provides incomplete information on MA plan provider networks. According to a CMS-sponsored study, determining if specific providers are in an MA plan provider network is a key factor for beneficiaries when making coverage decisions, and beneficiaries stated in user testing that they must have this information. However, to obtain information on the providers in specific MA plans, MPF users must exit the website and go to the individual plan websites. Most stakeholders—10 of 13—cited the lack of information on provider networks as a shortcoming for beneficiaries in using MPF to select a plan, with one group stating that MPF users may need to call individual plans to determine if providers are in a plan’s network. SHIP directors also cited this issue as a problem, as 85 percent (34 of 40) who responded to our survey reported that the lack of a provider directory limits MPF as a resource for beneficiaries to compare MA plans. Without provider information, beneficiaries are not able to use MPF to narrow their options to MA plans that include desired providers or make comparisons among these plans.
CMS Is Redesigning MPF in an Effort to Improve Its Usability and the Completeness of Cost Information
According to CMS officials, the agency is redesigning MPF to make it more usable for beneficiaries and is planning to release the redesigned MPF in early August 2019.
With the redesign, CMS plans to improve the navigation of MPF by providing more prominent explanations on how to use MPF; reducing the steps users must take to get to more detailed coverage information; configuring MPF so users can more easily switch between different topics inside MPF, such as switching between MA plan information and Part D plan information; and improving the filter and sort functions so users can narrow down their coverage options more quickly. CMS also plans to make information easier to understand by simplifying and reducing the volume of information on the pages and revising frequently misunderstood terms with more user-friendly language.
As part of the redesign, CMS is also taking steps to provide more complete cost information in MPF to help beneficiaries compare coverage options, according to agency officials. CMS plans to add more information to the redesigned MPF to help beneficiaries understand their coverage options and decide whether original Medicare or MA is right for them. CMS officials also told us in June 2019 that the redesigned MPF will allow beneficiaries to do estimated cost comparisons of MA to all their original Medicare options, such as original Medicare with a Medigap plan and a prescription drug plan. Officials also told us that CMS is incorporating the functionality of the additional decision support tools currently available on the MPF landing page—Help with Your Medicare Choices and Estimate Medicare Costs—into the redesigned MPF to help beneficiaries understand their coverage options and compare their estimated costs across these options. In June 2019, CMS officials stated these additional tools will also continue to appear as separate links on the MPF landing page. CMS officials also told us that they are currently examining how to integrate MA plan provider information, but this is not part of the redesigned MPF being released in August 2019. The officials said they are working with the plans to develop requirements to help support the integration of provider directories into future versions of MPF.
According to CMS, the redesign of MPF is not finalized, and CMS will continue to evaluate the extent to which the changes will make MPF easier for beneficiaries to use and whether it provides complete information for making coverage decisions. As of June 2019, CMS officials told us they are continuing to gather feedback from stakeholders, such as 1-800-MEDICARE customer service representatives and SHIP personnel, and to conduct user testing on a redesigned MPF model. CMS then plans to publicly launch the redesigned MPF to a subset of users in early August 2019. Once launched, CMS plans to incorporate feedback from this subset of users to confirm the core features that will be released in the redesigned MPF prior to the Medicare open enrollment period starting October 15, 2019. According to CMS officials, the development of the redesigned MPF is an incremental process that will involve continuous changes based on feedback and user testing. According to the agency, CMS will know more about how well the redesigned MPF addresses user needs after it is used by beneficiaries.
Agency Comments
We provided a draft of this report to the Department of Health and Human Services for review and comment. The Department of Health and Human Services provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I.
Appendix I: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Tim Bushfield, Assistant Director; Maggie G. Holihan, Analyst-in-Charge; Sylvia Diaz Jones; Anne Hopewell; Dennis A. Antonio; and Dan Ries made key contributions to this report. Also contributing were Cathy Hamann, Krister Friday, Ethiene Salgado-Rodriguez, Julie Flowers, and Jennifer Rudisill.
Why GAO Did This Study
Medicare beneficiaries—more than 60 million as of 2019—have a series of decisions to make when selecting their Medicare health and prescription drug coverage. Beneficiaries must first choose between two main options for their Medicare coverage: either original fee-for-service Medicare or MA. Within these two options, beneficiaries have many additional choices, and they are permitted to change their coverage at least annually. These selections can be difficult due to the Medicare program's complexity and can have important implications for beneficiaries' out-of-pocket costs and access to providers. According to CMS, the MPF website is intended to help beneficiaries make informed decisions regarding their health care and prescription drug coverage. However, some stakeholders have raised concerns that beneficiaries experience challenges using MPF to compare their Medicare coverage options.
GAO was asked to review MPF. This report examines what is known about the usability of MPF and the completeness of its information. GAO reviewed research and CMS documentation on MPF, and surveyed 51 directors of SHIP offices that have counselors who assist beneficiaries with Medicare decisions. Forty SHIP directors completed the survey, resulting in a 78 percent response rate. GAO also interviewed CMS officials and officials with 13 stakeholder groups, including seven beneficiary advocacy groups. GAO provided a draft of this report to the Department of Health and Human Services. The department provided technical comments, which GAO incorporated as appropriate.
What GAO Found
The Medicare Plan Finder (MPF) website—a primary resource for comparing Medicare coverage options—is difficult for beneficiaries to use and provides incomplete information, according to stakeholders and research studies. These sources and directors of State Health Insurance Assistance Programs (SHIP) GAO surveyed—who assist beneficiaries with their Medicare coverage choices—reported that beneficiaries struggle with using MPF because it can be difficult to find information on the website and the information can be hard to understand. For example, MPF requires navigation through multiple pages before displaying plan details, lacks prominent instructions to help beneficiaries find information, and contains complex terms that make it difficult for beneficiaries to understand information. In response to GAO's survey, 73 percent of SHIP directors reported that beneficiaries experience difficulty finding information in MPF, while 18 percent reported that SHIP counselors experience difficulty.
Stakeholders and SHIP directors reported that MPF provides incomplete estimates of costs under original Medicare, making it difficult to compare original Medicare and Medicare Advantage (MA), the program's private health plan alternative. Specifically, MPF's plan results pages do not integrate information on Medigap plans. (These plans help cover some of beneficiaries' out-of-pocket costs.) Seventy-five percent of the SHIP directors surveyed reported that the lack of Medigap information in MPF limits the ability of beneficiaries to compare original Medicare to MA. The Centers for Medicare & Medicaid Services (CMS)—the agency that administers MPF—is aware of the difficulties beneficiaries face using MPF and is planning to launch a redesigned website in August 2019.
According to CMS, redesigning MPF involves multiple iterations of changes and ongoing user testing, and CMS will know more about how well the redesigned MPF addresses user needs after it is used by beneficiaries.
Background
Resellers maintain large, sophisticated databases with consumer information that can include credit histories, insurance claims, criminal records, employment histories, incomes, ethnicities, purchase histories, and interests. As shown in figure 1, resellers largely obtain their information from public records, publicly available information (such as directories and newspapers), and nonpublic information (such as from retail loyalty cards, warranty registrations, contests, and web browsing). Consumer information can be derived from mobile networks, devices (including smartphones and tablets), operating systems, and applications. Resellers also may obtain personal information from the profile or public information areas of websites, including social media sites, or from information on blogs or discussion forums. Depending on the context, information from these sources may be publicly available or nonpublic.
In 1973, a U.S. government advisory committee first proposed the Fair Information Practice Principles for protecting the privacy and security of personal information. While these principles are not legal requirements, they provide a framework for balancing privacy with other interests. In 2013, the Organisation for Economic Co-operation and Development (OECD) developed a revised version of the principles (see table 1). The Fair Information Practice Principles served as the basis for the Privacy Act of 1974—which governs the collection, maintenance, use, and dissemination of personal information by federal agencies. The principles also were the basis for many Federal Trade Commission (FTC) and Department of Commerce privacy recommendations and for a framework for consumer data privacy the White House issued in 2012.
Several Laws Apply in Specific Circumstances to Consumer Data That Resellers Hold
As we reported in 2013 and as continues to be the case, no overarching federal privacy law governs the collection, use, and sale of personal information among private-sector companies, including information resellers. There are also no federal laws designed specifically to address all the products sold and information maintained by information resellers. Federal laws addressing privacy issues in the private sector are generally narrowly tailored to specific purposes, situations, types of information, or sectors or entities—such as data related to financial transactions, personal health, and eligibility for credit. These laws include provisions that limit the disclosure of certain types of information to a third party without an individual’s consent, or prohibit certain types of data collection. The primary laws include the following:
Fair Credit Reporting Act (FCRA). FCRA protects the security and confidentiality of personal information collected or used to help make decisions about individuals’ eligibility for credit, insurance, or employment. It applies to consumer reporting agencies that provide consumer reports. Accordingly, FCRA applies to the three nationwide consumer reporting agencies (commonly called credit bureaus) and to any other information resellers that resell consumer reports for use by others. FCRA limits resellers’ use and distribution of personal data—for example, by allowing consumers to opt out of allowing consumer reporting agencies to share their personal information with third parties for prescreened marketing offers.
Gramm-Leach-Bliley Act (GLBA).
GLBA protects nonpublic personal information that individuals provide to financial institutions or that such institutions maintain. GLBA sharing and disclosure restrictions apply to financial institutions or entities that receive nonpublic personal information from such institutions. For example, a third party that receives nonpublic personal information from a financial institution to process consumers’ account transactions may not use the information or resell it for marketing purposes.
Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA establishes a set of national standards to protect certain health information. The HIPAA privacy rule governs the use and disclosure of an individual’s health information for purposes including marketing. With some exceptions, the rule requires an individual’s written authorization before a covered entity—a health care provider that transmits health information electronically in connection with covered transactions, health care clearinghouse, or health plan—may use or disclose the information for marketing. The rule does not directly restrict the use, disclosure, or resale of protected health information by resellers or others not considered covered entities under the rule.
Children’s Online Privacy Protection Act of 1998 (COPPA). COPPA and its implementing regulations apply to the collection of information—such as name, email, or location—that would allow someone to identify or contact a child under 13. Covered website and online service operators must obtain verifiable parental consent before collecting such information. COPPA may not directly affect information resellers, but the covered entities are potential sources of information for resellers.
Electronic Communications Privacy Act of 1986 (ECPA). ECPA prohibits the interception and disclosure of electronic communications by third parties unless an exception applies (such as one party to the communication consenting to disclosure). For example, the act would prevent an internet service provider from selling the content of its customers’ emails to a reseller for marketing purposes, unless the customers had consented to disclosure. However, ECPA provides more limited protection for information considered to be “non-content,” such as a customer’s name and address.
Federal Trade Commission Act (FTC Act), Section 5. The FTC Act prohibits unfair or deceptive acts or practices in or affecting commerce. Although the act does not explicitly grant FTC the specific authority to protect privacy, FTC has interpreted it to apply to deceptions or violations of written privacy policies. For example, if a retailer’s written privacy policy stated customers’ personal information would not be shared with resellers and the retailer later sold information to such parties, FTC could bring an enforcement action against the retailer for unfair and deceptive practices.
Some states also have enacted laws designed to regulate resellers’ sharing of personal information about consumers. For example, in 2018, Vermont passed a law that contains, among other requirements, consumer protection provisions related to data brokers. Among other things, the law requires data brokers to register annually and prohibits the acquisition and use of brokered personal information through certain means and for certain uses.
Gaps Exist in the Consumer Privacy Framework
The scope of consumer privacy protections provided under federal law has remained narrow in relation to (1) individuals’ ability to access, control, and correct their personal data; (2) collection methods and sources and types of consumer information collected; (3) new technologies; and (4) some regulatory authorities. The examples in the following sections are drawn from our earlier reports and remain pertinent today.
Federal Law Provides Individuals Limited Ability to Access, Control, and Correct Their Personal Data
In our 2013 report, we found that no federal statute that we examined generally requires resellers to allow individuals to review personal information (intended for marketing purposes), control its use, or correct it. The Fair Information Practice Principles state that individuals should be able to know about and consent to the collection of their information and have the right to access the information, request correction, and challenge the denial of those rights. We also reported in 2013 that no federal statute provides consumers the right to learn what information is held about them and who holds it for marketing or look-up purposes. FCRA provides individuals with certain access rights, but only when information is used for credit eligibility purposes. And GLBA’s provisions allowing consumers to opt out of having their personal information shared with third parties apply only in specific circumstances. Otherwise, under federal law, individuals generally cannot require that their personal information not be collected, used, and shared. Also, no federal law we examined provides correction rights (the ability to have resellers and others correct or delete inaccurate, incomplete, or unverifiable information) for marketing or look-up purposes.
Laws Largely Do Not Address Data Collection Methods, Sources, and Types
Our 2013 report also found that federal privacy laws are limited in addressing the methods by which, or the sources from which, resellers collect and aggregate personal information, or the types of information collected for marketing or look-up purposes. The Fair Information Practice Principles state that personal information should be relevant, limited to the purpose for which it was collected, and collected with the individual’s knowledge or consent. Federal laws generally do not govern the methods resellers may use to collect personal information. For instance, resellers, advertisers, and others use software to search the web for information about individuals and extract and download bulk information from websites with consumer information. Resellers or retailers also may collect information indirectly (by combining information from transactions). Current federal law generally allows resellers to collect personal information from sources such as warranty registration cards and surveys and from online sources such as discussion boards, social media sites, blogs, and web browsing histories and searches. Current federal law generally does not require disclosure to consumers when their information is collected from these sources. The federal laws that address the types of consumer information that can be collected and shared are not comprehensive. Under most circumstances, information that many people may consider very personal or sensitive can be collected, shared, and used for marketing.
This can include information about physical and mental health, income and assets, political affiliations, and sexual habits and orientation. For health information, HIPAA rule provisions generally apply only to covered entities, such as health care providers.
Privacy Framework Largely Has Not Kept Pace with Changes in Technology
The current privacy framework does not fully address new technologies such as facial recognition technology, privacy issues raised by online tracking and mobile devices, and activities by financial technology firms. The original enactment of several federal privacy laws predates these trends and technologies. But in some instances existing laws have been interpreted to apply to new technologies. For example, FTC has taken enforcement actions under COPPA and revised the statute’s implementing regulations to account for smartphones and mobile applications.
Facial Recognition Technology
One example of how privacy law has not kept pace with changes in technology is the use of facial recognition technology, which involves the collection of facial images and may be employed in a wide range of commercial applications. In our 2015 report we concluded that the future trajectory of this technology raised questions about consumer privacy. We found that federal law does not expressly address the circumstances under which commercial entities can use facial recognition technology to identify or track individuals, or when consumer knowledge or consent should be required for the technology’s use. Furthermore, in most contexts federal law does not address how personal data derived from the technology may be used or shared. The privacy issues stakeholders raised about facial recognition technology and other biometric technologies in use at the time of our 2015 report served as yet another example of the need to adapt federal privacy law to reflect new technologies. As such, we reiterated our 2013 recommendation that Congress strengthen the current consumer privacy framework to reflect the effects of changes in technology and the marketplace.
Activities by Financial Technology Firms
The rise of financial services provided by nonfinancial firms—often referred to as fintech—is another example of how new technology may create privacy concerns. For example, fintech lenders offer a variety of loans such as consumer and small business loans and operate almost exclusively online. In our 2018 report, we noted that while these lenders may still assess borrowers’ creditworthiness with credit scores, they also may analyze large amounts of additional or alternative sources of data to determine creditworthiness. We also found that some fintech firms may collect more consumer data than traditional lenders. For example, fintech lenders may have sensitive information such as consumers’ educational background or utility payment information, and according to certain stakeholders, these data may contain errors that cannot be disputed by consumers under FCRA. Furthermore, some data aggregators may hold consumer data without disclosing what rights consumers have to delete the data or prevent the data from being shared with other parties. A leak of these or other data held by fintech firms may expose characteristics that people view as sensitive. GLBA generally requires fintech firms and traditional financial institutions to safeguard nonpublic personal information about customers.
Our 2018 report discussed that some fintech firms use new technologies or mobile device features to mitigate data privacy risks and that some regulators have issued guidance to consumers publicizing practices that help maintain privacy when using online products and services, including those provided by fintech firms. Regulators also have issued GLBA guidance to businesses, including fintech firms, recommending that they adopt policies and procedures to prevent, detect, and address privacy threats.
Internet Privacy Issues
Online tracking. In our 2013 report, we found that no federal privacy law explicitly addresses the full range of practices to track or collect data from consumers’ online activity. Cookies allow website operators to recall information such as user name and address, credit card number, and purchases in a shopping cart. Resellers can match information in cookies and their databases to augment consumer profiles. Third parties also can synchronize their cookie files with resellers’ files. Advertisers can use third-party cookies—placed on a computer by a domain other than the site being visited—to track visits to the websites on which they advertise. While current federal law does not, with some exceptions, explicitly address web tracking, FTC has taken enforcement actions related to web tracking under its authority to enforce the prohibition on unfair or deceptive acts. For example, in 2012, FTC settled charges with Google for $22.5 million after alleging that Google violated an earlier privacy settlement with FTC when it misrepresented to users of Apple’s Safari web browser that it would not track and serve targeted advertisements to Safari users. Google agreed to disable its advertising tracking cookies.
Mobile devices. In 2013, we also explained that no federal law comprehensively governs applications software for mobile devices. Application developers, mobile carriers, advertisers, and others may collect an individual’s information through services provided on a mobile device. However, FTC has taken enforcement action against companies for use of mobile applications that violate COPPA and FCRA. The agency also has taken action under the FTC Act. We and others have reported that the capability of mobile devices to provide consumers’ locations engenders privacy risks, particularly if companies use or share location data without consumers’ knowledge. ECPA might not apply if location data were not deemed content and would not govern entities that are not covered by ECPA. But FTC could pursue enforcement action if a company’s collection or use of the information violated COPPA.
More recently, in January of this year, we issued a report on internet privacy that reinforces what we reported in 2013. To varying extents, internet content providers and internet service providers collect, use, and share information from their customers to enable their services, support advertising, and for other purposes. Consumers access such services through mobile phones and tablets, computers, and other internet-connected devices. However, there is no comprehensive federal privacy statute with specific standards. FTC has been addressing internet privacy through its unfair and deceptive practices authority, among other statutes, and other agencies have been addressing this issue using industry-specific statutes. We concluded that recent developments regarding internet privacy suggest that this is an appropriate time for Congress to consider comprehensive internet privacy legislation.
To address such privacy concerns, states and other countries have adopted privacy rules. For example, the European Union’s General Data Protection Regulation, which came into force in May 2018, is a set of privacy rules that give consumers control over the collection, use, and sharing of their personal information, and California passed its own privacy law in June 2018 that becomes effective in 2020.
Regulatory Authorities under Current Law May Be Limited
In February of this year, we reported that FTC does not have civil penalty authority for initial violations of GLBA’s privacy and safeguarding requirements, which, unlike FCRA, include a provision directing federal regulators and FTC to establish standards for financial institutions to protect against any anticipated threats or hazards to the security of customer records. To obtain monetary redress for these violations, FTC must identify affected consumers and any monetary harm they may have experienced. However, harm resulting from privacy and security violations (such as a data breach) can be difficult to measure and can occur years in the future, making it difficult to trace a particular harm to a specific breach. As a result, FTC lacks a practical enforcement tool for imposing civil money penalties that could help to deter companies from violating data security provisions of GLBA and its implementing regulations. We recommended that Congress consider giving FTC civil penalty authority to enforce GLBA’s safeguarding provisions.
Additionally, in our January 2019 report, we found that FTC had not yet issued regulations for internet privacy other than those protecting financial privacy and the internet privacy of children, which were required by law. FTC uses its statutory authority under the FTC Act to protect consumers from unfair and deceptive trade practices. For FTC Act violations, FTC may promulgate regulations but is required to use procedures that differ from traditional notice-and-comment processes and that FTC staff said add time and complexity. In addition, under this authority, FTC can generally only levy civil money penalties after a company has violated an FTC final consent order. In our recommendation that Congress consider developing comprehensive internet privacy legislation, we also suggested that such legislation consider providing rulemaking and civil money penalty authorities to the proper agency or agencies.
In summary, new technologies have vastly changed the amount of personal information private companies collect and how they use it. But our current privacy framework does not fully address these changes. Laws protecting privacy interests are tailored to specific sectors and uses. And consumers have little control over how their information is collected, used, and shared with third parties for marketing purposes. As a result, current privacy law is not always aligned with the Fair Information Practice Principles, which the Department of Commerce and others have said should serve as the foundation for commercial data privacy. Thus, the privacy framework warrants reconsideration by Congress in relation to consumer interests, new technologies, and other issues.
Chairman Crapo, Ranking Member Brown, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have.
GAO Contacts
For further information on this statement, please contact Alicia Puente Cackley at 202-512-8678 or cackleya@gao.gov.
Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact above, Jason Bromberg (Assistant Director), William R. Chatlos, Rachel DeMarcus, Kay Kuhlman (Assistant Director), Christine McGinty (Analyst in Charge), Barbara Roesmann, and Tyler Spunaugle contributed to this statement. Other staff who made key contributions to the reports cited in the testimony are identified in the source products. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Information resellers—companies that collect and resell information on individuals—have dramatically increased the collection and sharing of personal data in recent years, raising privacy concerns. Increasing use of social media, mobile applications, and other technologies has intensified these concerns. This statement is primarily based on findings from GAO's 2013 report on information resellers (GAO-13-663). It also discusses a 2015 report on facial recognition technology (GAO-15-621), a 2018 report on financial technology (GAO-18-254), and two 2019 reports on internet privacy and consumer data protection (GAO-19-52 and GAO-19-196, respectively). GAO discusses (1) existing federal laws related to the privacy of consumer information held by information resellers and (2) any gaps in this legal framework. For the prior work, GAO analyzed relevant laws, regulations, and enforcement actions and interviewed representatives of federal agencies, trade associations, consumer and privacy groups, and resellers.
What GAO Found
In recent years, GAO issued reports that relate to information resellers and consumer privacy issues. Two central findings from a 2013 GAO report remain current:
No overarching federal privacy law governs the collection and sale of personal information among private-sector companies, including information resellers (data brokers). Instead, a variety of laws are tailored to specific purposes, situations, or entities. For example, the Fair Credit Reporting Act limits use and distribution of personal information collected or used to help determine eligibility for such things as credit or employment. Other laws apply to health care providers, financial institutions, or to online collection of information about children.
Gaps exist in the federal privacy framework. With regard to data that private-sector entities use for marketing, no federal statute provides consumers the right to learn what information is held about them and who holds it. In many cases, consumers also do not have the legal right to control the collection or sharing with third parties of sensitive personal information (such as their shopping habits and health interests) for marketing purposes.
In 2013 and in 2015, GAO also reported that the statutory framework for consumer privacy did not fully address new technologies—such as online tracking and facial recognition—and the vastly increased marketplace for personal information, including the proliferation of information sharing among third parties. In two 2019 reports, GAO found additional gaps in the federal privacy framework and potential limitations in regulatory authority under current privacy law. Internet content providers and internet service providers collect, use, and share information from customers to enable their services, support advertising, and for other purposes. Although the Federal Trade Commission (FTC) generally has addressed internet privacy through its unfair and deceptive practices authority, and other agencies have used industry-specific statutes, there is no comprehensive federal privacy statute with specific internet privacy standards for the private sector. GAO also reported that the Gramm-Leach-Bliley Act, a key law governing the security of consumer information, does not provide FTC with civil penalty authority for violations of the privacy and data security provisions of the act.
New and more advanced technologies and changes in the marketplace for consumer information have vastly increased the amount and nature of personal information collected and the number of parties using or sharing it. Such changes warrant reconsideration of how well the current privacy framework protects personal information.
What GAO Recommends
In 2013, GAO recommended that Congress consider strengthening the consumer privacy framework to reflect the effects of changing technologies and markets. In 2019, GAO recommended that Congress consider comprehensive internet privacy legislation. Legislation on these issues has not been enacted to date.
Background
At the federal level, CMS, within the Department of Health and Human Services, is responsible for overseeing the design and operation of states’ Medicaid programs, and states administer their respective Medicaid programs’ day-to-day operations. As a comprehensive health benefit program for vulnerable populations, each state Medicaid program, by law, must cover certain categories of individuals and provide a broad array of benefits. Within these requirements, however, states have significant flexibility to design and implement their programs, resulting in more than 50 distinct state-based programs. These variations in design have implications for program eligibility and services offered, as well as for how expenditures are reported and services delivered.
Medicaid Service Delivery
In administering their own programs, states may provide Medicaid services under a fee-for-service delivery model or a managed care service delivery model. Under a fee-for-service model, states make payments directly to providers for services provided, and the federal government generally matches state expenditures for such services on the basis of a statutory formula. Under a managed care model, states pay MCOs a capitation payment, which is a fixed periodic payment per beneficiary enrolled in an MCO—typically, per member per month. MCOs pay health care providers for the services delivered to enrollees. In contrast, ACOs are organizations of health care providers and suppliers that come together voluntarily to provide coordinated care to patients with the goal of reducing spending while improving quality. States vary in terms of the types of managed care arrangements used, the populations enrolled, and the parts of the state covered by managed care.
Service Utilization and Expenditures
We previously reported that a small share of beneficiaries in each state collectively accounted for a disproportionately large share of total Medicaid expenditures. We found that in fiscal years 2009 through 2011, the most expensive 5 percent of Medicaid beneficiaries consistently accounted for almost half of the expenditures for all Medicaid beneficiaries. (See fig. 1.) Examining beneficiaries who were enrolled only in Medicaid, we also found that the most expensive 5 percent of beneficiaries were much more likely to have certain conditions—such as asthma, diabetes, and behavioral health conditions—than all other beneficiaries enrolled only in Medicaid. Examining 2009 data, we found that about 65 percent of the total expenditures for high-expenditure beneficiaries enrolled only in Medicaid were for hospital services and long-term services and supports, with the remaining 35 percent of expenditures for drugs, payments to managed care organizations and premium assistance, and non-hospital acute care. Other studies have also found similar patterns of service utilization and expenditures within the Medicaid population. For example, a January 2018 report noted that while beneficiaries who are dually eligible for Medicare and Medicaid constituted about 15 percent of Medicaid beneficiaries in 2013, they accounted for nearly one-third of Medicaid spending. A study examining data on children’s use of behavioral health services in Medicaid found that in 2005, about 10 percent of children in Medicaid received behavioral health services, but those services accounted for about 38 percent of spending on the overall Medicaid child population.
Care Management
Care management programs can be used as efforts to manage the cost and quality of health care services delivered to high-expenditure Medicaid populations, with the aim of improving outcomes and achieving cost savings. Generally, care management programs seek to assist consumers in managing physical and mental health conditions more effectively, for example, by assessing patient needs and coordinating care across different providers. The general goal of care management is to achieve an optimal level of wellness and improve coordination of care while providing cost-effective, non-duplicative services. Specific definitions for care management and other related terms, such as care coordination, case management, and disease management, vary. For the purpose of this report, we use care management to refer to these activities unless otherwise specified.
Selected States Identified or Predicted High-Expenditure Medicaid Beneficiaries Using Statistics and Other Approaches
Risk Scores
Officials from most state agencies, MCOs, and the ACO said they used risk scores to identify or predict high-expenditure beneficiaries. Officials from four of the seven selected states, four MCOs, and the ACO said they used software or hired vendors who computed beneficiaries’ risk scores based on Medicaid service utilization data. Washington state officials said that in addition to Medicaid service utilization data, they used utilization data from Medicare Parts A, B, and D to compute risk scores for their dual-eligible population. Officials also discussed using the risk scores they computed in different ways. For example, Washington officials said they considered beneficiaries with a risk score of 1.5 or greater to be high expenditure, and they used that risk score as one of the eligibility criteria that must be met to receive certain care management services. In contrast, officials from an MCO in Nevada said they considered risk scores alongside other contextual information, such as the recent diagnosis of a chronic condition, to predict whether the beneficiary would likely generate high expenditures in the future and should be assigned care management services. Officials from three states, an MCO in South Carolina, and the ACO we interviewed said their software or vendors identified or predicted high-expenditure beneficiaries by using the risk scores they computed to stratify beneficiaries into risk tiers, such as low, medium, and high risk.
Statistical Outliers
Officials from South Carolina’s state Medicaid agency and two MCOs from Pennsylvania and Washington said they identified high-expenditure beneficiaries by examining service utilization data to identify statistical outliers or trends. Officials from the two MCOs said they looked for statistical outliers for various types of service utilization, such as emergency department visits, inpatient stays, and pharmacy use. Officials from South Carolina said they built internal software tools to help them easily examine service utilization for various subsets of beneficiaries and services. These officials said they looked for beneficiaries whose utilization appeared to be significantly higher or lower compared with other beneficiaries with similar characteristics, such as among children with Type 1 diabetes or among children in foster care.
The officials also said that after they identified those outliers, they examined the reasons for those beneficiaries’ utilization patterns to better understand why those beneficiaries were outliers and to take corrective action if appropriate. The officials explained that they did not simply focus on a discrete list of beneficiaries with the highest overall expenditures, because many of those beneficiaries have medical needs that are inherently expensive and cannot be meaningfully improved through intervention.
Diagnoses
Officials from three of the seven state Medicaid agencies and four MCOs said they identified high-expenditure beneficiaries based on diagnoses or other group categorization. Officials commonly said they used chronic conditions, such as end-stage renal disease, the human immunodeficiency virus or acquired immune deficiency syndrome, chronic obstructive pulmonary disease, diabetes, or Hepatitis C. Pennsylvania officials said their list was developed based on clinical experience. Officials from South Carolina said their list of diagnoses was based on a review of conditions associated with high expenditures.
Service Utilization and Claims Expenditure Thresholds
Officials from two state Medicaid agencies—Indiana and Nevada—and all five MCOs said they identified high-expenditure beneficiaries as beneficiaries who exceed certain service utilization or claims expenditure thresholds. Indiana officials said they used service utilization thresholds, such as visiting the emergency room six or more times in the past 6 months. Nevada officials said one of their programs identified high-expenditure beneficiaries as those whose treatment costs exceeded $100,000 over a 12-month period. Officials from the five MCOs offered varying thresholds, such as claims exceeding $100,000 over a 6-month period; claims exceeding $40,000 during a state fiscal year; or stays in a neonatal intensive care unit exceeding 15 days.
Clinical Judgment
Officials from two state Medicaid agencies—Nevada and Pennsylvania—four MCOs, and the ACO said they relied on clinical judgment to decide whether a beneficiary was likely to be high expenditure. Officials from one MCO in Washington said the MCO conducted health assessments of new members to obtain a baseline understanding of their clinical states, which were then used to stratify beneficiaries and identify appropriate staff to address their needs. Similarly, officials from Pennsylvania and three MCOs said clinical reviews of beneficiaries’ needs or histories were triggered by providers, caregivers, or self-referrals for care management or other services. Officials from the ACO said that while risk scores made initial predictions about beneficiaries’ risk for generating high expenditures, those predictions could be overridden by clinical judgment.
Selected States Used Care Management and Other Strategies to Manage Costs for High-Expenditure Medicaid Beneficiaries
Officials from all seven selected states, all five MCOs, and the ACO we interviewed said they used care management to manage the costs and quality of care for high-expenditure Medicaid beneficiaries. In addition, some states used other strategies, such as strategies involving coverage policies, payment incentives, and restrictions on the number of providers certain beneficiaries could use. Across states that evaluated these efforts to manage costs and quality of care, results were mixed.
All Selected States Used Care Management to Manage Costs for High-Expenditure Medicaid Beneficiaries
Officials from all of the seven state Medicaid agencies we interviewed reported that they provided care management for high-expenditure beneficiaries in their fee-for-service delivery systems, for example, by assessing patient needs and coordinating care across providers, in an attempt to manage costs and ensure quality care. Further, the six selected states with MCOs or ACOs required these organizations to provide care management to high-expenditure beneficiaries enrolled in managed care. Officials also reported barriers to their efforts to provide care management.
Care Management in Fee-for-Service Medicaid
Officials from all of the seven state Medicaid agencies we interviewed reported that they provided care management for high-expenditure beneficiaries in their fee-for-service delivery system to manage the cost and quality of their care. The organization and scope of the care management programs they described varied in some cases. For example:
Pennsylvania provided care management for beneficiaries in fee-for-service through the state’s “intensive case management” unit, a unit of providers that contact beneficiaries by phone to ensure that they get the care they need. Care management is provided to newly enrolled Medicaid beneficiaries who are identified as high-expenditure until the beneficiary selects a managed care plan, typically within 30 days, and to certain other beneficiaries. State officials said that of the approximately 150,000 beneficiaries in fee-for-service, they provide care management to about 1,000 each month.
Nevada implemented mandatory care management services for high-expenditure fee-for-service beneficiaries in rural areas of the state through a contract with a care management organization, which was paid to reach out to high-expenditure beneficiaries, assess their needs, and connect them with their medical providers. The organization delivered care management through regional care teams geographically located in beneficiaries’ communities, which coordinated with the beneficiaries’ providers to implement personalized care plans and manage follow-up appointments and services. High-expenditure beneficiaries were assigned to one of eight care management programs based on the beneficiary’s qualifying condition, such as whether they had cancer, chronic kidney disease, or a mental health diagnosis.
South Dakota implemented a health home program in 2013, which paid local primary care clinics, community mental health centers, and Indian Health Service facilities to provide care management to high-expenditure Medicaid beneficiaries. Each clinic or center had a care coordinator who reached out to high-expenditure beneficiaries to initiate care management and connect them with their primary care providers. These beneficiaries were placed in one of four categories indicating the level of care coordination they needed based on the severity of their illness and risk of future costs. The program helped beneficiaries create a care plan, set goals to address their particular care needs, and manage their conditions. In state fiscal year 2018, around 5,800 recipients received services through more than 100 health home clinics in South Dakota.
Washington State also implemented a health home program in 2013, in which care management activities were coordinated through “lead” entities, such as Area Agencies on Aging and other community-based organizations.
These entities established networks of care coordination organizations representing primary care, mental health, long-term care, chemical dependency providers, and specialty providers. The lead entities conducted outreach to high-expenditure beneficiaries to connect them with a care manager, who might be a nurse, physician assistant, social worker, behavioral health professional, or chemical dependency professional.

Care Management in Managed Care

Medicaid officials in states with MCOs and ACOs said that they required these organizations to provide care management to high-expenditure beneficiaries to manage the cost and quality of their care. Examples of states' care management requirements included steps such as beneficiary and provider outreach, conducting screenings or health assessments, and developing care plans (see sidebar). Some requirements specified the minimum frequency for conducting outreach and what information and data must be reported to the state regarding care management activities (see sidebar).

(Sidebar: Examples of state care management requirements. Beneficiaries with excessive utilization or under-utilization for conditions other than the diseases specified in the contract must also be eligible for disease management services; beneficiaries are categorized for different levels of care coordination. (Indiana Medicaid))

Officials from one MCO said the organization did not follow a single standard model of care management for high-risk beneficiaries; rather, each clinical department in the MCO—for example, Obstetrics or Cardiology—established specific plans for care management within its area of care. Care managers in these departments—nurses or social workers—were responsible for coordinating with a beneficiary's primary care provider to ensure that the beneficiary was appropriately referred to specialists. Care managers could contact beneficiaries by phone, but they were also based in the community, such as at hospitals and state mental health clinics.

Officials from the ACO in Vermont said that the ACO paid providers that were part of their network—such as primary care offices, home health agencies, and mental health agencies—to serve as beneficiaries' care managers. Beneficiaries selected one provider to be their "lead care coordinator," based on the provider with whom they had the strongest relationship and trust, and this provider received enhanced payments from the ACO to support coordination with other providers in the beneficiary's care team. Care team members communicated with each other through a software tool provided by the ACO, which maintained updated information on beneficiaries' conditions and the care received.

Barriers to Care Management for High-Expenditure Beneficiaries

Officials we spoke to from the selected states, MCOs, and the ACO identified barriers to implementing care management for some high-expenditure Medicaid beneficiaries, including the inability to contact beneficiaries, the lack of social supports (part of what is referred to as "social determinants of health"), and shortages of providers or care management staff in rural areas.

Difficulties contacting beneficiaries. The lack of valid contact information can result from missing or outdated information, transiency and homelessness, and beneficiary reliance on cell phones with limited minutes. Officials described efforts they had taken to address this barrier, including asking pharmacies to confirm and update contact information when beneficiaries pick up prescriptions; using e-mail, which officials stated is more consistent than physical addresses; and conducting direct outreach in emergency rooms.
(Sidebar: Addressing Social Determinants of Health. Officials at most of the selected states, managed care organizations (MCO), and the accountable care organization said they took steps to help beneficiaries address social determinants of health, for example, by gathering data to identify which beneficiaries needed help with social supports, helping beneficiaries obtain transportation to medical appointments, assisting beneficiaries in accessing social services, providing short-term housing, and meeting other needs. For example, officials from one MCO described a beneficiary with diabetes who, despite consistently filling his prescription and adhering to his care plan, regularly visited the emergency department in insulin shock. Through outreach they discovered that the beneficiary could not appropriately store his prescribed insulin, which needed to be refrigerated, because his home did not have electricity or a refrigerator. The MCO identified resources in the community to provide a refrigerator and restore electricity.)

Social determinants of health. The effectiveness of care management in addressing the health needs of high-expenditure beneficiaries can be hindered by the lack of social supports. Officials said that in order to help beneficiaries manage their medical needs, care managers sometimes needed to address these social determinants of health, such as lack of transportation to medical appointments, lack of stable housing, and inconsistent access to food and other basic resources (see sidebar). At the same time, states and MCOs can face challenges in addressing social determinants of health, such as a lack of data on these determinants and limited understanding of their effect on health care utilization (information that, if available, could help bolster program investments in those areas).

Staff shortages in rural areas. Efforts to provide care management and medical services can be hindered by staff shortages in rural areas. Officials with one state Medicaid agency's health home program said there was a shortage of individuals in rural areas willing to provide care management to high-expenditure beneficiaries. MCO officials in another state said their ability to care for beneficiaries in rural areas was also affected by a shortage of care managers.

In Addition to Care Management, Some Selected States Used Other Strategies to Manage High-Expenditure Beneficiaries

In addition to care management, four selected states—South Carolina, Nevada, Pennsylvania, and Indiana—reported using other strategies to manage the cost and care for high-expenditure Medicaid beneficiaries, including coverage policy changes, payment incentives, and restrictions on the use of providers.

Coverage policy changes. South Carolina Medicaid officials said that in certain cases they reviewed their coverage policy to see if changes could reduce costs and improve health outcomes for high-expenditure beneficiaries. For example, according to officials, the state had a small number of high-expenditure beneficiaries with Type 1 diabetes whom officials thought could benefit from continuous glucose monitoring, which was not covered by their state Medicaid program. The officials said that they wrote a proposal into their state budget and drafted state plan amendment language to address this, though they noted that the proposal had not been implemented as of January 2019.

Payment incentives.
Medicaid officials in Nevada and Pennsylvania described efforts to use payment incentives to manage costs for high-expenditure beneficiaries. Nevada officials told us that the state's arrangement with its care management organization for high-expenditure beneficiaries included payment incentives related to reductions in cost, as well as performance on certain quality measures, such as immunization rates and treatment of specific conditions, including asthma, coronary artery disease, and heart failure. However, state officials said that they faced difficulties measuring these outcomes. The care management organization did not receive incentive payments for the first year of operation of the program (2014-2015), and state officials said they did not have results on incentive payments for subsequent years.

Pennsylvania officials told us that in response to the high cost of drugs to treat Hepatitis C, Pennsylvania's Medicaid agency created a risk-sharing arrangement with MCOs that had high-expenditure beneficiaries with Hepatitis C. According to state officials, the MCOs were required to submit their enrollees' Hepatitis C test scores to show whether beneficiaries were obtaining treatment and experiencing improvement. The state then allocated additional funds to MCOs that demonstrated positive quality outcomes, thus saving the cost of re-treating beneficiaries who failed to follow through on treatment. The Pennsylvania officials also told us that the state provided payment incentives to MCOs in its Integrated Care Plan Program, in which physical health and behavioral health MCOs coordinate with each other in the care of high-expenditure beneficiaries with persistent serious mental illness, such as schizophrenia, depression, or psychosis. To qualify for incentive payments, these MCOs had to create an integrated care plan for each beneficiary with a qualifying condition. The state's Medicaid agency identified outcome measures to which MCOs were held accountable in calendar year 2018, related to emergency department utilization, inpatient admissions, inpatient readmissions, prescription medication adherence, and engagement in treatment for substance use disorders. As metrics improved, MCOs became eligible for incentives. According to state officials, Pennsylvania allocated $10 million for Integrated Care Plan Program incentive payments for calendar year 2018.

Restrictions on the use of providers. Indiana Medicaid officials described their program to address over-utilization of services by certain high-expenditure beneficiaries who may be engaged in doctor or pharmacy shopping—a strategy of using multiple providers that results in over-utilization or improper utilization of prescription drugs or other services. According to the officials, if other efforts to address a beneficiary's over-utilization fail over a 2- to 4-month period, the beneficiary may be enrolled in Indiana's Right Choices Program. This program restricts, or "locks in," the beneficiary to a single physician, pharmacy, and hospital. Officials said that this program has helped to ensure that the provider is aware of the beneficiary's history and has proven effective in getting beneficiaries to change their behavior. In addition to using the program for Medicaid beneficiaries enrolled in fee-for-service, MCOs are provided with a report of their beneficiaries who have high utilization levels so that the MCO can determine if any of these beneficiaries should be enrolled in the program.
Across Selected States That Assessed the Effect of Their Strategies on Medicaid Expenditures and Other Outcomes, Results Were Mixed

While some of the selected state Medicaid agencies reported that their efforts to manage costs and care for high-expenditure beneficiaries showed positive results, officials in other states reported mixed or inconclusive findings. Medicaid officials in four states—Pennsylvania, South Dakota, Vermont, and Washington—said their assessment of efforts to manage costs and care for high-expenditure beneficiaries showed positive results, such as cost savings or reductions in the use of expensive services.

Pennsylvania Medicaid officials said that their Integrated Care Plan Program for high-expenditure beneficiaries with persistent serious mental illness resulted in improvements in utilization, including reductions in inpatient hospitalizations and readmissions.

South Dakota Medicaid officials found that for 2017, health home participants cost $204 less per month than the comparison group, and experienced an 8 percent decline in emergency room visits from the prior year compared with a 10 percent increase in emergency room visits for the comparison group. The state estimated that $7.7 million in costs were avoided.

Vermont Medicaid officials analyzed utilization of high-expenditure beneficiaries in care management before and after they enrolled. The state reported in 2018 that the rate of inpatient visits per thousand beneficiaries decreased from 600 to 393, and the annual rate of emergency visits per thousand beneficiaries decreased from 1,536 to 1,003 (declines of roughly 35 percent in each case; see the illustrative calculation below).

An independent evaluation of a demonstration program for dually eligible beneficiaries in Washington that incorporated its Health Homes program found $107 million in Medicare cost savings over its first 42 months. Under the state's Financial Alignment Initiative, a portion of those savings went to the state Medicaid program.

In contrast with the results reported by the four states, officials from the Indiana and Nevada Medicaid agencies reported mixed or inconclusive findings related to the impact on cost or quality of their programs for high-expenditure Medicaid beneficiaries. Officials with Indiana's Medicaid agency told us that an assessment of the Right Choices Program found relatively low cost savings generally, with the exception of pharmacy costs, where the program curbed excessive drug use among beneficiaries with substance use disorders and led to cost savings. Nevada Medicaid officials said that their fee-for-service care management organization appeared to achieve some cost savings, but had little effect on quality of care after the program was implemented in 2013. They also said that it was difficult to determine the true effect of the program, because the state implemented several other cost savings policies at the same time as the care management organization. Nevada let the program expire in 2018 and is researching other potential ways to manage high-expenditure beneficiaries in the state's fee-for-service program.

CMS Offered Optional Tools and Technical Assistance That Could Be Used to Identify or Better Manage High-Expenditure Medicaid Beneficiaries

CMS offered optional tools, as well as technical assistance and other educational resources, that state Medicaid agencies used to identify or better manage high-expenditure beneficiaries.
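The roughly 35 percent declines noted parenthetically in the Vermont discussion above are our illustrative arithmetic from the rates the state reported, not figures the state published. A minimal sketch of the calculation:

```python
# Vermont's reported before/after utilization rates (per 1,000 beneficiaries).
inpatient_before, inpatient_after = 600, 393
emergency_before, emergency_after = 1536, 1003

def pct_reduction(before, after):
    """Percent decline from the 'before' rate to the 'after' rate."""
    return round(100 * (before - after) / before, 1)

print(pct_reduction(inpatient_before, inpatient_after))   # 34.5
print(pct_reduction(emergency_before, emergency_after))   # 34.7
```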
CMS Offered Tools That Could Help States Manage High-Expenditure Medicaid Beneficiaries

CMS's optional tools included the Health Home State Plan Option and the Financial Alignment Initiative, though these are not specifically designed for the purpose of identifying and managing high-expenditure beneficiaries. Medicaid officials in two selected states said that these programs improved their efforts to manage care for their high-expenditure beneficiaries.

Health Home State Plan Option. The Medicaid Health Home State Plan Option, authorized under the Patient Protection and Affordable Care Act, allowed states to design health home programs to provide comprehensive care coordination for Medicaid beneficiaries with chronic conditions. CMS officials we spoke with said that states that chose the option received access to resources, including planning funds and technical assistance from CMS. For example, CMS issued a brief illustrating how states could focus their health home programs on high-expenditure beneficiaries. CMS officials noted that they supported 23 states' and the District of Columbia's health home programs. Among the state officials we interviewed, South Dakota Medicaid officials said that when they were establishing their health home program, CMS was helpful in connecting them with other states that had created similar programs so that they could learn from other states' experiences. South Dakota Medicaid officials stated they would like CMS to continue to bring health home program managers from several states together to discuss their successes, challenges, and innovations. Nevada Medicaid officials stated they were considering establishing a health home program.

Financial Alignment Initiative. For the Financial Alignment Initiative, CMS oversaw efforts by states to implement improvements in Medicaid service delivery aimed at achieving savings for both Medicare and Medicaid; one of the states we spoke with used the initiative to target high-expenditure beneficiaries. As noted earlier, Washington established its Health Homes demonstration program for dually eligible beneficiaries in association with the Financial Alignment Initiative. Washington targeted the demonstration to high-cost, high-risk Medicare-Medicaid beneficiaries based on the principle that focusing intensive care coordination on beneficiaries with the greatest need provided the greatest potential for improved health outcomes and cost savings. Washington's Financial Alignment Initiative demonstration was approved through 2020, and Washington officials stated they hoped to obtain an extension, because the demonstration had yielded cost savings for both Medicaid and Medicare. A feature of the Financial Alignment Initiative is that any cost savings achieved by the program are split between the state Medicaid program and Medicare.

CMS Provided Technical Assistance and Educational Resources to Help States Identify and Manage Care for High-Expenditure Medicaid Beneficiaries

CMS also offered state Medicaid agencies access to several resources that, while not designed specifically to target high-expenditure beneficiaries, have been used to support states in identifying or better managing care for this population. These resources included the Medicaid Innovation Accelerator Program, the State Data Resource Center, and the Medicare-Medicaid Data Integration Initiative.

Medicaid Innovation Accelerator Program.
The Medicaid Innovation Accelerator Program is funded by the Center for Medicare and Medicaid Innovation and run by the Center for Medicaid and CHIP Services, both within CMS. The goals of the program were to improve care for Medicaid beneficiaries and reduce costs by supporting states in their ongoing payment and delivery system reforms through targeted technical support. The program offered participating state Medicaid agencies targeted technical support in building their data analytic capacity as they designed and implemented delivery system reforms for high-expenditure beneficiaries, one of the program's focus areas. The program worked with five states on issues such as identifying and stratifying beneficiaries with complex care needs and high costs, designing effective care management strategies, and incorporating social determinants of health into program design activities. In addition to working directly with five states, the program also offered a national webinar series under the broader topic of Medicaid Beneficiaries with Complex Care Needs and High Costs. The webinar series covered a variety of topics, including a webinar titled "Identification and Stratification of Medicaid Beneficiaries with Complex Care Needs and High Costs," which provided information about different approaches to targeting and assessing the needs of this population. Vermont Medicaid officials we spoke with said it would be helpful to have more information about how social determinants of health affect beneficiaries' ability to manage their own care. CMS hosted other webinars on various technical support and data analytics topics for states. Among the state Medicaid officials we interviewed, Nevada officials mentioned participating in the Innovation Accelerator Program.

State Data Resource Center. State Medicaid agencies have traditionally been hampered in managing the Medicaid portion of care for dually eligible beneficiaries, because they lacked data on the Medicare services these beneficiaries receive, such as hospitalizations, physician visits, prescription drugs, and skilled nursing facility stays. To address this challenge, CMS established the State Data Resource Center to facilitate state access to and use of Medicare data on dually eligible beneficiaries. Through the program, states had access to technical advisors when working with CMS Medicare data, which, CMS officials told us, has allowed states to better predict and identify high-expenditure dually eligible Medicaid beneficiaries. The officials said the State Data Resource Center provided states with learning opportunities through webinars and monthly "Medicare Data Workgroup" calls, during which states shared their data use experiences. CMS officials and CMS contractors we spoke with said 29 states have received Medicare data, including all 10 states that participated in the Financial Alignment Initiative, though not all had projects specifically linked to high-expenditure Medicaid beneficiaries. CMS officials said all states had some contact with the State Data Resource Center, whether through data inquiries or participation in webinars.

Medicare-Medicaid Data Integration Initiative. The Medicare-Medicaid Coordination Office and the Center for Medicaid and CHIP Services' Medicaid Innovation Accelerator Program, both within CMS, jointly sponsored the Medicare-Medicaid Data Integration Initiative.
The initiative assisted states with integrating Medicare and Medicaid data in order to enhance care coordination and reduce costs for the dually eligible population, which may have included high-expenditure Medicaid beneficiaries. CMS officials we spoke with said the Medicare-Medicaid Data Integration Initiative had assisted 10 states—five participating in the Financial Alignment Initiative (Colorado, Minnesota, Ohio, Rhode Island, and Virginia) and five participating in the Medicaid Innovation Accelerator Program from October 2015 to March 2019 (Alabama, the District of Columbia, New Hampshire, New Jersey, and Pennsylvania).

Agency Comments

We provided a draft of this product to the Department of Health and Human Services for review. The department provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I.

Appendix I: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Lori Achman (Assistant Director), Mary Giffin (Analyst-in-Charge), Matthew Dobratz, Drew Long, and Brandon Nakawaki made key contributions to this report. Also contributing were Julianne Flowers, Vikki Porter, Jennifer Rudisill, and Eric Wedum.
Why GAO Did This Study

Medicaid, a joint federal-state health care financing program, is one of the nation's largest sources of health care coverage for low-income and medically needy individuals. A 2016 report published by the National Governors Association noted that high-expenditure Medicaid beneficiaries typically have poorly managed chronic conditions and a host of unmet social needs that result in potentially preventable use of costly services, such as emergency department visits. The report also noted that identifying and better managing those beneficiaries are key to reducing costs and improving outcomes.

GAO was asked to examine state and federal efforts to manage costs and improve care coordination for high-expenditure Medicaid beneficiaries. This report describes (1) approaches selected states used to identify or predict high-expenditure Medicaid beneficiaries; (2) strategies selected states used to manage beneficiaries' health care costs while ensuring quality of care; and (3) resources CMS provided to states to help them identify, predict, or better manage high-expenditure beneficiaries. GAO interviewed officials from CMS, as well as Medicaid officials from a nongeneralizable sample of seven states (Indiana, Nevada, Pennsylvania, South Carolina, South Dakota, Vermont, and Washington) and five MCOs. States were selected for variation in their total Medicaid enrollment, enrollment in Medicaid managed care, percentage of state population living in rural settings, and percentage of state population with disabilities. MCOs were selected based on state suggestions, and varied in terms of whether they operated nationally or on a state or regional basis.

What GAO Found

GAO previously reported that in fiscal years 2009 through 2011, the most expensive 5 percent of Medicaid beneficiaries accounted for nearly half of the expenditures for all beneficiaries; others have also found that a small percentage of beneficiaries account for a disproportionately large share of Medicaid program expenditures. These high-expenditure beneficiaries are an extremely diverse population with varying needs. GAO found that the seven selected states identified or predicted high-expenditure Medicaid beneficiaries using statistics and other approaches. For example, states used risk scores, which estimate an individual beneficiary's expected health care expenditures relative to the average expenditures for beneficiaries in the group. Other approaches included examining service utilization data to identify statistical outliers and using diagnoses, service utilization and claims expenditure thresholds, or clinical judgment to identify or predict high-expenditure beneficiaries.

To manage costs and ensure quality of care for high-expenditure beneficiaries, the seven selected states used care management and other strategies.

Care management. All the selected states provided care management—providing various types of assistance, such as coordinating care across different providers to manage physical and mental health conditions more effectively—for beneficiaries in their fee-for-service delivery systems. Five of the states also contracted with managed care organizations (MCO) to deliver services for a fixed payment and required the MCOs to ensure the provision of care management services to high-expenditure beneficiaries.

Other strategies. Some of the seven selected states used additional strategies to manage care for high-expenditure beneficiaries.
For example, Indiana officials described a program to restrict, or “lock in,” a beneficiary who has demonstrated a pattern of high utilization to a single primary care provider, hospital, and pharmacy, if other efforts to change the beneficiary's high utilization were unsuccessful. The Centers for Medicare & Medicaid Services (CMS), which oversees the Medicaid program at the federal level, offered optional tools and other resources to support states' efforts to identify or better manage high-expenditure beneficiaries. For example, CMS officials said states received access to resources and technical assistance on establishing health home programs—which seek to better coordinate care for those with chronic conditions—including how to focus on high-expenditure beneficiaries. CMS officials noted that they supported 23 states' and the District of Columbia's health home programs. CMS also offered several resources that, while not designed specifically to target high-expenditure beneficiaries, have been used to support states in identifying or better managing their care. For example, CMS's Medicaid Innovation Accelerator Program offered targeted technical support to states' Medicaid agencies in building their data analytic capacity as they designed and implemented delivery system reforms, which could be used to identify high-expenditure beneficiaries. Officials in two selected states reported that these tools were beneficial for managing the health care costs associated with high-expenditure beneficiaries. HHS provided technical comments, which GAO incorporated as appropriate.
GAO-19-401T
Background

Confucius Institutes are entities that seek to promote Chinese language and culture in foreign countries. Their establishment is guided by Hanban, which is headquartered in Beijing, China, and, according to various sources, is affiliated with the Chinese government's Ministry of Education. The first Confucius Institute in the United States was established in 2004, and there were approximately 525 institutes worldwide as of September 2018, according to Hanban's website. Most Confucius Institutes in the United States are based at colleges and universities. We identified 96 Confucius Institutes in operation at U.S. colleges and universities in 44 states and the District of Columbia as of January 2019. See our February 2019 report on Confucius Institutes for a full list of the schools and their locations. Figure 1 shows U.S. states with one or more Confucius Institute on college or university campuses.

Additionally, in recent years, some U.S. universities have partnered with Chinese universities to establish degree-granting institutions in China approved by the country's government. The Chinese government requires that U.S. universities seeking to establish such an education arrangement in China partner with a Chinese university and establish written agreements with the Chinese university defining the academics, governance, operations, finances, and other aspects of the arrangement. At the time of our review in August 2016, the 12 institutions we reviewed ranged in size from fewer than 40 students to more than 3,000. More than 90 percent of the students across the 12 institutions were Chinese, and less than 6 percent were U.S. citizens.

Confucius Institute Arrangements Vary Across Universities, and Stakeholders Have Identified Related Benefits, Concerns, and Suggestions for Improvement

Confucius Institute Management, Operations, and Agreements Vary by School

Management

In February 2019, we reported that Confucius Institutes in the United States that we reviewed were established as a partnership between a U.S. school and a Chinese college or university, funded and arranged in part by Hanban. Management of the institutes varies by school. Some Confucius Institutes at U.S. schools are part of an academic department or an administrative office, while others report directly to the school president or other school leadership. Confucius Institute personnel generally consist of a Confucius Institute director or directors, Confucius Institute teachers, and a board of directors. At the 10 case study schools that were part of our review, the Confucius Institute director was a U.S. school employee—either a school administrator, faculty member, or professional hired to manage the Confucius Institute. In addition, several case study schools had a Chinese assistant director, who reported to the Confucius Institute director from the United States and often was an employee at the Chinese partner university.

School Officials, Researchers, and Others Identified Both Benefits and Concerns, and Suggested Ways to Improve Confucius Institutes

Perspectives on Institute Benefits

In February 2019, we reported that officials we interviewed from case study schools stated that Confucius Institutes' benefits include opportunities for schools to forge international connections and receive funding and other resources for China-related programs.
These officials noted that because Hanban pays the salaries of Confucius Institute teachers, who teach language courses and assist with Chinese programs at schools, these schools are spared those costs and could offer Chinese language courses even when enrollment was low. Case study school officials also stated that Confucius Institutes provide valuable resources and opportunities to increase knowledge of and exposure to China and Chinese culture within the school and in the broader community.

Perspectives on Concerns Related to Institutes

Case study school officials, researchers, and others we interviewed also offered various perspectives on whether having Confucius Institutes on campuses could bring about undue Chinese influence. These parties discussed the potential for or absence of Chinese interference in events and activities at the institute and on campus. They also expressed views on Confucius Institute teacher hiring and the quality of those teachers.

Several school officials, researchers, and others we interviewed expressed concerns that hosting a Confucius Institute could limit events or activities critical of China—including events at the institute and elsewhere on campus. Two officials who expressed these concerns were faculty members at one case study school who had not applied for Confucius Institute funding for a research project because they believed Hanban would not approve of the topic. In contrast, officials at multiple case study schools noted that U.S. school faculty members make all decisions regarding conference themes, guest speakers, and topics for events at their institute. Officials at some schools offered examples of events and activities their Confucius Institute had sponsored that addressed topics that could be considered critical of China. Specifically, they reported hosting a conference discussing intellectual property in relation to China and events on Tibet, territorial disputes in the South China Sea, and religion in China.

In addition, multiple researchers and others we spoke with expressed concerns with the Confucius Institute teacher selection process, whereby Hanban or the Chinese partner school accepts initial applications from potential Confucius Institute teachers and proposes candidates to the U.S. school. These individuals noted that the Chinese entities could use such a process to effectively screen out candidates based on inappropriate criteria, such as political or religious affiliation. Officials we interviewed at multiple case study schools that had Confucius Institute teachers, however, expressed no concerns about the process for hiring teachers. School officials stated that they believed their school generally controlled the hiring process and were thus satisfied with it. Most officials emphasized that while institute teachers often come from the Chinese partner university, and are referred by the partner or Hanban, the U.S. school makes the final hiring selection.

Suggestions for Improvement

Case study school officials, researchers, and others also suggested ways to improve the institutes, including changing the language in agreements governing Confucius Institutes and policies for sharing these agreements. These parties stated that schools should remove the confidentiality section of their agreements and make the agreements publicly available online. Several researchers and others also emphasized that making the agreements publicly available would dispel questions and concerns over their contents.
Several representatives of higher education institutions told us that they believed the confidentiality language in agreements was unnecessary and schools should consider removing it from their agreements. A few case study school officials, researchers, and others we interviewed stated that schools should include stronger language in the agreements to make it clearer that the U.S. school has executive decision-making authority.

School officials and others we interviewed suggested other steps that schools could take to ensure they protect against undue Chinese influence. Several school officials stated that schools should clearly distinguish between the Confucius Institutes' programs and their own Chinese language programs, such as by locating the institute apart from these departments within the school's organizational structure. A few school officials and others noted that Confucius Institute teachers should not teach credit-bearing courses, even if those courses use curriculum developed by the school's language department. One school administrator, who stated that his school's Confucius Institute would never have a Chinese assistant director because the position suggests an excessive degree of Chinese influence, recommended that other schools remove the Chinese assistant director position from their institutes. Officials from two case study schools and others we interviewed stated that schools should organize events through the institute specifically intended to address what some might perceive as a topic sensitive to Chinese interests, to demonstrate that the school and institute were not subject to undue Chinese influence.

U.S. Universities in China Emphasized Academic Freedom but Faced Internet Censorship and Other Constraints

U.S. Universities Reported Receiving Support from Chinese Entities, with Limited U.S. Support

In August 2016, we reported that the 12 U.S. universities we reviewed generally reported receiving support for their institutions in China from their Chinese partner universities and from Chinese government entities, with limited funding from U.S. government agencies and private donors. Most universities reported being granted land, resources for construction of buildings, and the use of the Chinese university's campus facilities. The amount of support reported by the universities varied widely and was in some cases substantial. For example, one university reported receiving nearly 500 acres of land and a commitment from the Chinese provincial and local governments to spend about $240 million for construction and development of facilities. Five of the 12 universities reported receiving federal funding, which in most cases consisted of federal financial aid to U.S. students.

Agreements between U.S. and Chinese Partners and Other Policies Generally Outlined Academic Freedom Protections

At the time of our review, most universities we reviewed included language in their written agreements or other policies that either embodied a protection of academic freedom or indicated that the institution in China will adhere to academic standards commensurate with those at their U.S. campus. Six universities in our review included language in either their written agreements or other university policies that indicated a protection of academic freedom, such as permitting students to pursue research in relevant topics and allowing students to freely ask questions in the classroom.
For example, one university’s agreement stated that all members of and visitors to the institution in China will have unlimited freedoms of expression and inquiry and will not be restricted in the selection of research, lecture, or presentation topics. Another three universities’ written agreements included language indicating that the institution in China will adhere to academic standards commensurate with either the U.S. campus or the university’s accrediting agency or other authoritative bodies. Fewer agreements addressed other types of protections at the time of our review. About half of the universities GAO reviewed addressed access to information, such as providing faculty and students with access to physical or online libraries, though a few universities’ agreements and policies include language protecting internet access. Written agreements and policies for about half of the universities we reviewed included language that suggested a protection of at least one of the freedoms of speech, assembly, and religion or worship, though the number of universities addressing each freedom varies. For example, regarding freedom of speech, student and faculty handbooks at a few of these universities contained language indicating that students have the ability to discuss sensitive topics. Regarding freedom of religion or worship, several of the universities included language in their policy documents indicating that religious practices will be protected. U.S. University Members Generally Indicated They Experienced Academic Freedom, but Internet Censorship and Other Factors Posed Challenges The more than 130 faculty and students we interviewed from universities’ institutions in China during our 2016 review generally reported that academic freedom had not been restricted. Faculty told us they did not face academic restrictions and could teach or study whatever they chose. For example, several faculty members asserted that neither they nor their colleagues would tolerate any academic restrictions, and one faculty member told us he and his colleagues intentionally introduced class discussions on politically sensitive topics to test whether this would trigger any complaints or attempted censorship. Students also generally indicated that they experienced academic freedom and could study or discuss any topic. Some students who had also studied or knew others who studied at Chinese universities contrasted their experiences at a U.S. institution in China, noting that they could have interactive dialogue with faculty, discuss sensitive topics, and freely access information at the U.S. institution but not at a Chinese university. Through interviews and responses to our questionnaire, university administrators reported that academic freedom was integral to their institutions in China. Administrators at several universities told us that academic freedom was nonnegotiable, while others noted that the same curriculum used in the United States also applied to their institution in China. However, fewer than half of the universities we reviewed had uncensored internet access at the time of our review. We visited universities with and without uncensored internet access, and observed university members accessing search engines, newspapers, and social media sites that have been blocked in China—such as the New York Times, Google, and Facebook—at some universities but not others. 
At several universities that lacked uncensored internet access, students and faculty told us that, as a result, they sometimes faced challenges teaching, conducting research, and completing coursework. For example, one faculty member told us that she sometimes asked others outside of mainland China to conduct internet research for her because they could access information she could not. Several students at another university told us their ability to conduct academic research was constrained by the internet limitations.

We also reported in August 2016 on additional factors that could create obstacles to learning at U.S. universities in China, including self-censorship and constraints specific to Chinese students. Administrators, faculty, and students representing more than half of the universities we reviewed gave examples of self-censorship, including some cases where individuals were advised by teachers or others in positions of authority to avoid certain topics. For example, an administrator at one university noted that he believed it was advisable, as a guest of China, to refrain from insulting China, while an administrator at another university noted that the university advised teachers to avoid discussing sensitive subjects in class. In addition, we found that some conditions specific to Chinese students may constrain their academic experience. For example, some university members noted that Chinese students may know or suspect that their Chinese classmates are government or Communist Party monitors and will report on whatever the students say. An administrator at one university told us that he assumed there were Chinese students and faculty in the institution who reported to the government or the Communist Party about the activities of other Chinese students. Faculty members at several universities told us that they understood there were Chinese students in class who intended to report on the speech of faculty or Chinese students.

Finally, we also observed that three of the 12 universities we reviewed that were approved by the Chinese Ministry of Education as having independent legal status shared characteristics that may be correlated with greater academic and other freedoms on campus. We found that these three universities had campuses built specifically for the joint institution that were located relatively far away from their Chinese university partner's campus, generally controlled their own day-to-day operations, had uncensored internet access, and offered extensive campus and student life programs. In contrast, the other nine universities we reviewed did not consistently share these characteristics at the time of our review.

Chairman Portman, Ranking Member Carper, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Jason Bair, Acting Director, International Affairs and Trade, at (202) 512-6881 or bairj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Joseph Carney (Assistant Director), Caitlin Mitchell (Analyst in Charge), Joyce Kang, Neil Doherty, Melissa Emrey-Arras, Meeta Engle, Elizabeth Repko, Aldo Salerno, Michael Silver, and Nicole Willems. This is a work of the U.S.
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Numerous U.S. universities and colleges have partnered with Chinese entities to establish (1) Confucius Institutes in the United States and (2) degree-granting institutions in China. Confucius Institutes are partnerships between Chinese entities and schools in other countries, arranged and funded in part by Hanban, which seek to promote Chinese language and culture. There were 96 institutes located at colleges and universities in the United States as of January 2019. U.S. universities have also partnered with Chinese universities to establish degree-granting institutions in China approved by the Chinese government. School officials have noted these types of partnerships provide valuable educational, cultural, and other benefits. Some researchers, government officials, and others, however, have raised concerns about them, including about the contents of written agreements and the role of the Chinese government, which, according to the Department of State, has made efforts to restrict academic freedom and impose censorship at Chinese universities and other institutions. Some have expressed concern that U.S. universities partnering with the Chinese government may face similar restrictions. This testimony discusses funding, agreements, and operations of (1) Confucius Institutes in the United States and (2) U.S. universities in China. This testimony is based on GAO's February 2019 report on Confucius Institutes in the United States and GAO's August 2016 report on U.S. universities in China.

What GAO Found

GAO reviewed 90 agreements establishing Confucius Institutes and spoke to officials about benefits and concerns related to the institutes. Agreements between Hanban—an affiliate of the Chinese Ministry of Education—and U.S. colleges and universities generally describe similar activities, funding, and management, though institute operations vary in practice. Confucius Institutes receive funding from Hanban and U.S. schools, and do not receive direct federal funding. While 42 of 90 agreements contained language about the document being confidential, some were available online or upon request, and one-third of the 90 agreements explicitly addressed how U.S. school policies apply to the institutes. Officials GAO interviewed at 10 case study schools noted U.S. school policies apply to institutes at their schools. GAO also interviewed some researchers and others who expressed concern that the presence of Confucius Institutes could constrain campus activities and classroom content. For example, several suggested schools with institutes might avoid hosting events on topics that could include criticism of China, such as Taiwan or Tibet, so as to not offend Chinese partners. School officials offered examples to illustrate that these concerns did not apply to their institute, noting institutes had sponsored events on such topics. Nonetheless, school officials and others suggested ways schools could improve institute management, such as renegotiating agreements to clarify U.S. schools' authority and making agreements publicly available.

In August 2016, GAO reported that U.S. universities that have partnered with Chinese universities to establish degree-granting institutions in China emphasize academic freedom, but face internet censorship and other challenges. The 12 U.S. universities GAO reviewed generally reported receiving support for their institutions in China from Chinese government entities and universities, and 5 reported receiving U.S.
government funding, mostly federal financial aid to U.S. students. Universities' agreements with Chinese partners or other policies GAO reviewed generally included language protecting academic freedom or indicating their institution in China would adhere to U.S. standards. University members generally indicated that they experienced academic freedom, but also stated that internet censorship, self-censorship, and other factors presented constraints. At several universities that lacked uncensored internet access, faculty and students noted that, as a result, they faced challenges teaching, conducting research, and completing coursework at that time.
GAO-20-205
Background

According to FTA's National Transit Database, about 1,500 rural transit providers, including tribal transit providers, supply vital mobility and connections to essential services for people living in rural communities. Rural transit providers generally have low budgets, few employees, and small vehicle fleets. Rural transit providers provide a variety of transit services, including: demand-response service, which is scheduled in response to calls from passengers; fixed routes, which are buses operating according to a set schedule; and deviated fixed routes, which are fixed routes that allow for minor route deviations in response to passenger calls. Service areas for rural providers may span dozens of square miles in remote areas—with long trips and only a few riders at any given time—or be located in smaller, more developed rural areas surrounding major cities.

DOT primarily supports rural transportation through formula grants, some of which require states and rural transit providers to coordinate. Specifically, these rural transportation formula grants are apportioned to state departments of transportation based on various factors, and these state agencies then allocate funding to rural transit providers as sub-grantees. Sub-grantees can be regional or local governments, non-profit organizations, or federally recognized tribes, which provide public transit services in their communities. DOT also awards rural transit program funds directly to federally recognized Indian tribes through the Tribal Transit Program. See table 1 for a description of DOT's primary formula grant programs that support rural transit.

Within DOT, FTA and its 10 regional offices administer these programs; their responsibilities include:

1. grant funding, including targeted grants and contracts for coordination-related projects to enhance mobility and access nationwide;
2. oversight of state transportation agencies and Tribal Transit Program grantees through State Management Reviews and Tribal Transit Assessments;
3. training and technical assistance to states and rural transit providers; and
4. policy interpretations and development to enhance mobility and access.

DOT and FTA also lead the Coordinating Council, which is charged with improving coordination across federal programs that fund transportation services for transportation-disadvantaged persons. The Coordinating Council consists of 11 federal agency members, namely, the Departments of Agriculture, Education, Health and Human Services (HHS), Housing and Urban Development, the Interior, Justice, Labor, Transportation, and Veterans Affairs (VA); the National Council on Disability; and the Social Security Administration. Aside from DOT, transportation is not the primary mission of these federal agencies. However, each member agency has programs that provide funding for transportation to enable program beneficiaries to access the various health and human service programs within the agencies' primary missions, such as job training, education, or medical care. For example, HHS's Medicaid program requires assurance from states that Medicaid beneficiaries have access to necessary medical services; this includes arranging and providing funding for transportation to medical appointments and other health services when beneficiaries cannot transport themselves. In 2012, we found, among other things, that Coordinating Council member agencies were not effectively collaborating and recommended that the Coordinating Council strengthen its coordination efforts across federal programs.
In 2014, we again identified the need to strengthen federal coordination efforts and recommended that the Coordinating Council develop both a strategic plan and a cost-sharing policy to promote and enhance federal, state, and local nonemergency medical transportation coordination activities. For a full description of our prior recommendations and their implementation status, see appendix II.

State and local stakeholders—including state transportation agencies, regional planning organizations, and rural and tribal transit providers—and health and human service providers coordinate rural transportation services when they share resources and responsibilities and plan activities to achieve common goals and for the overall benefit of the community. Coordination of rural transportation services can occur across geographic jurisdictions, funding sources, and various local, state, and federal programs. Coordination of transportation services has the potential to reduce transportation program costs by clustering passengers, using fewer one-way trips, and sharing the use of personnel, equipment, and facilities; at the same time, people in need of transportation also often benefit from greater and higher quality services when transportation providers coordinate their operations.

Available Resources and Alignment of Program Requirements Cited among Factors Affecting Rural Transit Coordination

Various factors affect rural transit coordination, according to stakeholders we interviewed, participants from three discussion groups, and literature we reviewed. Factors that can affect coordination include availability of resources, alignment of different federal program requirements, availability of coordinating mechanisms, and the distances between transit providers. (See fig. 1.) As discussed below, we found that these factors are often interrelated and can serve as both a motivating factor and a barrier to coordination.

Availability of Resources

The availability of resources was the most commonly cited factor affecting rural transit coordination in our literature review and interviews. Almost two-thirds of the stakeholders we spoke with (30 of 43) and participants in three discussion groups told us that it is difficult to coordinate transit services in rural communities with limited resources, such as funding, staff and time, and technology. For example, a rural transit provider told us that while it provides public transit to a neighboring national park for its visitors during the summer season, insufficient funding from the national park combined with very limited access to FTA's rural transit funds limits the provider's ability to effectively coordinate services. We also reported in 2014 that smaller budgets and fewer employees can influence rural transit providers' ability to coordinate. A 2018 survey of state and local transit and health and human services providers conducted by the National Center for Mobility Management also noted that the availability of resources can be a key barrier to transportation coordination in both rural and non-rural areas. Resources specifically affecting rural transit coordination include:

Availability of Matching Funds. The availability of matching state and local funds can affect coordination, as rural transit providers tend to rely on a variety of funding sources to provide transit services. Federal programs generally require a share of state or local funding to match federal funds.
Approximately one-third of selected stakeholders (13 of 43) and participants in three discussion groups said that they face challenges identifying enough state or local funding to meet FTA's matching fund requirements. Some rural transit providers (4 of 21) told us they have access to funds from different sources, but others (4 of 21) said that they faced challenges securing state or local matching funds. For example, local, regional, or state taxes provide some funding streams for public transit, including rural transit providers, in California, Georgia, New Mexico, and Washington. Although revenues from state or local taxes may be available as a funding source, rural transit providers still told us that identifying and coordinating state and local funding sources can be challenging. We previously reported that constrained state and local budgets can make securing these funds difficult as rural transit competes for funding with other needs within a community, such as public safety.

(Sidebar: Technology and Coordination: Greater Columbia Call Center. People For People, a rural transit provider in Yakima, Washington, uses technology to coordinate and operate the Greater Columbia 2-1-1 (GC211) call center. GC211 maintains a statewide database of community resources, including transportation options. It is one of the state's seven regional 2-1-1 call centers that direct riders to social, health, and transportation resources.)

Staffing and time. Some stakeholders (12 of 43) said that rural transit providers do not have enough staff and time to pursue or engage in coordination efforts. For example, three rural transit providers told us that staff sometimes take on multiple duties, such as bus driver and dispatcher in addition to grant and program manager, duties that affect their time and ability to coordinate. Representatives from a national transit planning association also told us that staffing constraints are an issue, particularly with rural transit providers because they are usually more understaffed than urban transit agencies.

Technology. Access to technology can help coordinate trips and schedules across rural transit services. About half of the rural transit providers (11 of 21) we interviewed stated that they use software and other technology to schedule trips and operate call centers to facilitate coordination efforts. For example, People For People, a rural transit provider in Yakima, Washington, uses technology to coordinate and operate the Greater Columbia call center (see sidebar). However, a handful of stakeholders (4 of 43) mentioned that access to broadband, which is needed to enable technology and scheduling software, can be limited in certain areas, especially on tribal lands. For example, an official from EBCI Transit, a tribal transit provider in North Carolina, said EBCI experienced poor cell phone service and other communication limitations, which affected its ability to schedule and coordinate trips. Our recent work on telecommunications found that tribal lands have significantly lower levels of broadband internet access relative to the country as a whole.
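To illustrate the kind of trip clustering that such scheduling technology supports (grouping riders headed to a common destination within a shared pickup window so that fewer one-way trips are needed), here is a minimal, hypothetical sketch in Python. The ride requests and the one-hour window are illustrative assumptions, not a description of any provider's actual dispatch system.

```python
from collections import defaultdict

# Hypothetical ride requests: (passenger, destination, requested pickup hour).
requests = [
    ("A", "clinic", 9), ("B", "clinic", 9), ("C", "grocery", 10),
    ("D", "clinic", 10), ("E", "grocery", 10),
]

# Group riders by destination and shared one-hour pickup window so one
# vehicle trip can serve several riders instead of one trip per rider.
trips = defaultdict(list)
for passenger, destination, hour in requests:
    trips[(destination, hour)].append(passenger)

for (destination, hour), riders in sorted(trips.items()):
    print(f"{hour}:00 trip to {destination}: riders {riders}")
# 9:00 trip to clinic: riders ['A', 'B']
# 10:00 trip to clinic: riders ['D']
# 10:00 trip to grocery: riders ['C', 'E']
```

In practice, the providers we describe relied on commercial scheduling software and call centers for this role, typically under far more complex constraints, such as vehicle capacity, rider eligibility under different funding programs, and long rural distances.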
Availability of Formal Coordinating Mechanisms Formal Coordinating Mechanisms: State and Regional Coordinating Bodies As one of the regional coordinating bodies, the Southwest Georgia Regional Commission has played a central role in coordinating rural transit services through much of its region; it currently provides public transit services in 13 counties to the general public as well as to riders with specific needs to access health and human services in 14 counties. The availability of coordinating mechanisms can facilitate information sharing and coordination. About half of the stakeholders (18 of 43) told us that they participate in some statewide, regional, or local coordinating bodies as part of a process to facilitate coordination. For example, the Georgia Department of Transportation works with regional commissions to coordinate rural transit throughout Georgia. (See sidebar). In contrast, officials from the North Carolina Department of Transportation told us that the state disbanded its coordinating council, which may be contributing to challenges in providing nonemergency medical transportation services. We previously reported that state and local transportation agencies and aging network organizations used a variety of different mechanisms, such as state, regional, and local planning bodies, to coordinate transportation services for older adults. Half of the states we selected (4 of 8) have statewide coordinating bodies. For example, participants from one discussion group said that state requirements can facilitate coordination when the state statute requires rural transit providers applying for or receiving federal, state, or local assistance to coordinate with other state agencies, including the state's health and human services department, for funding and services. Rural transit providers also told us that they participate in regional and local coordinating bodies. For example, all transit providers in Montana are required to coordinate through local Transportation Advisory Committees that plan and prioritize local transportation needs. About one-third of the stakeholders (13 of 43) and participants in three discussion groups also mentioned knowledge-sharing forums—such as conferences and training organized by state transportation agencies, transit industry associations, and FTA—as mechanisms to facilitate coordination. For example, officials from Pullman Transit told us that these forums, such as the Washington State Transit Association's annual conference, presented opportunities to share and learn about various federal transportation programs, coordinating efforts, and information on best practices. Alignment of Program Requirements We and others have reported that transit providers, as well as health and human service providers, may encounter substantial challenges trying to coordinate services across different programs when program requirements do not align. For our current work, about one-third of stakeholders (13 of 43) and participants in three discussion groups told us that they face a wide array of barriers coordinating across differing federal laws, regulations, and program requirements. The different federal program requirements can affect rural transit providers' ability to coordinate transit services as some federal programs are dedicated to specific groups of riders (e.g., older adults, people with disabilities, and low-income riders) with specific needs; such specification of groups makes it difficult to coordinate trips for different riders.
Three rural transit providers stated that it is sometimes difficult to coordinate transportation to medical appointments for "blended riders" (i.e., senior citizens, veterans, and the general public) in one trip. For example, VA's Highly Rural Transportation Grants require rural transit providers to serve only veterans, while Medicaid's nonemergency medical transportation funds require serving only Medicaid beneficiaries. Rural transit providers—which provide service to the general public within their service areas—are sometimes challenged with providing an efficient and coordinated transit service for VA or Medicaid beneficiaries to access their programs. FTA and the Coordinating Council's 2018 Focus Group Report also identified federal program requirements, including trip purpose restrictions, as a barrier to coordination. As discussed later in the report, the Coordinating Council has been charged with addressing this barrier, among others, and is currently examining whether and how federal program requirements could be better aligned. Long Distances Coordination in rural areas can be both essential and challenging because rural transit passengers often need to travel long distances (e.g., 30-100 miles) to reach critical services, such as doctor appointments or grocery shopping. About a quarter of stakeholders (11 of 43) and participants in two of the discussion groups said that the long distance between transit providers in remote rural communities sometimes makes it difficult to find entities or other providers interested in or able to coordinate. Two rural transit providers also told us they have no neighboring transit provider to coordinate with due to their extremely remote rural locations. For example, an official from Turtle Mountain Transit in North Dakota said it is challenging to coordinate with other neighboring tribal transit providers due to the long distance to the nearest tribal transit provider in Spirit Lake, which is about 100 miles away. Turtle Mountain Transit, like a number of other tribal transit providers, often serves large and fairly remote areas. We previously reported that tribal lands vary in size, ranging from less than one square mile to the largest, the Navajo Nation, which spans more than 24,000 square miles (about the size of West Virginia) and extends into the states of Utah, Arizona, and New Mexico. Selected Rural Transit Providers Coordinated Trips and Shared Resources to Improve Transit Services Coordinated Trips and Schedules Despite encountering some of the factors that can make coordination difficult, all rural transit providers we interviewed told us that they currently coordinate trips or schedules with other local or regional stakeholders. Such coordination efforts include establishing common drop-off points or common schedules (21 of 21), coordinating to provide access to health and human services (14 of 21), and using technologies, such as software, to facilitate coordination of transportation (11 of 21). Rural transit providers told us that they coordinate with others because coordinating may help them meet increasing rural transit service demand and improve service. They mentioned that the benefits of their coordination efforts include increased ridership or access, cost efficiency or reduced costs, and enhanced quality of services. Examples of coordination cited by our selected rural transit providers are summarized in table 2 below.
Coordinated Funding and Shared Resources All of the rural transit providers we interviewed also told us they coordinated across various funding sources or shared other resources with nearby transit providers. The most commonly cited coordination and resource-sharing activities included pursuing funding from several programs and raising local revenue for transit (18 of 21); participating in opportunities to share knowledge, such as training (11 of 21); sharing vehicles and related resources, such as maintenance capabilities (8 of 21); and sharing staff to achieve a common goal (5 of 21). Four of our selected rural transit providers also stated that full consolidation of their transit services across multiple jurisdictions or providers resulted in cost savings. Specific examples of these activities cited by our selected rural transit providers are summarized in table 3 below. FTA Continues to Facilitate Coordination, but Its Efforts Have Had Mixed Results FTA and the Coordinating Council Have Ongoing Efforts, but Key Deadlines Have Been Missed and Much Work Remains As the lead agency of the Coordinating Council, FTA has taken a number of steps in recent years, including those summarized below, to work with other Coordinating Council member agencies to enhance federal interagency coordination. From January 2017 through June 2019, FTA and the Coordinating Council members were involved in more than 90 interagency coordinating activities, according to the Coordinating Council's summary of recent activities posted on its website. Coordinating activities included interagency meetings, trainings, and webinars to share information and coordinate interagency efforts that support rural communities and improve transportation access to health and human services. For example, in September 2018, staff from FTA and the Department of Agriculture held a webinar for federal, state, and local officials on the opioid crisis and increasing transportation in rural areas to improve access to treatment centers, the courts, and other services in rural West Virginia. In 2018, FTA and Coordinating Council members engaged in significant efforts to inform the strategic direction of the Coordinating Council. From March through June 2018, FTA and some Coordinating Council members convened a series of focus groups with state and local stakeholders, including transit and health and human services providers, to learn about the current state of transportation services and to identify leading practices and barriers to transportation coordination. FTA also obtained input from state and local transit and health and human services stakeholders via a survey that the National Center for Mobility Management conducted from June through November 2018 to identify promising practices, barriers, and challenges around coordinated transportation. Working group efforts under way are addressing some of the challenges facing rural transit providers. For example, the Coordinating Council's Program Analysis Work Group, which was convened in November 2018, is currently examining all federal programs with transportation funding available and conducting program analyses to determine whether and how federal program requirements could be better aligned. FTA officials stated that the Coordinating Council plans to submit a report to Congress with some proposed changes and recommendations for improved alignment of federal requirements by September 2020.
While these coordinating activities are constructive and encouraging steps, the Coordinating Council's progress has been slow in other key areas. In 2014, we recommended that the Coordinating Council develop a strategic plan and cost-sharing policy to promote and enhance federal, state, and local nonemergency medical transportation coordination activities. In addition, the 2015 Fixing America's Surface Transportation Act (FAST Act) required the Council to publish a strategic plan by December 2016 that, among other things, identifies a strategy to strengthen interagency collaboration and develops a cost-sharing policy in compliance with applicable federal laws. The FAST Act also required the Coordinating Council to submit a final report containing the Council's final recommendations to Congress for enhancing interagency coordination. However, the Coordinating Council did not issue the required strategic plan until October 2019, about 3 years after the 2016 deadline. We are currently evaluating this plan as part of our follow-up on the implementation status of our 2014 recommendations. Regarding the final report to Congress on interagency coordination, FTA officials told us that they plan to submit it by September 2020. Additionally, we previously reported on a long-standing leadership challenge: the Coordinating Council's Executive Committee, which is tasked with providing top management direction for the Council, had provided only limited leadership and guidance, even though such guidance could have a broad effect on rural transportation. Specifically, we reported that the Council Executive Committee had provided limited leadership, had not met since 2007, and had not issued key guidance documents that could promote coordination. Accordingly, we recommended that the Council meet and issue guidance documents. According to FTA officials, the Executive Committee met for the first time since 2007 in October 2019 and issued the strategic plan noted above. As previously mentioned, we will continue following up on our prior recommendations (see app. II). FTA Has Facilitated Coordination of Rural Transit Services at the State and Local Level, but the Effectiveness of FTA's Information Sharing Has Been Limited FTA also facilitates coordination of rural transit services by engaging directly with state and local stakeholders, including transit and health and human services providers. FTA has, for example, taken the following actions: It created a website that provides resources and information on planning and coordinating rural transportation services. This website includes a self-assessment toolkit for state and local transportation agencies on "Building a Coordinated Transportation System" and a link to case studies on coordination of state and regional councils. FTA staff provides ongoing training, resources, and technical support to state transportation agencies and transit and human services providers through its three technical assistance centers—the National Rural Transit Assistance Program, the National Aging and Disability Transportation Center, and the National Center for Mobility Management. FTA and its three centers have been disseminating and sharing some coordination-focused information through their websites, training, and conferences.
For example, FTA officials pointed us to the National Aging and Disability Transportation Center's webpage on "Annual Trends Report and Spotlight Series" that posted best practices information on a non-profit agency that recruits and uses volunteers to transport older adults to social outings and medical appointments. FTA also annually awards competitive grants for innovative, coordinated health and transportation programs. For example, in fiscal year 2019, FTA awarded approximately $9.6 million to 37 innovative projects for the transportation of disadvantaged populations; the projects are designed to improve the coordination of transportation services and nonemergency medical transportation services. FTA has also biennially recognized rural transit providers with an FTA Administrator's Award for outstanding rural transit programs, selected in part based on coordination efforts. FTA officials told us that recipients of this award are expected to share their successful practices at the National Rural Transit Assistance Program conference, which is attended by many rural transit providers. Although FTA has a number of efforts under way to facilitate coordination, we identified limitations with FTA's current communication and information sharing approach. More than a third of the stakeholders we spoke with (16 of 43) stated that communication and information sharing on coordination opportunities from FTA have been limited. FTA officials told us that they disseminate and share some coordination-focused information through FTA's three technical assistance centers, training, conferences, and regular meetings with state transportation agencies as its direct grantees and with transportation industry associations. However, about a quarter of the stakeholders (11 of 43) and participants in one discussion group told us that while they have attended FTA trainings and conferences and have used FTA's technical centers, the focus has been on grant management issues, such as compliance with drug and alcohol policy and procurement, and not on coordination opportunities. Stakeholders stated that they wanted more information on ways to coordinate with other providers, how providers addressed coordination challenges, technologies that were used to facilitate coordination, and any quantifiable data and results on coordination. Additional information on leading coordination practices that FTA can share with stakeholders includes practices we previously identified, such as defining and articulating a common outcome that agencies can pursue to sustain coordination efforts. In December 2014, we recommended that FTA and the Coordinating Council collect data to track and measure progress in achieving results, including the extent of coordination efforts under way. FTA officials told us that the Council's recent adoption of its strategic plan, which includes goals and objectives, represents progress toward measuring the extent of coordination efforts at the federal level. FTA officials also told us that the Council's final report to Congress, to be submitted in September 2020, will report on the implementation status of the objectives in the strategic plan. We have previously reported on the importance of information sharing on coordination across federal, regional, state, and local government entities. Office of Management and Budget guidance on using information as a "strategic resource" notes that making federal information "discoverable, accessible and useable" can fuel innovation.
Further, according to Standards for Internal Control in the Federal Government, agencies should communicate necessary and quality information externally so that external parties can achieve their objectives, and should periodically evaluate methods of communication so that the agency has the appropriate tools to communicate quality information with external parties on a timely basis. FTA, however, has not clearly communicated and conveyed information on coordination opportunities and leading practices. For example, while FTA officials told us that they rely on their website to share information with stakeholders, more than a third of the stakeholders (17 of 43) told us that information on coordination opportunities and leading coordination practices is not clearly identifiable on FTA's website or easily accessible. Two stakeholders, for example, said that while locating program requirement information, such as on procurement, was fairly easy, it was difficult to locate coordination-related information. An official from a transit industry association also commented that "stakeholders would benefit if FTA and the technical assistance centers make coordination resources and training more visible on their websites." This visibility could include "having coordination as a standalone topic and/or creating a page(s) dedicated to coordination on their websites." We also determined that coordination-related information was fragmented on FTA's website, and we found it difficult to navigate the site to find information on leading coordination practices. For example, FTA officials referred us to FTA's website on the Access and Mobility Partnership Grant (also known as the Innovative Coordinated Access and Mobility Grant) for information on leading practices for transportation coordination. In our review of this website, we found a description of projects that FTA selected for the grant, the grant amount, and how the funds will be used. We could not identify any information specifically on how these projects identified opportunities to coordinate or exhibited leading coordination practices. We also examined FTA's website that provides a self-assessment toolkit for building a coordinated transportation system, as we previously mentioned. FTA officials also mentioned that they developed the Coordinating Council's webpage to present information targeted to coordination. FTA does not have a strategy for communicating and sharing information on coordination opportunities and leading coordination practices for its wide audience of rural and tribal providers, state transportation agencies, and other stakeholders. FTA officials told us that they develop individualized communication plans when they undertake any major activities and examine an approach to communicating and sharing information when they develop annual statements of work for their three technical centers and meet with stakeholders. However, FTA could not provide us with a documented strategy that outlines how it communicates and shares coordination-focused information with state and local stakeholders. In light of the multiple means by which FTA and the Coordinating Council are attempting to communicate information about coordinating rural and tribal transit services, a comprehensive plan or strategy that assesses what information state, local, and transit providers would benefit from receiving and how that information can be effectively communicated could help FTA's information-sharing efforts have their intended effect.
Without such a strategy, stakeholders lack valuable information that could aid them in identifying potential coordination opportunities, leading practices, and data to help inform and facilitate their coordination efforts. Conclusions Coordination is important to help state transportation agencies, rural transit providers, and health and human service providers meet the increasing needs of those who rely on rural transit systems, particularly in light of limited resources. FTA has taken a number of steps to enhance and facilitate coordination, including holding interagency meetings, trainings, and webinars to coordinate interagency efforts that support rural communities and improve transportation access to health and human services. Going forward, it will be critical for the Coordinating Council's Executive Committee to implement our prior recommendations on key coordination issues. In addition, although FTA, along with its three technical centers, has developed resources to facilitate coordination, its communication efforts have fallen short. Without a communication strategy to effectively reach state and local stakeholders, FTA is missing opportunities to enhance communication and information sharing that can improve coordination among state transportation agencies and rural and tribal transit providers. Recommendation for Executive Action The Administrator of FTA should develop a communication plan that will effectively share information with state transportation agencies and rural and tribal transit providers on coordination opportunities and leading coordination practices in an accessible and informative way. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Transportation (DOT) and Department of Health and Human Services (HHS) for review and comment. DOT provided written comments, which are reproduced in appendix III and summarized below. DOT and HHS also separately provided technical comments, which we incorporated as appropriate. In written comments, DOT partially concurred with our recommendation. DOT provided examples of its communication efforts with stakeholders on coordination opportunities and practices and highlighted two recent initiatives to further support the coordination of rural transportation services. For example, in October 2019, DOT established the Rural Opportunities to Use Transportation for Economic Success (ROUTES) initiative to enable better coordination among agencies to address underserved rural areas and to collect input from stakeholders on the benefits rural transportation offers for safety and economic outcomes. In partially concurring with our recommendation, DOT wrote that it plans to direct each of its technical assistance centers to reorganize its web pages to centralize coordination information and best practices. We acknowledge FTA's efforts and highlighted the progress FTA has made in communicating and facilitating coordination in this report. We noted that FTA has provided ongoing training, support, and resources through its technical assistance centers. While DOT's plans to have its technical assistance centers' web pages reorganized may help in communicating coordination opportunities with stakeholders, they fall short of a comprehensive communication plan. Such a plan would define a strategy for effectively communicating and sharing information with stakeholders and ensuring that methods of communication are reaching all intended stakeholders.
Among other things, FTA's plans to increase access to coordination information do not include reorganizing and centralizing coordination-related information on FTA's own web pages, which are distinct from the technical assistance centers' web pages and are where many stakeholders turn to search for information. We believe that a comprehensive communication plan that includes FTA's strategy for ongoing communication on coordination opportunities would enable FTA to ensure that coordination information is reaching intended stakeholders to inform them of opportunities to enhance rural transit services. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Rural Transit Stakeholders GAO Interviewed and Discussion Group Participants Industry groups Community Transportation Association of America National Association of Development Organizations National Association of Regional Councils National Center for Mobility Management National Rural Transit Assistance Program Small Urban, Rural and Tribal Center on Mobility Federal Transit Administration (FTA) Regional Office FTA Region IV ^ State transportation agencies Caltrans - California Department of Transportation New Mexico Department of Transportation North Carolina Department of Transportation North Dakota Department of Transportation Washington State Department of Transportation Rural transit providers (including tribes' names, where appropriate) Carlsbad Municipal Transit System CSKT Transit (Confederated Salish and Kootenai Tribes of the Flathead Reservation) * EBCI Transit (Eastern Band of Cherokee Indians) *^ Missoula Ravalli Transportation Management Association ^ Morongo Transportation Department (Morongo Band of Mission Indians) * North Central Regional Transit District Pueblo of Santa Clara, New Mexico * Rocky Boy's Transit (Chippewa Cree Indians of the Rocky Boy's Reservation, Montana) * Southeast Tennessee Human Resource Agency ^ Turtle Mountain Transit (Turtle Mountain Band of Chippewa Indians of North Dakota) * Williston Council for the Aging Legend: * = Recipient of FTA's Tribal Transit Program funding. ^ = Site visit to interview stakeholder. State transportation agencies Mississippi Department of Transportation VTrans - Vermont Agency of Transportation Rural transit providers (including tribe names, where appropriate) Big Woods Transit (Bois Forte Band (Nett Lake) component of Minnesota Chippewa Tribe, Minnesota) Center for Community the RIDE (Sitka Tribe of Alaska) * Choctaw Tribal Transit (The Choctaw Nation of Oklahoma) * Heart of Iowa Regional Transit Agency Hualapai Transit (Hualapai Indian Tribe of the Hualapai Indian Reservation, Arizona) * Oglala Sioux Transit (Oglala Sioux Tribe) * Salt River Transit (Salt River Pima-Maricopa Indian Community of the Salt River Reservation, Arizona) * Legend: * = Recipient of FTA's Tribal Transit Program funding.
Appendix II: Implementation Status of GAO's Recommendations to the Department of Transportation Appendix III: Comments from the Department of Transportation Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individuals named above, Heather MacLeod (Assistant Director); Jennifer Kim (Analyst-in-Charge); Matthew Bond; Delwen Jones; Rosa Leung; Theresa Lo; Anna Maria Ortiz; Cheryl Peterson; Malika Rice; Kelly Rubin; Pamela Snedden; Lisa Van Arsdale; and Sarah Veale made key contributions to this report.
Why GAO Did This Study Public transportation in rural areas is critical to connecting people to medical services, jobs, education, and shopping. FTA allocated about $2.1 billion in formula grants over the last 3 years to support rural and tribal transit. In 2014, GAO reported that providing transit services in rural areas can be challenging and that coordination of transportation services among federal programs is limited. GAO was asked to examine ongoing efforts and challenges of coordinating rural transit systems. This report addresses (1) factors affecting rural transit coordination and selected rural and tribal transit providers' coordination efforts and (2) the extent to which FTA facilitates coordination of rural transit services. GAO reviewed program documentation and literature on rural transit coordination. GAO also interviewed federal officials from FTA and the Department of Health and Human Services, which also funds transportation services, and rural transit stakeholders, including state transportation agencies, rural and tribal transit providers, and public transit industry groups. GAO selected states and rural and tribal transit providers based on federal funding levels and geographic representation, among other factors. What GAO Found Coordination of rural transportation services across geographic jurisdictions and federal and state funding sources has the potential to reduce costs and improve services. Such coordination by transit agencies in rural areas can lead to efficiencies. A variety of factors, however, adversely affect rural transit coordination, including the availability of resources, according to GAO's literature review and stakeholder interviews. About 70 percent of the selected stakeholders GAO interviewed, including rural and tribal transit providers, explained that it is difficult to coordinate transit services in rural communities with limited resources, such as funding, staff, and technology. For example, three rural transit providers said that program managers sometimes assume multiple duties, such as a driver and dispatcher, a practice that affects their time and ability to coordinate. Other cited factors included the extent to which different requirements of federal programs that fund rural transit are aligned to allow transit providers to coordinate trips for riders with specific needs (e.g., people with disabilities) and the availability of coordinating mechanisms, among other factors (see figure). Nonetheless, selected rural and tribal transit providers said they were engaged in various coordination efforts to improve rural transit services. The most commonly cited efforts under way included coordinating trips—for example, by establishing convenient drop-off points—and sharing resources. The Federal Transit Administration (FTA) has several efforts under way to facilitate coordination, but results are mixed. At the federal level, FTA and the federal interagency Coordinating Council on Access and Mobility issued a strategic plan in October 2019, outlining their strategic goals. However, they have yet to submit to Congress a final report containing recommendations for enhancing interagency coordination. FTA officials told GAO they plan to submit the report by September 2020. At the state and local level, FTA has provided technical support to stakeholders to facilitate coordination. GAO, however, found limitations with FTA's current information-sharing approach.
These limitations make information on coordination-related issues difficult to identify and access. Stakeholders want additional information from FTA on leading coordination practices, such as ways to coordinate with other providers. Improving communication and sharing additional coordination-related information could help rural and tribal transit providers identify additional coordination practices they could pursue to improve rural transportation services. What GAO Recommends GAO recommends that FTA develop a communication plan that will effectively share information with state and local stakeholders on coordination opportunities in an accessible and informative way. FTA partially concurred with the recommendation. As discussed in the report, GAO continues to believe the recommendation is warranted and should be fully implemented.
Background Campus Climate Surveys on Sexual Violence Campus climate surveys on sexual violence are designed to collect information on the incidence and characteristics of sexual violence on college campuses as well as related student attitudes and behaviors. The topics covered by campus climate surveys can vary, depending on the questions included on the survey instrument. For example, these surveys may include questions about incidents of sexual violence, such as the number of incidents of sexual assault, intimate partner violence, or stalking, among other topics. There are two different methods that colleges can use to administer these surveys: In a census survey, all members of a group, such as the student body of a college, are surveyed. This type of survey can be used when the group that is the focus of the survey is small, when substantial resources and time are available to obtain enough responses to the survey, or when there is reason to provide all members of the group the opportunity to participate. In a sample survey, a portion of the group is selected using statistical methods to provide accurate information about the larger group. Administering a survey to a sample of students can reduce the time and resources needed to obtain enough survey responses to produce accurate data. Sample-based surveys are appropriate when it is not practical or desirable to survey every member of a group. With both sample and census-based surveys, collecting data that accurately represents the experiences of respondents requires taking a variety of steps when designing, administering, and analyzing the survey, such as weighting or analyzing the completed responses to ensure that they represent the larger group. A 2014 White House Task Force report recommended conducting these surveys as an initial step in a college’s plan to address campus sexual assault. The report also suggested follow-up actions for colleges to consider, such as providing training for college officials and creating partnerships with community sexual assault support services. In addition, there have been efforts to compare campus climate survey results across colleges. Some states have also enacted laws requiring colleges in their state to administer campus climate surveys. These state laws may vary in the nature of the survey requirements, such as the types of colleges covered by these requirements and whether a particular survey instrument must be used. For example, Louisiana requires public colleges in the state to administer these surveys, while New York requires all colleges located within the state to do so. Additionally, Louisiana requires that schools use a standard survey instrument developed by the state, while New York allows colleges to select their own survey instrument. However, there is currently no federal requirement for colleges to conduct campus climate surveys on sexual violence. Federal Efforts Related to Addressing Campus Sexual Violence Education, Justice, and HHS currently engage in a variety of efforts to address sexual violence on college campuses, including overseeing relevant federal laws and funding prevention and response activities. Education and Justice oversee colleges’ compliance with Title IX of the Education Amendments of 1972 (Title IX), which prohibits discrimination on the basis of sex in any education program or activity that receives federal financial assistance. 
Title IX prohibits sex discrimination—including sexual harassment and sexual violence—that effectively denies victims equal access to recipients' educational opportunities or benefits. Under Education's regulations, colleges receiving federal financial assistance from Education, such as those participating in federal student aid programs, must establish procedures for resolving Title IX complaints, and take steps to ensure that members of the college community are aware of their rights under Title IX. In addition, these colleges must designate at least one employee to coordinate their efforts to comply with and carry out their responsibilities under Title IX. According to Education guidance, the Title IX coordinator is responsible for coordinating the college's response to all complaints involving possible sex discrimination, including monitoring outcomes, identifying and addressing any patterns, and assessing effects on the campus climate. Education also oversees the Clery Act, which requires colleges that participate in student financial assistance programs under Title IV of the Higher Education Act, as amended, to collect statistics on certain crimes that occur on or near their campuses, including specified sex offenses, publish those statistics in an annual security report, and annually report them to Education. Colleges must also include a policy statement in their annual security reports describing their sexual violence prevention and awareness programs for students and employees. In addition, Justice and HHS have funded grants for campus sexual violence prevention and response efforts. HHS has also developed a technical assistance document for planning and implementing sexual violence prevention strategies on college campuses. Federal Data Sources Related to Campus Sexual Violence Education and Justice also manage key efforts to collect data related to campus sexual violence (see table 1). For example, Education oversees the Campus Safety and Security Survey, which collects information from colleges that participate in student financial aid programs on reported criminal incidents, including specified sex offenses, which occur on or near campuses that the colleges own or control, as required by the Clery Act. Colleges are required to include data on specified crimes that are reported to local police or campus security authorities and that occurred (1) on campus (including the subset of crimes that occurred in on-campus student housing facilities), (2) on public property within or immediately adjacent to campus, and (3) in or on non-campus buildings or property the college owns or controls. The survey collects data on the following offenses related to sexual violence: rape, fondling, incest, statutory rape, domestic violence, dating violence, and stalking. Education publishes the data on a public website. Justice collects data on crimes, including sex crimes, through the Bureau of Justice Statistics' National Crime Victimization Survey (NCVS). The NCVS captures data on a range of offenses related to sexual violence: completed rape, attempted rape, threatened rape, sexual assault other than rape or attempted rape, unwanted sexual contact with or without force (e.g., grabbing, fondling), verbal threat of sexual assault other than rape, and stalking. The NCVS collects data through in-person interviews and phone calls with a nationally representative sample of households on the frequency, characteristics, and consequences of criminal victimization in the United States.
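Victimization estimates from sample surveys like the NCVS are typically reported as rates per 1,000 persons with accompanying confidence intervals. The sketch below is a minimal illustration only, not the Bureau of Justice Statistics' actual estimation methodology, which must account for the NCVS's complex sample design and weighting; it shows how such a rate and a 95 percent confidence interval could be computed under simple-random-sampling assumptions, using hypothetical counts.

```python
import math

def rate_per_1000_with_ci(victims, respondents, z=1.96):
    """Estimate a victimization rate per 1,000 and a 95 percent
    confidence interval, assuming a simple random sample."""
    p = victims / respondents                  # proportion victimized
    se = math.sqrt(p * (1 - p) / respondents)  # standard error of p
    lower = max(0.0, p - z * se)
    upper = p + z * se
    return p * 1000, lower * 1000, upper * 1000

# Hypothetical counts: 61 victims among 10,000 respondents.
rate, low, high = rate_per_1000_with_ci(61, 10_000)
print(f"rate: {rate:.1f} per 1,000 (95% CI: {low:.1f} to {high:.1f})")
```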
In particular, the NCVS collects information about crimes reported and not reported to the police. Although the NCVS includes certain group residences, such as college residence halls, in its sample of households, the resulting data may not fully represent the sexual victimization experiences of college students residing on campus because the sample is primarily composed of households. Research has found that individuals living in group residences may be at higher risk of sexual violence. Research has also noted concerns with how the NCVS is administered. Specifically, interviews are conducted in person at respondents' homes or over the phone. As a result, victims may be less likely to honestly answer sensitive questions, such as those related to sexual violence, as their responses might be overheard by other members of their household or the offender. Stakeholders Said Campus Climate Surveys Provide Insights into Campus Sexual Violence, but Colleges Face Challenges Administering Them Stakeholders we interviewed, including survey developers, other researchers, and federal, state, and college officials, considered campus climate surveys a useful tool for learning more about the incidence of campus sexual violence and identifying areas for improvement to address it. However, stakeholders also noted that colleges face a variety of challenges with developing and conducting surveys, such as limited access to needed survey expertise and low response rates, which can affect the reliability of campus climate survey results. Campus Climate Surveys Can Provide More Information about Students' Experiences with Sexual Violence and Their Awareness of Related Policies, Procedures, and Services Surveys Can Provide More Comprehensive Information on Incidents of Sexual Violence Nearly all stakeholders said that campus climate surveys provide an opportunity to learn more about the incidents of sexual violence occurring on individual campuses, such as those that students may not have previously reported to campus authorities or law enforcement. According to Justice officials and one researcher, campus climate surveys, which collect data directly from victims, can help overcome limitations in law enforcement data that rely on victims reporting to authorities (see sidebar). For example, the three campus climate surveys we reviewed are designed to capture information on incidents of sexual violence that students have experienced regardless of whether the incidents were previously reported to campus authorities or law enforcement. Underreporting of Traditional Crime Statistics According to a National Research Council panel, rape and sexual assault are generally underreported to law enforcement, which can affect traditional crime statistics for these incidents. For example, according to a 2014 Department of Justice report, National Crime Victimization Survey (NCVS) data showed the rate of rape and sexual assault for female college students was 6.1 per 1,000 (the 95 percent confidence interval ranges from 5.0 to 7.2 per 1,000) for the period 1995–2013, and an estimated 80 percent of rape and sexual assault incidents went unreported to police (the 95 percent confidence interval ranges from 75 to 85 percent). College students responding to the NCVS who indicated they did not report incidents of rape and sexual assault to police cited a variety of reasons, such as considering the assault to be a personal matter, fear of reprisal, or not considering the victimization important enough to report.
Similarly, Clery Act data are based on reports made to campus security authorities and law enforcement. According to one federally funded pilot study, data from campus climate surveys at nine colleges suggested that the majority of rapes are not represented in a college's Clery numbers. In contrast, Clery Act data collected through Education's Campus Safety and Security Survey provide information only on incidents that are reported to campus security authorities or law enforcement and that occurred on or near campuses that the colleges own or control. This can result in campus climate surveys identifying a larger number of campus sexual violence incidents than federal Clery Act data. For instance, a pilot study of campus climate surveys at nine colleges found that undergraduate students attending these colleges experienced an estimated 2,380 incidents of rape during the 2014-2015 academic year, of which an estimated 770 occurred on campus. In contrast, Clery Act data documented 40 reported rape incidents for these colleges during the 2014 calendar year. Several stakeholders we spoke with also said that campus climate surveys can provide information on a broader range of sexual violence incidents than federal crime statistics data, such as the National Crime Victimization Survey and Clery Act data, which collect information specifically on criminal offenses. For example, one researcher we spoke with noted that campus climate surveys can collect information about sexual harassment, which is not included in federal Clery Act crime statistics. The three surveys we reviewed collect information on a range of sexual violence incidents, including sexual assault, coerced sexual contact, stalking, intimate partner violence, and sexual harassment. Behavioral Questions Campus climate surveys on sexual violence ask students about a variety of topics that are often sensitive. One challenge of conducting these surveys is that students' understanding of what behaviors are considered "sexual assault" or "rape" may differ, and the words used to describe sexual violence will determine what is measured by the survey, such as incidents of rape. Research has found, for example, that surveys asking directly whether students experienced specific types of sexual violence can produce inaccurate data. To improve the quality of information collected by campus climate surveys, researchers ask students about specific behaviors and events that describe the incident rather than referring to it using a label such as "sexual assault" or "rape." The three campus climate survey instruments we reviewed included questions that asked for additional context on incidents of sexual violence reported by students, such as the victim's relationship to the perpetrator. Researchers we interviewed also noted that campus climate surveys can include behaviorally specific questions to identify conduct that survey respondents might not categorize as sexual violence (see sidebar). Each of the three surveys we reviewed used behaviorally specific questions to describe behaviors that may constitute sexual violence for survey respondents, without using specific terms, such as rape.
For example: One survey we reviewed asks, "Since the beginning of the current academic year, has an intimate partner threatened to hurt you and you thought you might really get hurt?" instead of asking whether the respondent has experienced "intimate partner violence." Another survey we reviewed asks, "How many times have one or more people left you unwanted messages (including text or voice messages)?" instead of asking if the respondent has experienced "stalking." Seven of the nine researchers we spoke with considered behavioral questions to be a best practice for collecting data on sexual violence, including one who noted the general public may not be aware of the definitions of rape or other types of unwanted sexual contact or behaviors. However, one researcher we spoke with expressed concern that the wording of behavioral questions can be imprecise. Surveys May Help Colleges Identify Specific Areas for Improvement Each of the three campus climate surveys we reviewed included questions regarding student knowledge of the administering college's policies and resources related to preventing and responding to sexual violence on campus. According to nearly all of the stakeholders we interviewed, these data can help colleges identify areas for improvement. In particular, about half of these stakeholders noted that campus climate survey results can help colleges address barriers to reporting. For example, officials from one college we spoke with reported that, based on gaps in awareness identified through survey results, they increased their efforts to educate students about where to go if they experienced sexual assault. Further, information from campus climate surveys also helped one state identify how it could better assist colleges, such as by providing training on intimate partner violence, according to a state official. Several stakeholders reported that campus climate surveys may also help colleges assess their performance on reducing sexual violence. For example, two researchers said that these surveys can help colleges see where improvements were made and where additional action might be needed. Another researcher we spoke with noted that colleges are very interested in using campus climate surveys to establish baseline data and are beginning to understand the usefulness of having data on sexual violence prevalence. Colleges Face Challenges Administering Surveys and Analyzing Results While campus climate surveys can provide additional information on campus sexual violence, stakeholders reported that colleges face a variety of challenges with developing and conducting surveys, as well as analyzing the results. Surveys Can Be Costly and Require Technical Expertise to Administer Although some survey instruments are free, about half of the stakeholders we interviewed said that some institutions, particularly smaller colleges, may not have the resources to effectively administer surveys on their own. Stakeholders cited costs associated with hiring contractors or relying on faculty and staff to administer and analyze the results of a survey. For example, one researcher we spoke with said that administering a survey can require having people available to respond to student questions about the survey. Officials from one college we spoke with said they relied on faculty volunteers to analyze survey results over a school break, due to a limited survey budget.
About half of the stakeholders also noted that providing incentives to students can help increase survey response rates, yet incentives can also be the most expensive part of a college's survey budget. For example, one college reported that the $10 incentives offered to students who completed the survey constituted the college's largest survey expense. Given the potentially high costs of these surveys, officials in one state we spoke with reported that the state provided funding to help its colleges administer surveys, analyze results, prepare reports, and translate the survey results into action. About half of the stakeholders also reported that some colleges may not have technical expertise readily available to conduct a campus climate survey on sexual violence. As previously noted, colleges can administer campus climate surveys to a sample of students (sample approach) or all students at a college (census approach). According to federal guidance, a sample approach can reduce the amount of follow-up needed to encourage survey completion; however, expertise is needed to create a sampling frame that includes all, or nearly all, of a target population, and then to select a sample of that population that accurately represents the target population. Several stakeholders said colleges may face challenges in creating a representative sample of their students, in particular. For example: One researcher noted that colleges may not collect sufficient demographic data or have adequate funding to create a representative sample of their students. Another researcher observed that for both sample and census surveys, colleges may also lack the expertise to ensure, through statistical methods such as non-response bias analysis or weighting responses, that respondents are representative of the student body. Justice officials noted that properly administering campus climate surveys requires personnel with adequate statistical expertise, as well as support from college administration. Research shows that statistical methods like testing for non-response bias and weighting responses are an important consideration when developing prevalence estimates, since non-response bias can potentially limit the extent to which the results can be generalized to the entire student population. One college that had not conducted a campus climate survey also noted that doing so would be a challenge due to limited expertise with conducting surveys on sensitive topics, such as sexual violence. However, two of the selected colleges that conducted campus climate surveys reported working with a third party to ensure more reliable results. Surveys May Not Yield Reliable Results Due to Low Response Rates Response rates are a key factor in producing reliable survey results, and most stakeholders reported that obtaining a sufficient number of responses from students can be a challenge. Achieving a sufficient response rate can help ensure that the survey results are representative of the target population, so that the results can be used with confidence to inform decisions. However, our prior work on federal sexual violence data found surveys are subject to variable response rates over time, and different surveys may have different response rates, which may affect the resulting estimates and the validity of the data. The seven selected colleges that conducted surveys reported response rates ranging from less than 10 percent to more than 60 percent.
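To make the statistical steps discussed above more concrete, the sketch below shows, under simplifying assumptions, how a college might compute an overall response rate and apply a basic weighting-class adjustment so that respondents better reflect the makeup of the student body. The sampling frame, class-year groups, and counts are hypothetical; actual survey efforts typically involve richer frames and more formal non-response bias analyses.

```python
from collections import Counter

# Hypothetical sampling frame (all students invited) and survey
# respondents, grouped by class year.
frame = Counter({"first-year": 2500, "sophomore": 2400,
                 "junior": 2300, "senior": 2200})
respondents = Counter({"first-year": 400, "sophomore": 350,
                       "junior": 300, "senior": 450})

# Overall response rate: completed surveys divided by students invited.
response_rate = sum(respondents.values()) / sum(frame.values())
print(f"overall response rate: {response_rate:.1%}")  # 16.0%

# Weighting-class adjustment: each respondent in a class year stands in
# for frame_count / respondent_count students in that class year, so
# under-represented groups count more in weighted estimates.
weights = {group: frame[group] / respondents[group] for group in frame}
for group, weight in weights.items():
    print(f"{group}: weight = {weight:.2f}")
```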
Additionally, officials we interviewed in two of the selected states reported that their survey response rates were not high enough to generalize or draw meaningful conclusions regarding campus sexual violence, as originally intended. Officials in these states said they primarily included the data in required state reports, with limitations noted as needed. Most stakeholders noted that survey design or administration factors can affect response rates. About half of the stakeholders noted that keeping the survey short is critical to ensuring more students complete it, but some topics of interest to the colleges may not be covered as a result. For example, at one college, officials included survey questions about sexual assault and sexual harassment, but did not pursue questions about stalking due to concerns about survey length. One researcher also told us that long and complicated surveys may not work well on smart phones, which is how many students take these surveys. As for survey administration, most stakeholders noted that choices on how to administer the survey can also affect the response rates for surveys. For example, two researchers we spoke with said that it is better to leave surveys open to respondents for a longer time period to increase the response rate. In addition, a researcher and one state official noted that technical issues can interfere with obtaining a high response rate, such as sending survey invitations to university email accounts that students may not check regularly. Survey Results May Not Be Comparable Across Colleges About half of the stakeholders stated that differences in survey instruments and methodology may make it difficult for colleges to compare their results with the results of other colleges. Variation in questions and definitions. The surveys we reviewed varied in the wording used to ask respondents about their knowledge of institutional policies for reporting sexual violence. According to one researcher, differences in the wording of questions and structure of questionnaires can affect comparability across surveys. Officials in one state also reported that colleges used different definitions of key terms on their campus climate surveys, which made it challenging to reach general conclusions across colleges. Similarly, another researcher stated that differences in the definitions of terms used in colleges’ campus climate surveys make accurate comparisons difficult. Variation in time periods. The surveys we reviewed ask respondents about incidents of sexual violence occurring over different time periods, which may also limit the comparability of survey results across colleges. According to one researcher we spoke with, colleges using different survey instruments should not compare prevalence estimates with results from other surveys that ask about incidents of sexual violence for different time periods. For example, one survey instrument we reviewed asks students about their experiences with sexual violence during the current academic year. In contrast, the two other survey instruments we reviewed ask students about their experiences with sexual violence since first enrolling at college, which covers a longer time period for seniors than first-year students. Time periods may also affect the accuracy of the data collected. 
One researcher we spoke with, for example, stated that survey questions that cover longer time periods can introduce bias, such as the telescoping effect, whereby respondents recall certain events as being more recent than they actually are. Additionally, longer time periods may yield larger numbers of incidents, since more individuals may experience the behavior over time. To address these comparability challenges, some colleges have used the same survey instruments as other colleges. For example, two colleges included in our review participated in a survey effort among multiple colleges that was designed to allow for comparisons across participating schools. To make these comparisons, a third party administered the survey at participating colleges using a survey instrument with standardized questions and a standardized methodology to enable the measurement of prevalence, and then analyzed the results. In summary, while all stakeholders noted the value of conducting campus climate surveys, about half of them generally cautioned against requiring colleges to administer them in light of the associated challenges previously discussed. Officials at one college that voluntarily conducted a campus climate survey using a one-time grant stated they would have to use funds from faculty and staff salaries if they were required to conduct a survey in the future. Additionally, an official from one college that had not conducted a campus climate survey noted that high turnover in the Title IX coordinator position would make it difficult for the college to sustain a survey effort over time. Further, another college that has not conducted a campus climate survey to examine the incidence of sexual violence noted it would be difficult to design a standard survey instrument that would apply across all colleges, such as those that primarily serve students who take courses online. Selected Colleges Have Used Various Survey Design, Administration, and Outreach Strategies to Increase Understanding of Campus Sexual Violence The seven selected colleges that conducted campus climate surveys used various survey design, administration, and outreach strategies to learn about the incidence of campus sexual violence. Most of these colleges also chose to publicly report some survey results. Survey Design Choosing a survey instrument. These seven colleges considered several factors when choosing a survey instrument: Rigor. Officials from each of the seven colleges that conducted a campus climate survey said it was important to use a rigorous survey instrument, such as one that survey developers have validated or colleges have widely adopted. One college official explained that using a validated instrument provided assurances that helped secure a timely approval from the college’s institutional review board. Flexibility. Officials from five colleges said they valued the flexibility of using a survey instrument that could be modified based on the specific characteristics and needs of their colleges. For example, officials from one college said that the chosen instrument enabled administrators to use gender-inclusive language and ask questions about incidents of sexual violence from the perspective of the perpetrator in addition to the victim. Comparability. Officials from four colleges noted that comparability was a consideration when selecting a survey instrument, including the potential to compare survey results across colleges that share similar characteristics or at their own colleges over time. 
However, as previously discussed, stakeholders noted that differences across survey instruments can limit the comparability of survey results.

Cost. Officials from four colleges said the cost of conducting campus climate surveys informed their selection of a survey instrument. For example, officials at one college said they used a free, publicly available survey instrument because the college lacked the resources to pay for an instrument.

Length. Officials from four colleges identified survey length as another factor they considered. Officials from three of these colleges specifically noted that longer surveys may result in lower response rates. In addition, officials from one of these colleges stated that because longer surveys collect more data, the college would need more time to analyze the results. An official from another college expressed concern that longer surveys with multiple follow-up questions about incidents of sexual violence risk re-traumatizing victims.

Modifying the survey instrument. Six of the seven colleges modified their survey instruments to some extent. Officials at five of the six colleges reported adding questions to their survey instruments. For example, two colleges reported adding questions to comply with a state survey requirement, while another college reported adding follow-up questions to collect information on events prior to an incident of sexual violence. Officials at three of the five colleges reported limiting the number of questions they added to keep the survey short. Officials at two of these colleges noted that lengthening the survey could result in fewer students completing it. Officials at one of these colleges cited additional fees that the vendor charged for such modifications as another factor in their decision to limit the number of questions they added. Officials from two colleges also said they modified the language in the survey instruments to reflect the names of specific offices and programs on their campuses. In contrast, officials at one college reported making no changes to their survey instrument because they planned to use the original survey as a baseline against which to compare future survey results.

Survey Administration

Identifying the survey population. Six of the seven selected colleges that conducted a campus climate survey distributed their surveys via email to all students in the target population (i.e., a census approach), and one worked with a third party to select a representative sample of students to receive the survey. As previously discussed, surveying a sample of students can reduce the amount of follow-up work needed to obtain sufficient responses to provide information about the student body as a whole. However, officials at four of the seven colleges cited other considerations for choosing a census approach. Specifically, officials from three of these colleges said that a census approach provided every student the opportunity to share their experiences and perspectives through the climate survey. Officials from two of these colleges further explained that administering the survey to a sample of students could give the appearance they were excluding students, some of whom might be victims of sexual violence, from participating in the survey. Another college reported using a census approach because it lacked the resources needed to develop a representative survey sample.
Determining survey timing and frequency. All seven of the selected colleges that conducted a climate survey administered at least one survey during the spring semester. Officials from three of these colleges said that administering climate surveys in the spring ensures that first-year students have spent time on campus prior to taking the survey. However, officials at four colleges said that competing demands for students’ time, such as other surveys and final exams, are a tradeoff to administering these surveys in the spring. As a result, students may experience “survey fatigue”—that is, they may be less likely to respond to or complete the survey. The seven selected colleges administered surveys with varying frequencies. For example, one college reported administering its survey biennially in accordance with a state requirement, while two others administered their surveys less frequently (e.g., every 4 years) to avoid survey fatigue and low response rates.

Protecting confidentiality. Six of the colleges reported taking steps to preserve the confidentiality of survey respondents. For example, officials from five colleges explained that in order to maintain respondents’ confidentiality they had to redirect students who completed the survey to a separate webpage to claim their incentive or enter a drawing. Officials from four colleges reported using a third-party vendor to help protect students’ confidentiality or, at a minimum, signal that the college had no direct role in collecting or storing student responses. For example, to protect students’ confidentiality, officials from three of these colleges said their vendors provided summary data, rather than student-level data, and did not report results with a low number of respondents. Officials from three of the six colleges reported consulting their institutional review boards to help ensure that the colleges protected respondents’ confidentiality. Officials from another college reported limiting how often they administered campus climate surveys to head off potential student concerns that they were being “tracked” during their time on campus.

Survey Outreach

Offering survey incentives. As part of their outreach efforts, six of the seven colleges offered incentives to students who completed surveys, which some research suggests can increase web-based survey participation rates (see fig. 1). For example, one college offered a $20 gift card to survey respondents, which college administrators considered critical to achieving a higher response rate. This comports with a study funded by Justice that found incentives between $20 and $30 appear to help maximize survey participation, whereas a $40 incentive does not clearly offer any additional advantage. To manage the cost of incentives, two colleges offered a limited number of incentives to students via lottery drawings. Officials from one of these colleges said it funded its lottery for five $200 gift cards with proceeds from an on-campus student event. Another college offered a coupon for a free drink at a campus coffee shop to the first 300 survey respondents. An official from the college that did not offer incentives in its most recent climate survey said that incentives would help improve response rates for future surveys. While incentives can help increase survey participation, two colleges noted that offering incentives may require additional precautions to prevent abuse.
For example, one college had to put its survey on hold to fix a technical error that enabled a student to collect additional incentives by completing the survey multiple times. Another college with experience offering survey incentives reported that it received calls and emails from other colleges requesting assistance with preventing such abuses.

Marketing the survey. Each of the seven selected colleges that conducted a climate survey used email to invite students to respond to the survey and various marketing efforts to encourage survey participation (see fig. 1). Officials from five of the colleges reported following up with email reminders. For example, one college reported adding the incentive dollar amount to the subject lines of these reminders, and another reported varying the gender of the officials who sent follow-up emails, as well as the timing of those emails, to increase student responses. In addition, five colleges reported using social media to advertise their climate surveys. Recognizing the importance of gaining institutional buy-in, officials at all seven colleges said they engaged college administrators or faculty in their marketing efforts. For example, officials at one college said that deans of its various schools were asked to send emails encouraging students to take the survey. The officials credited this particular email strategy for doubling the college’s survey response rate. Officials from five colleges also reported involving student leaders and influencers in their marketing efforts, such as creating t-shirts for students to wear that included information about the survey; having students publish an op-ed in the campus newspaper promoting the survey; and asking student leaders to share information about the survey with student organizations.

Survey Reporting

Six of the seven selected colleges publicly reported at least some of the results of their surveys. Five colleges, for example, published survey results on their respective websites, and another created a campus poster with an infographic illustrating key survey results. Two of these colleges also presented the results during meetings with different student populations, such as fraternities and sororities and lesbian, gay, bisexual, transgender, and queer/questioning students. Officials from four colleges expressed that they felt a responsibility to be transparent. However, according to an official at one college, a potential drawback to making survey results publicly available is that the results could create or reinforce negative perceptions of a college’s climate regarding campus sexual violence. Finally, officials from the one college that had not publicly disclosed any survey results explained that, due to a lack of resources and in-house expertise, they did not feel sufficiently confident in their analysis of the survey results to publish them.

Federal Agencies Have Provided Information to Colleges about Developing and Implementing Campus Climate Surveys, Among Other Efforts

Since the issuance of the White House Task Force to Protect Students from Sexual Assault report in 2014, federal agencies have created and disseminated informational resources for colleges interested in conducting campus climate surveys.
For example, Justice’s Bureau of Justice Statistics and Office on Violence Against Women funded the development of a publicly available survey instrument and a validation study from 2014 to 2016, to provide colleges and researchers with access to a free and reliable survey instrument to collect school-level data on campus climate and sexual victimization. In 2017, Justice also collaborated with HHS’s Centers for Disease Control and Prevention to provide funding and project planning assistance for a pilot study to develop and test a campus climate survey for use at two Historically Black Colleges and Universities. According to Justice and HHS officials, this survey instrument was based on the validated Justice survey instrument, with some modifications made to the campus climate questions. In October 2019, Justice officials told us the agency had decided not to proceed with funding for the study due to concerns that modifications to the original validated survey instrument would result in data that are not comparable to data from the validation study.

In addition, Justice has developed technical assistance materials for colleges interested in conducting a campus climate survey. For example, from 2016 to 2017 Justice’s Office on Violence Against Women issued documents outlining lessons learned from the Justice survey validation study, talking points to help college administrators and students communicate about climate surveys, and a frequently asked questions sheet on campus climate surveys. These documents covered a range of topics, including the goals of a campus climate survey, best practices for developing survey content, and tips for choosing survey participants and protecting their confidentiality, among others. Justice’s campus climate survey, validation study, and technical assistance documents are publicly available on Justice’s website.

Justice’s campus climate survey and validation study are also available through the Center for Changing Our Campus Culture, an online clearinghouse developed and maintained by a nonprofit organization with funding from Justice’s Office on Violence Against Women. The clearinghouse provides resources for colleges on addressing sexual assault, domestic violence, dating violence, and stalking. For example, the clearinghouse includes documents outlining (1) selected research initiatives and resources on campus climate surveys, (2) suggested campus sexual assault policies and procedures, and (3) steps college institutional review boards and administrators can take to oversee research on sexual violence while maintaining participant confidentiality.

Most stakeholders we interviewed were aware of federal information and resources available to assist colleges in conducting campus climate surveys. For example, officials at two of the colleges reported using Justice’s survey instrument for their campus climate surveys, with officials from one college noting they selected the instrument because it had been validated as a reliable instrument. An official from another college reported using Justice’s validation study during the survey instrument selection process, to better understand the strengths and weaknesses of survey instruments and potential sources of bias in the data collected.

In addition to the resources provided by Justice, Education has offered information to colleges regarding the prevention of campus sexual violence.
For example, Education’s 2015 Title IX Resource Guide encouraged Title IX coordinators to help colleges develop a method, appropriate to their college, for surveying students about the campus climate. Additionally, to address Title IX concerns or complaints, Education may enter into voluntary resolution agreements with colleges. These agreements describe the changes colleges agree to make to ensure their procedures for preventing and responding to sex discrimination comply with the law. According to agency officials, Education may include campus climate surveys as part of these voluntary agreements, on a case-by-case basis.

Justice and HHS have also funded campus sexual assault prevention and response grants. For example, Justice’s Office on Violence Against Women provides grant funding to colleges to help improve responses to sexual assault and other types of domestic and sexual violence through its Grants to Reduce Sexual Assault, Domestic Violence, Dating Violence, and Stalking on Campus Program. According to a Justice official, colleges receiving these grants are allowed, with prior approval, to use a small percentage of the grant funds to conduct campus climate surveys for program improvement purposes, but it is not a requirement of the program.

Additionally, HHS’s Office on Women’s Health provided funding for the College Sexual Assault Policy and Prevention Initiative from 2016 to 2019 to organizations that partnered with colleges to provide technical assistance and support in developing sexual assault policies and prevention strategies. According to HHS officials, grantees were encouraged to conduct campus climate surveys to establish baseline data for their partner campuses. HHS officials also reported providing grantees with information on different campus climate survey instrument options, including a free, publicly available survey instrument. One college we spoke with reported partnering with one of these HHS grantees to conduct baseline and follow-up campus climate surveys and to develop comprehensive campus prevention strategies. For example, officials from the college and the grantee told us they used funds from the grant to help the college establish memoranda of understanding with community-based organizations, such as the local women’s crisis center, to support students living off-campus who may have experienced sexual violence.

Agency Comments

We provided a draft of this report to the Departments of Education and Justice for review and comment. The Departments of Education and Justice provided technical comments, which we incorporated as appropriate. We also provided relevant report sections to the Department of Health and Human Services, and to third parties, including survey developers and states included in our review, for technical comments. The Department of Health and Human Services, survey developers, and state officials provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, the Attorney General, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) what key stakeholders view as the strengths and limitations of using campus climate surveys to examine the incidence and characteristics of sexual violence on college campuses, (2) what approaches selected colleges have used to survey their students about the incidence of sexual violence on campus, and (3) what role federal agencies play in helping colleges develop and implement climate surveys.

To inform our examination of stakeholders’ views on the strengths and limitations of campus climate surveys, we reviewed three commonly used survey instruments that included questions regarding the incidence of sexual violence, including sexual assaults, coerced sexual contact, stalking, and intimate partner violence. Each of these survey instruments is also available, online or by request, for any college to use for free. For each of the surveys, we reviewed survey questions and methodological reports, and conducted interviews with representatives from the organizations involved in developing them.

The Association of American Universities (AAU), an association of 65 research universities, developed its survey instrument in conjunction with the research firm Westat. AAU administered its survey to participating colleges in spring 2015 and spring 2019.

The Administrator Research Campus Climate Collaborative (ARC3), an organization of sexual assault researchers, university administrators, and student and legal affairs professionals, developed and tested its campus climate survey from 2014 to 2015. The final survey instrument was made available to colleges in 2015. According to the survey developers, there is no comprehensive list of schools that have conducted the ARC3 survey.

The Department of Justice (Justice) survey instrument was initially developed by the White House Task Force to Protect Students from Sexual Assault in 2014, and later refined and tested by Justice in collaboration with RTI International, a research organization. The survey instrument, also known as the Campus Climate Survey Validation Study, is publicly available online. According to Justice officials, there is no comprehensive list of schools that have conducted the Justice survey.

Additionally, we reviewed two key federal data sources on campus sexual violence: Clery Act data from Education’s Campus Safety and Security Survey and the National Crime Victimization Survey (NCVS) from Justice. We identified these data sources based on a review of prior GAO work and interviews with Education and Justice officials. We examined documentation for these data sources and interviewed the responsible agency officials to determine the type of data they collect on campus sexual violence, the methods for collecting this information, and their limitations. We determined these data sources were sufficiently reliable for our purposes.

To inform all three objectives, we also interviewed a total of 25 stakeholders with relevant expertise, including representatives of four organizations involved in developing the three surveys we reviewed and five additional researchers who have studied campus sexual violence; officials from 10 colleges; officials from four states; and federal officials from Education and Justice.
We refer to the representatives of these organizations and entities collectively as “stakeholders” in our report. When discussing stakeholder views, we group them into the following categories: “several” (between four and nine), “about half” (between 10 and 14), “most” (between 15 and 19), and “nearly all” (20 or greater). In instances where we report on the views of specific groups, such as colleges, researchers, or state officials, we refer to the individual group and enumerate the number of group members.

During these interviews, we gathered information on issues related to designing and conducting campus climate surveys and analyzing and communicating survey results. We also discussed federal information and resources available to help colleges develop and implement campus climate surveys. Findings from our interviews summarize selected stakeholders’ views regarding campus climate surveys on sexual violence. These findings do not represent the views of all researchers on these topics and do not represent the experiences of all colleges developing or implementing these surveys.

To identify researchers with a variety of perspectives, we reviewed research on sexual violence and conducted targeted web searches. We then selected individuals or organizations with experience conducting research on campus sexual violence or developing and administering a campus climate survey on sexual violence. We also spoke with representatives of the organizations responsible for developing the three climate survey instruments we reviewed.

We used multiple approaches to identify the 10 selected colleges included in our review since there is no central repository of information on whether colleges have conducted a campus climate survey on sexual violence.

Colleges that conducted a campus climate survey. Based on targeted web searches, we identified colleges that had conducted a campus climate survey and then grouped them according to which of the three survey instruments they used. We analyzed data from the Department of Education’s Integrated Postsecondary Education Data System to identify the characteristics of these colleges, including sector (i.e., public, private not-for-profit, and private for-profit), program length (i.e., 2-year and 4-year), size, and geographic location.

Colleges that had not conducted a campus climate survey. We also used data from the Integrated Postsecondary Education Data System to help identify colleges that had not conducted campus climate surveys on sexual violence. Specifically, we grouped colleges into categories by sector and program length and randomized the lists within each category. To select specific colleges, we started with the college in each category at the top of the randomized list and conducted targeted web searches in an effort to ensure the college had not publicly reported conducting a campus climate survey.

We conducted outreach to the Title IX coordinators at each of the selected colleges via email or telephone to confirm whether or not the college had conducted a campus climate survey on sexual violence. In total, we selected 10 colleges, including seven that have conducted campus climate surveys that examine the incidence of sexual violence on their campuses and three that have not.
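To illustrate the randomized, category-based screening described above, the following minimal sketch groups a hypothetical set of colleges by sector and program length, shuffles each category’s list, and starts screening from the top of each list. The college names, categories, and seed are invented for illustration and are not drawn from GAO’s actual selection records.

```python
import random

# Hypothetical colleges; the real analysis drew characteristics from the
# Integrated Postsecondary Education Data System.
colleges = [
    {"name": "College A", "sector": "public", "length": "4-year"},
    {"name": "College B", "sector": "public", "length": "2-year"},
    {"name": "College C", "sector": "private not-for-profit", "length": "4-year"},
    {"name": "College D", "sector": "public", "length": "4-year"},
]

random.seed(0)  # fixed seed only so the sketch is reproducible

# Group colleges into categories by sector and program length.
categories = {}
for college in colleges:
    categories.setdefault((college["sector"], college["length"]), []).append(college)

# Randomize the list within each category; screening (web searches,
# Title IX coordinator outreach) then proceeds from the top of each list,
# moving down if a college does not meet the selection criteria.
for group in categories.values():
    random.shuffle(group)

for (sector, length), group in categories.items():
    print(f"{sector}, {length}: start screening with {group[0]['name']}")
```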
We selected these colleges to ensure variation in size, sector (i.e., public, private not-for-profit, and private for-profit), program length (i.e., 2-year and 4-year), geographic location, survey instrument used, and whether the college was located in a state that as of January 1, 2017, had a statutory requirement in effect for at least some colleges in their state to conduct a campus climate survey (see table 2 for selected colleges by program length, sector, and use of campus climate survey). We interviewed Title IX coordinators and other knowledgeable officials regarding the selected colleges’ experiences with conducting campus climate surveys and their perspectives on the strengths and limitations of these surveys.

As part of our efforts to obtain a variety of perspectives, we also conducted semi-structured interviews with officials from four states regarding the use of campus climate surveys in their states. We selected three states (Louisiana, New York, and Washington) that as of January 1, 2017, had a statutory requirement in effect for at least some colleges in their state to conduct a campus climate survey, and one state (Ohio) that recommended colleges conduct such surveys. To identify states that required or recommended that colleges conduct campus climate surveys, we used several approaches to develop a preliminary list, including consulting with researchers, reviewing annual reports from 2014 to 2019 prepared by the National Conference of State Legislatures on state higher education legislation, and conducting targeted web searches. Based on these reviews, we judgmentally selected four states to ensure a diversity of state experiences with requiring or recommending campus climate surveys. We also confirmed applicable state requirements with state officials. The selected states differed in the nature of the survey requirement or recommendation, such as the types of colleges covered (e.g., public or public and private) and how frequently the survey was required or recommended to be administered.

To supplement information gathered from our interviews, we also identified and reviewed studies and reports that examined the design and use of campus climate surveys. We conducted a targeted search of various databases to identify studies on leading survey practices. We selected studies for additional review based on their relevance to our objectives, and, using a standard review instrument, assessed the quality and rigor of each study’s findings and methods. Our report includes information about leading survey design and implementation practices from those studies we found were appropriate through this review process.

To examine the role that federal agencies play in helping colleges develop and implement climate surveys, we reviewed Justice and Education resources available to help colleges conduct campus climate surveys. Additionally, we reviewed information on campus sexual violence prevention grants provided by the Department of Health and Human Services. We also reviewed relevant federal laws and regulations, as well as federal guidance and other documentation pertaining to campus sexual violence and campus climate surveys.

We conducted this performance audit from July 2018 to April 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Debra Prescott (Assistant Director), Maria Gadel (Analyst-in-Charge), Jonathan Adams, Will Colvin, Caitlin Cusati, Kirsten Lauber, and Erica Vilay made key contributions to this report. Additional assistance was provided by MacKenzie Cooper, Sarah Cornetto, Holly Dye, Monika Gomez, Dana Hopings, Connor Kincaid, Sheila R. McCoy, Mimi Nguyen, and Almeta Spencer.
Why GAO Did This Study

Sexual violence—which can include crimes such as rape and other forms of sexual coercion—is widely acknowledged as a problem on college campuses. Although Education collects some data on sexual violence at colleges that receive federal funding, measuring the prevalence of campus sexual violence has proven difficult, due in part to underreporting of these incidents to law enforcement. While some researchers have used surveys to gather additional information regarding sexual violence on college campuses, estimates from these surveys can vary widely due to factors such as differing methodologies and response rates.

This report examines (1) key stakeholders’ views on the strengths and limitations of campus climate surveys on sexual violence, (2) approaches selected colleges have taken to survey their students, and (3) the role federal agencies play in helping colleges develop and implement these surveys. GAO reviewed documentation for three widely administered survey instruments, and relevant federal laws, regulations, and guidance. GAO interviewed 25 stakeholders, including researchers; Education and Justice officials; officials in four states that required or recommended campus climate surveys as of January 1, 2017, a date selected to allow time for implementation; and 10 colleges—including seven that conducted campus climate surveys—selected based on program length (2- or 4-year), geographic diversity, and other factors.

What GAO Found

Campus climate surveys that examine sexual violence occurring on individual college campuses have several strengths and limitations, according to stakeholders GAO interviewed.

Strengths. Nearly all stakeholders said colleges can use these surveys to gather more comprehensive information about incidents of campus sexual violence, such as those not previously reported to the colleges or law enforcement. Surveys can also provide information on students’ knowledge of the colleges’ procedures for reporting incidents, among other topics, which can help colleges identify areas for improvement.

Limitations. Most stakeholders said getting students to respond can be challenging. In addition, about half of stakeholders said some colleges may not have the resources to effectively administer these surveys, and results across colleges that use different surveys may not be comparable.

The seven selected colleges that conducted surveys reported using various approaches to survey their students about the incidence of campus sexual violence. Each college used one of three widely used surveys, but six modified them to some extent. Six colleges sent the survey to all undergraduates, and one surveyed a representative sample of students. Colleges also reported using multiple outreach strategies to increase participation, including offering incentives, such as gift cards, to students who completed the survey; using social media; and involving student leaders (see figure). Colleges’ reported response rates ranged from less than 10 percent to more than 60 percent.

The Departments of Justice (Justice), Education (Education), and Health and Human Services (HHS) have created and disseminated informational resources for colleges interested in conducting campus climate surveys. For example, from 2014 to 2017, Justice made funding available for the development of a campus climate survey instrument for public use, and developed technical assistance materials covering various topics, including how to choose survey respondents and protect their confidentiality.
In addition, in 2015, Education issued guidance encouraging colleges to develop ways to survey students about the campus climate. Justice and HHS have also funded grant programs that allowed grantees to use some funding to conduct campus climate surveys.
Background

DOD’s policy is to ensure that eligible personnel and their families have access to affordable, quality housing facilities and services consistent with grade and dependent status, and that the housing generally reflects contemporary community living standards. From the inception of MHPI, the military departments were provided with various authorities to obtain private-sector financing and management to repair, renovate, construct, and operate military housing in the United States and its territories. Through these authorities, the military departments have entered into a series of agreements with private partners to provide housing to servicemembers and their families. The military departments have flexibility in how they structure their privatized housing projects, but typically the military departments lease land to private developers for 50-year terms and convey existing housing located on the leased land to the developer for the duration of the lease. The developer then becomes responsible for renovating and constructing new housing and for the daily management of these housing units. At the end of fiscal year 2017, 14 private partners were responsible for 79 privatized military family housing projects—34 for the Army, 32 for the Air Force, and 13 for the Navy and Marine Corps—in the United States, each of which includes housing at one or more military installations.

The Deputy Assistant Secretary of Defense for Facilities Management, under the authority, direction, and control of the Assistant Secretary of Defense for Sustainment, is responsible for all matters related to MHPI and is the program manager for all DOD housing, whether DOD-owned, DOD-leased, or privatized. In this capacity, the Deputy Assistant Secretary is to provide both guidance and general procedures related to military housing privatization, as well as required annual reports to Congress on the status of privatized military housing projects. However, it is the responsibility of the military departments to execute and manage privatized housing projects, including conducting financial management and monitoring their portfolio of projects. Each military department has issued guidance that outlines its responsibilities for privatized housing, such as which offices are responsible for overseeing privatized housing projects.

We have previously reported on DOD’s privatized housing program. Table 1 provides a summary of key findings and recommendations from our prior reports and the implementation status of the recommendations.

DOD Conducts Some Oversight of the Condition of Privatized Housing, but Efforts Are Limited in Key Areas

Each military department conducts a range of oversight activities—some more extensive than others—for its privatized housing projects. For example, among other things, military departments review sample work order requests and inspect housing during the change of occupancy process. DOD guidance requires that the military departments ensure eligible personnel have access to quality housing facilities and services that generally reflect contemporary living standards. Further, DOD’s housing manual states that because privatization creates a long-term governmental interest in privatized housing, it is essential that projects be attentively monitored. Through its guidance, DOD delegates oversight responsibility of the individual privatized housing projects to each of the military departments.
In addition, according to documents we reviewed, individual project business agreements set guidelines that convey the management, operation, and maintenance duties to the private partner, with the caveat that the military departments still have the right to access the premises or private partner records to ensure compliance with applicable laws.

We determined that OSD and the military departments’ oversight has been limited in key areas. Specifically, our ongoing review showed (1) the scope of oversight of the physical condition of privatized housing has been limited; (2) performance metrics focused on quality of maintenance and resident satisfaction may not provide meaningful information on the condition of privatized housing; (3) there is a lack of reliable or consistent data on the condition of privatized housing; and (4) past DOD reports to Congress on resident satisfaction are unreliable due to inconsistent handling and calculation of the data, and therefore may be misleading.

Military Departments Conduct Some Oversight of the Physical Condition of Privatized Housing, but Scope of Efforts Is Limited

DOD delegates oversight responsibilities of the individual privatized housing projects to each of the military departments, and each military department has subsequently issued guidance outlining oversight roles and responsibilities. Military department oversight activities generally fall into two categories—(1) daily oversight of management and operations and (2) reviews of compliance with each project’s business agreements.

Daily oversight of management and operations. Daily oversight of a project’s management and operations is to be conducted by each installation’s military housing office. Military housing officials told us activities to monitor the physical condition of housing units generally include reviewing sample work order requests, following up with a sample of residents to check on their experience with recently completed work, and inspecting housing units during the change of occupancy process. Based on our preliminary observations, the implementation and scope of these activities vary and can be limited. For example, during our site visits conducted from June through August 2019, we identified the following installation-specific practices:

Military housing office officials at one Army installation told us that they inspect 100 percent of housing units that have completed change of occupancy maintenance. In contrast, officials from an Air Force installation told us that they inspect 10 to 20 percent of housing units that have completed change of occupancy maintenance.

Military housing officials at one Marine Corps installation told us that for one of the two partners that own housing on the base, they had access to only 3 percent of completed work order tickets from the previous month, as reported to them by the private partner.

Officials from a Navy installation told us that they had access to the private partner’s maintenance record system and would pull reports on new resident housing occupants who had made 6 or more maintenance calls in a 30-day period.

Military housing officials at half of the sites we visited stated that staffing levels limited their ability to carry out oversight duties, such as work order data analysis and housing inspections.
Reviews of compliance with each project’s business agreements. These reviews are a joint effort between the local military housing office, the private partners, military department installation commands, and other echelons of command. These reviews can include neighborhood tours to view project amenities such as community centers, playgrounds, and pools, all of which are owned, maintained, and operated by the private partner companies, as well as exteriors of housing units. However, our preliminary work showed these reviews have been limited in the scope of their assessment of the physical condition of the housing units, as interior walk-throughs may have been limited to just a few housing units at each installation.

According to military department officials, each department is currently taking steps to revise guidance and standardize daily oversight activities in an effort to provide consistent oversight across projects and installations, and to increase the focus of oversight on the physical condition of housing. The military departments are taking additional steps, such as increasing staffing levels, improving training for military housing office officials, and ensuring that housing officials have independent access to data. However, each military department is working to implement service-specific initiatives with only limited guidance from OSD on the level of oversight expected of the services as it relates to the condition of the housing. While existing OSD guidance provides objectives to the military departments for oversight of the condition of DOD-owned housing, guidance for privatized housing is focused on the implementation of projects, construction of new housing units, and financial management. The guidance does not include objectives for monitoring the condition of privatized housing projects, such as objectives focused on both ensuring the operation and maintenance of privatized housing to standards that provide safe living conditions for servicemembers and providing authorities to installation commanders to oversee those standards. We will continue to assess any implications of the lack of OSD guidance as part of our ongoing review.

DOD Uses Several Metrics to Monitor Private Partner Performance, but the Metrics May Not Provide Meaningful Information on the Condition of Privatized Housing

The military departments each use a range of project-specific performance metrics to monitor private partner performance, but our ongoing work showed that the metrics designed to focus on resident satisfaction and on the quality of maintenance conducted on housing units may not provide meaningful information or reflect the actual condition of the housing units. Most but not all of the private partners are eligible to receive performance incentive fees based on generally meeting the performance metrics established in each individual project’s business agreement. Private partner performance is measured through a variety of metrics, such as resident satisfaction, maintenance management, project safety, and financial management. To determine how well the private partners are performing under the metrics, military housing office officials told us that they rely on a range of specific indicators established in the project business agreements. However, the indicators themselves may not provide meaningful information on the private partner’s performance in maintaining quality housing units. For example, our preliminary work identified the following:
Maintenance management. One indicator regularly included in project business agreements measures how often the property manager’s response time to work orders meets required time frames established in the project’s business agreements. While this indicator measures the timeliness of the private partner’s response, it does not measure or take into account the quality of the work that was conducted or whether the resident’s issue was fully addressed. Some projects include indicators that aim to more directly measure quality, such as the number of work orders placed during the first 5 business days of residency, which may indicate the extent to which change of occupancy maintenance was completed.

Resident satisfaction. One example of an indicator of resident satisfaction is whether a project has met the target occupancy rates established in the business agreements. An OSD official we spoke with and private partner officials told us they use occupancy as an indicator of satisfaction based on the assumption that residents would move if they were dissatisfied with their housing unit. However, based on our focus groups, this may not be a reliable assumption. Although most residents are not required to live in military housing, residents in each of our 15 focus groups indicated a variety of reasons for choosing to live in privatized housing, many of which did not have to do with their satisfaction with the quality or condition of their homes. For example, residents cited factors influencing their decision to live in privatized housing, such as living in close proximity to military medical or educational services for children or other family members who receive benefits through the military’s Exceptional Family Member Program, access to quality schools, and a lack of safe and affordable housing in the surrounding community.

OSD and military department officials we spoke with recognized that the current metrics do not consistently focus on or prioritize the private partners’ performance with maintaining housing units and ensuring resident satisfaction. In October 2019, OSD issued new guidance standardizing the performance incentive fee framework across the military departments. According to OSD and the private partners with whom we spoke, this guidance was developed through a joint effort with the military departments and the private partners; it provides a framework where the metrics for resident satisfaction and maintenance management will account for a majority of the fee, with project safety and financial performance weighted less heavily. However, according to officials from OSD and officials we spoke with from each of the military departments, the specific indicators used to drive the metrics will need to be negotiated with each of the private partners for each project. Performance indicators designed to more directly measure the quality of maintenance conducted on housing units and resident satisfaction will provide military departments more transparency into private partner performance with regard to these two important metrics—metrics that are often directly tied to the performance incentive fees provided to the private partners.
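As a concrete illustration of why a timeliness indicator by itself says little about quality, the minimal sketch below computes an on-time response rate over hypothetical work order records. The field names and the 24-hour response window are assumptions made for the example, not terms from any actual project business agreement.

```python
# Hypothetical work orders: hours to first response vs. the required
# response window. All values are invented for illustration.
work_orders = [
    {"order_id": 1, "response_hours": 2,  "required_hours": 24},
    {"order_id": 2, "response_hours": 30, "required_hours": 24},
    {"order_id": 3, "response_hours": 20, "required_hours": 24},
]

# Count orders whose first response fell within the required window.
on_time = sum(wo["response_hours"] <= wo["required_hours"] for wo in work_orders)
timeliness = on_time / len(work_orders)
print(f"On-time response rate: {timeliness:.0%}")  # 67% for this toy data

# Note: a work order counts as "on time" here even if the repair failed
# or the problem later recurred, which is why timeliness alone is a weak
# proxy for maintenance quality.
```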
DOD and Private Partners Collect Maintenance Data on Privatized Housing, but These Data Are Not Captured Reliably or Consistently for Use in the Ongoing Monitoring of Housing Units

The housing projects’ business agreements typically include a requirement for the private partner to maintain a records management system to record, among other things, maintenance work requested and conducted on each housing unit. According to private partner officials, each company uses commercial property management software platforms that are used for activities such as initiating maintenance work orders and dispatching maintenance technicians. Some private partner officials also stated that data from the work order tracking systems were intended to prioritize and triage maintenance work, not to monitor the overall condition of privatized housing units.

While data from these work order tracking systems may be useful for point-in-time assessments of work order volume at a given installation, military department officials told us that efforts are underway to monitor work order data to increase the military departments’ oversight and the accountability of the private partners for providing quality housing. However, in our ongoing work we observed that these data are not captured reliably or consistently for use in the ongoing monitoring of the condition of privatized housing units. We received and reviewed data from each of the 14 private partners’ work order tracking systems covering each of the 79 privatized family housing projects. Based on our preliminary analysis of the initial data provided by the private partners, we noted the following:

Data anomalies. We identified anomalies in work order data from each of the 14 partners. For example, we identified instances of, among other things, duplicate work orders, work orders with completion dates prior to the dates that a resident had submitted the work order, and work orders still listed as in progress for more than 18 months.

Inconsistent use of terminology. Based on our preliminary review of the data provided by the private partners and discussions with private partner officials, we noted cases where work orders were inconsistently entered into the work order tracking systems with respect to two primary factors—(1) how the request is described by the resident or interpreted by the official entering the data, which can differ for each work order, and (2) the existing range of pre-established service category options in the private partner’s work order tracking system, which differ among the partners.

Differing practices for opening and closing work orders. At some installations we visited, private partners noted changes in practices for opening and closing work orders, limiting the usefulness of the data in monitoring the status of work orders over time and thus the condition of privatized housing.

According to military department officials, efforts to review data from the private partners’ work order tracking systems have increased, and these reviews have surfaced similar limitations. However, there are no standard practices currently in place for assessing the accuracy and reliability of the work order data or for setting standard terminology and practices for opening and closing work orders.
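Anomalies of the kinds described above lend themselves to simple automated screening. The following sketch is hypothetical: the column names, sample records, and 18-month cutoff mirror the anomalies noted in our analysis but do not reflect any partner’s actual schema or data.

```python
import pandas as pd

# Hypothetical work order extract with the three anomaly types described
# above seeded into the sample records.
orders = pd.DataFrame({
    "order_id":  [101, 101, 102, 103],
    "submitted": pd.to_datetime(["2019-01-05", "2019-01-05", "2019-02-10", "2017-03-01"]),
    "completed": pd.to_datetime(["2019-01-07", "2019-01-07", "2019-02-01", None]),
    "status":    ["closed", "closed", "closed", "in progress"],
})
as_of = pd.Timestamp("2019-06-01")

# Duplicate work orders (same order number appearing more than once).
duplicates = orders[orders.duplicated(subset="order_id", keep=False)]

# Completion dates earlier than the date the resident submitted the request.
completed_early = orders[orders["completed"] < orders["submitted"]]

# Orders still listed as in progress for more than roughly 18 months.
stale_open = orders[(orders["status"] == "in progress")
                    & (as_of - orders["submitted"] > pd.Timedelta(days=548))]

print(len(duplicates), len(completed_early), len(stale_open))  # 2 1 1
```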
DOD Provides Reports to Congress on Resident Satisfaction with Privatized Housing, but Data in These Reports Are Unreliable and May Be Misleading

DOD is statutorily required to provide reports to Congress that include, among other things, information about military housing privatization projects’ financial health and performance and backlog, if any, of maintenance and repairs. These reports have included information on resident satisfaction with privatized housing based on results of the annual military department satisfaction surveys. Based on our preliminary work, we have determined that information on resident satisfaction in these reports to Congress on privatized housing has been unreliable and may be misleading due to variances in the data the military departments collect and provide to OSD and in OSD’s calculation and presentation of these data. In May 2019, OSD issued its report for fiscal year 2017, which stated that overall resident satisfaction for calendar year 2017 was 87 percent.

For OSD’s fiscal year 2017 report, the military departments provided data on resident satisfaction based on information from the annual resident satisfaction surveys. Specifically, OSD’s instructions to the military departments required the military departments to report satisfaction based on resident responses to the question that asks: “Would you recommend privatized housing,” with results indicating how many tenants responded “yes,” “no,” or “don’t know.” However, the military departments’ approaches for collecting data in their annual resident satisfaction surveys vary, which limits their ability to assess whether residents would recommend privatized housing. Instead of asking whether residents would recommend privatized housing, the military departments’ annual resident satisfaction survey asks residents the following: “How much do you agree or disagree with the following statement, ‘I would recommend this community to others.’” A resident’s satisfaction with his or her community and inclination to recommend it to others may not be reflective of satisfaction with either the privatized housing unit or privatized housing in general. Residents are then provided the following response categories on a scale of 5 to 0: (5) strongly agree, (4) agree, (3) neither agree nor disagree, (2) disagree, (1) strongly disagree, and (0) not applicable, no opinion, don’t know, or no answer.

Through our analysis, we have identified variances in the methods the military departments use to translate the residents’ responses into the “yes,” “no,” or “don’t know” categories. The variances in how the military departments calculate “yes,” “no,” or “don’t know” result in inconsistencies in how resident satisfaction is ultimately reported to Congress. Specifically:

For the fiscal years 2015 through 2017 reports, Navy officials told us that they counted responses reported in categories 5 and 4 as “yes,” responses in categories 2 and 1 as “no,” and responses in categories 0 and 3 as “don’t know.”

For the same time period, Air Force officials told us that they counted responses in categories 5, 4, and 3 as “yes,” responses in categories 2 and 1 as “no,” and responses in category 0 as “don’t know.”

The Army calculated responses differently for the fiscal years 2015, 2016, and 2017 reports.
Specifically:

For the fiscal year 2015 report, the Army counted responses in categories 5, 4, and 3 as “yes,” responses in categories 2 and 1 as “no,” and responses in category 0 as “don’t know.”

For the fiscal year 2016 report, the Army counted responses in categories 5 and 4 as “yes,” responses in categories 2, 1, and 0 as “no,” and responses in category 3 as “don’t know.”

For the fiscal year 2017 report, the Army counted responses in categories 5 and 4 as “yes,” responses in categories 2 and 1 as “no,” and responses in categories 0 and 3 as “don’t know.”

In our ongoing work, we have also identified instances of errors and inaccuracies in how OSD calculates these data and reports on resident satisfaction to Congress. Specifically, we found missing data points and incorrect formulas, among other errors, in OSD’s calculation of the data submitted by the military departments for OSD’s fiscal year 2017 report to Congress. For example:

The formula used by OSD to calculate overall resident satisfaction for the fiscal year 2017 report did not include data for several projects, including for four Army projects that, as of September 30, 2017, accounted for over 18 percent of the Army’s total housing inventory.

Additionally, we identified that OSD did not include resident satisfaction data for a Navy project in its fiscal year 2017 report to Congress, even though when we reviewed the Navy’s submission to OSD, we found that the Navy had included data for that project.

For one Air Force project, OSD reported identical resident satisfaction data for the fiscal year 2015, 2016, and 2017 reports, despite the fact that Air Force officials had noted in their submissions to OSD that the resident satisfaction data were from the annual resident satisfaction survey conducted in December 2013.

We also found that presentation of data in OSD’s report to Congress may be misleading because OSD did not explain the methodology it used to calculate the overall resident satisfaction percentage or include caveats to explain limitations to the data presented. Specifically, OSD did not include information on overall response rates to the annual satisfaction survey for each military department, nor did it include response rates by project. Low response rates can create the potential for bias in survey results. For example, in the report for fiscal year 2017, OSD reported that 25 percent of residents living in renovated housing units for one privatized housing project were satisfied with their housing, but we found that only four residents had provided responses to this question. Thus, only one resident reported being satisfied. In addition, we found that OSD did not provide an explanation in the report for why five projects were listed as “not applicable.” According to OSD officials, this error was a quality control issue that they plan to address.

According to OSD officials, OSD and the military departments are reviewing the resident satisfaction survey questions and will be identifying and implementing measures to ensure an accurate and reliable process to compile, calculate, report, and compare MHPI resident satisfaction by military department and across DOD.
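To illustrate how the translation variances described above can shift the satisfaction rates ultimately reported to Congress, the sketch below applies three of the mapping schemes to the same hypothetical set of survey responses. The response data are invented; only the mappings follow the descriptions above.

```python
from collections import Counter

# Ten hypothetical responses on the survey's 5-to-0 scale.
responses = [5, 4, 4, 3, 3, 2, 1, 0, 0, 5]

# Three of the translation schemes described above.
mappings = {
    "Navy (FY15-FY17) / Army (FY17)": {"yes": {5, 4}, "no": {2, 1}, "don't know": {0, 3}},
    "Air Force (FY15-FY17)":          {"yes": {5, 4, 3}, "no": {2, 1}, "don't know": {0}},
    "Army (FY16)":                    {"yes": {5, 4}, "no": {2, 1, 0}, "don't know": {3}},
}

for department, mapping in mappings.items():
    # Assign each response the label whose category set contains it.
    counts = Counter(
        next(label for label, cats in mapping.items() if r in cats)
        for r in responses
    )
    print(f"{department}: {counts['yes'] / len(responses):.0%} yes, "
          f"{counts['no'] / len(responses):.0%} no")
```

With identical underlying responses, the Air Force mapping reports 60 percent “yes” while the Navy mapping reports 40 percent, solely because neutral (category 3) responses are counted differently.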
Military Housing Offices Have Not Effectively Communicated Their Role as a Resource for Servicemembers Experiencing Challenges with Privatized Housing

Military housing office officials located at each installation are available to provide resources to servicemembers experiencing challenges with their privatized housing, among other services, but these offices have not always effectively communicated this role to residents of privatized housing. The military housing office is to provide new residents with information on their local housing options, including referral services. According to some military housing office officials, the military housing office then works with the private partner to identify the eligibility and type of home the servicemember qualifies for, if the resident chooses to live in privatized housing. According to some residents we spoke with in one of our focus groups, beyond this initial interaction, military housing office officials generally do not interact with residents on a regular basis. Additionally, residents who participated in our focus groups noted they were sometimes confused about the military housing offices’ roles and responsibilities with regard to the maintenance of their home; there was a perception that the military housing office was not working independently of the partner in the residents’ best interest; or they did not know the military housing office existed.

The military department oversight agencies have acknowledged resident confusion and a lack of awareness regarding the role of the military housing offices as an issue. In May 2019, the Army Inspector General reported to the Secretary of the Army that at 82 percent of Army installations with privatized housing, residents did not know how to escalate issues to either the private partner or the Army housing office. Additionally, the Army Inspector General reported that installation command teams and staff cited multiple circumstances where military housing offices and tenant advocacy roles and responsibilities were unclear. Further, some military housing office officials with whom we spoke during our site visits acknowledged the gap in resident awareness regarding the existence and purpose of the military housing office. Some military housing officials also noted that some residents are unaware of the difference between the military housing office and the private partner office, due in part to their physical co-location and unclear building signage.

Each military department has issued information that establishes that its housing offices can assist in the resident dispute resolution process. Specifically, if servicemembers are experiencing a dispute with a private partner, military department guidance establishes varying roles for their respective military housing office officials. For example, Army policy states that each installation should have an official tasked with supporting servicemembers regarding resident issues that cannot be resolved by the private property manager. This individual is also responsible for resolving every resident complaint and the military housing office, if required, can request mediation by the garrison commander. Despite this guidance, according to DOD officials, the military departments had generally decreased their staffing and oversight of daily privatized housing operations since the enactment of MHPI.
For example, Army officials we spoke with in January 2019 told us that they typically filled 80 percent of available military housing office positions across their installations. Additionally, officials stated that housing offices were generally staffed with two or three officials responsible for assisting servicemembers with housing needs both on the installation and in the local community. Further, the officials told us that the team at one Army installation was decreased from about 15 to 3 positions. According to OSD officials, while housing offices should generally not require the number of personnel that were necessary prior to privatization, reductions following sequestration cut housing staff below the level necessary to fully perform required privatized housing oversight as it was originally envisioned at the outset of the program. OSD has recognized that the military departments’ communication with residents about their role as a resource has been limited. In February 2019, the Assistant Secretary of Defense for Sustainment testified before Congress that a way forward in addressing resident concerns would require focus in three key areas: communication, engagement, and responsiveness. Some military housing office officials told us they have taken steps to increase resident awareness, such as increasing the advertising of the military housing office’s role and contact information, conducting town hall meetings, and rebranding their military housing offices to differentiate them from the private partners. For example, a Marine Corps housing office official stated that the housing office established a document, which is distributed to residents by the private partner, informing residents of housing office contact information and the service’s 3-step dispute resolution process, but efforts have not been standardized across all projects.

DOD and Private Partners Are Implementing Initiatives to Improve Privatized Housing, but May Face Challenges

OSD, the military departments, and the private partners have identified and begun collaborating on a series of initiatives aimed at improving residents’ experiences with privatized housing, but our preliminary work showed that these efforts face challenges. According to an OSD official, a series of initiatives has been identified, and some are currently in various phases of development and implementation. Tri-service working groups, each chaired by a designated military department and comprising officials and legal counsel from each military department as well as private partner representatives, are leading efforts to develop and implement the initiatives. In particular, DOD and the private partners are collaborating on the following key initiatives:

Development of a Resident Bill of Rights. The Resident Bill of Rights is to provide clarity to residents on their rights and responsibilities while living in privatized military housing.

Development of a common tenant lease. The common lease framework will be binding in all 50 states, but will also include addendums to capture state and local laws, as required. The common lease would provide residents of privatized housing with similar terms in their leases, regardless of where they are living and which private partner owns their housing unit.

Establishment of a resident advocate position. The resident advocate position, according to an OSD official, will be available to provide independent advice, education, and support to residents.
However, an OSD official noted that the military departments have not yet determined whether this individual would be active duty or civilian and where the position would fall organizationally—specifically, whether it would be part of the military housing office.

Development of a standardized adjudication process. The military departments and private partners are developing a common dispute resolution process that would apply to all projects. According to OSD, this process would provide residents the right to have housing issues heard and resolved by a neutral third party.

DOD and Congress are exploring additional initiatives and legislative proposals. However, both DOD and private partner officials have noted several challenges that could impact their ability to implement some of these initiatives and legislative proposals. Key challenges include the following:

The need to collaborate with and obtain input and agreement from the large number of stakeholders involved in privatized housing. Many of the initiatives aimed at improving privatized housing require not only agreement between DOD and the private housing partners, but may also require discussion with and approval by the project bond holders. This requirement could limit the military departments’ legal authority to unilaterally make changes to existing business agreements. The private partners noted that the bond holders may be reluctant to agree to changes to the business agreements that could result in higher project costs.

Limited military department resources. The military departments had reduced their involvement in daily privatized military housing operations as part of the overall privatization effort, including reducing staffing levels at the installations. Each of the military departments plans to increase military housing office staffing at each installation to allow for enhanced oversight.

The potential for negative financial impacts to the projects that may outweigh the intended benefits of the initiatives. Representatives from many of the private partners we met with expressed concern that some proposed initiatives may result in a financial burden for their projects, such as legal fees associated with the development of a common lease and the various addendums that would be required; unanticipated costs of paying for outside third-party inspections; or the potential impact to project revenue that would result from residents withholding rent. Some of the private partners noted that the financial impact of unfunded requirements on projects that are already experiencing financial distress could result in even fewer funds available to reinvest in the physical condition of the housing units.

In summary, while the privatization of military housing has resulted in private partners assuming primary responsibility for military housing, DOD maintains responsibility for overseeing privatized housing and ensuring that eligible personnel and their families have access to affordable, quality housing facilities and services. While DOD and the private partners have taken steps to address concerns raised about their ability to adequately maintain and oversee the condition of these housing units and provide quality housing for servicemembers, the extent to which these efforts will be sustained and result in improvements remains unclear.
We are continuing our broader review of DOD’s oversight of privatized housing, including the issues addressed in this statement, and will make recommendations as appropriate in our final report, which we anticipate issuing in early 2020. Chairman Inhofe, Ranking Member Reed, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff members have any questions about this testimony, please contact Elizabeth A. Field, Director, Defense Capabilities and Management, at (202) 512-2775 or FieldE1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kristy Williams (Assistant Director), Tida Reveley (Analyst in Charge), Austin Barvin, Ronnie Bergman, Vincent Buquicchio, William Carpluk, Juliee Conde-Medina, Mae Jones, Jordan Mettica, Kelly Rubin, Monica Savoy, and John Van Schaik. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

In 1996, Congress enacted the Military Housing Privatization Initiative in response to DOD concerns about inadequate and poor quality housing for servicemembers. Today, private partners are responsible for the ownership, construction, renovation, maintenance, and repair of about 99 percent of housing units on military bases in the continental United States. DOD's policy requires that the department ensure eligible personnel and their families have access to affordable, quality housing facilities. The Office of the Secretary of Defense is responsible for providing guidance and general procedures related to military housing privatization. The military departments are responsible for executing and managing privatized housing projects. Drawing from ongoing work, GAO discusses (1) DOD's oversight of privatized military housing for servicemembers and their families, (2) efforts of the military departments to communicate their roles and responsibilities to servicemembers and their families, and (3) DOD and private partner development and implementation of initiatives to improve privatized housing. GAO reviewed relevant policies, guidance, and legal documents; visited 10 installations; conducted 15 focus groups; analyzed maintenance work order data; and interviewed relevant DOD and private partner officials. GAO will continue its ongoing work and make recommendations as appropriate in the final report.

What GAO Found

Each military department conducts a range of oversight activities—some more extensive than others—for its privatized housing projects, but these efforts have been limited in key areas. Specifically, based on GAO's ongoing work:

The Department of Defense (DOD) conducts oversight of the physical condition of housing, but some efforts have been limited in scope. Military departments have guidance for conducting oversight of the condition of privatized housing. This oversight generally consists of reviewing a sample of work order requests, visually inspecting housing during changes of occupancy, and conducting other point-in-time assessments. However, GAO found that these efforts are limited in scope. For example, interior walk-throughs may have been limited to just a few homes at each installation.

DOD uses performance metrics to assess private partners, but metrics may not provide meaningful information on the condition of housing. The Office of the Secretary of Defense (OSD) has recently issued guidance to ensure consistency in the framework used to measure project performance. However, the specific indicators used to determine whether the metrics are being met may not fully reflect private partner performance. For example, a common measure is how quickly the private partner responded to a work order, not whether the issue was actually addressed.

DOD and private partners collect maintenance data on homes, but these data are not captured reliably or consistently. DOD is expanding its use of work order data to monitor and track the condition of privatized housing. However, based on GAO's analysis of data provided by all 14 private partners, these data cannot reliably be used for ongoing monitoring of privatized housing because of data anomalies and inconsistent business practices in how these data are collected.

DOD provides reports to Congress on the status of privatized housing, but some data in these reports are unreliable and may be misleading.
DOD provides periodic reports to Congress on the status of privatized housing, but reported results on resident satisfaction are unreliable due to variances in the data military departments provide to OSD and in how OSD has calculated and reported these data.

Military housing offices located at each installation are available to provide resources to servicemembers experiencing challenges with their privatized housing, but GAO's ongoing work showed these offices have not always effectively communicated this role to residents. For example, residents in GAO's focus groups noted confusion over the roles and responsibilities of these offices, and military housing officials have found that residents could not readily differentiate between military and private housing officials.

DOD, working with the private partners, has made progress in developing and implementing a series of initiatives. However, both DOD and private partner officials have noted several challenges that could affect implementation, including limitations to DOD's legal authority to unilaterally make changes to the terms of the projects and limited resources to implement increased oversight.
Background

In fiscal year 2019, states are required to spend at least 8 percent of CCDF funding for “quality activities”—activities that are designed to improve the quality of child care services the state provides. These activities may include supporting the professional development of the child care workforce and improving the supply and quality of child care programs and services for infants and toddlers. Table 1 describes examples of quality activities states may choose to fund with their required quality set-aside, as well as requirements for states to carry out certain activities from the CCDBG Act of 2014, where applicable. After setting aside funds for quality activities and administrative activities, states must spend at least 70 percent of the remaining discretionary funds on subsidies for eligible families. They provide subsidies to eligible families through the CCDF program in the form of certificates or vouchers to use for child care in homes, child care centers, and classrooms, or through grants or contracts to child care providers. Children receiving CCDF subsidies may receive care alongside nonsubsidized children—that is, children who may be eligible for child care subsidies but who do not receive them, or who may be ineligible for child care subsidies.

A Majority of States Reported Relying on CCDF Funds to Support Key Quality Child Care Activities

A majority of states used fiscal year 2017 CCDF funds to entirely or mostly fund 7 of 10 major state child care activities, according to our survey of CCDF administrators in the 50 states and D.C. (see fig. 1). The 10 child care activities included in our survey, components of which are also required by CCDF, are key means through which states may choose to improve the quality of their child care services (i.e., quality activities). They also represent diverse aspects of a state’s child care system. Among states that relied on CCDF funding to support the quality activities, we found that, on average, states funded 6 of the 10 activities entirely or mostly with CCDF. Nearly one-third of states (16) funded at least 8 of the 10 to that degree. States reported that they relied on CCDF funding most frequently for the following activities: child care resource and referral systems, consumer education, and health and safety standards establishment and training.

Child care resource and referral systems. More than three-quarters of states (40) reported in our survey that all (22) or most (18) of the funding they used for their child care resource and referral systems in fiscal year 2017 came from CCDF. Statewide systems of child care resource and referral agencies can serve an important role in supporting state quality improvement efforts, though not all states have them, according to HHS. For example, child care resource and referral agencies may provide training and technical assistance to child care providers and share consumer education with families, among other things. States may use CCDF funds to establish or support a system of local or regional agencies that is coordinated by a lead child care resource and referral organization. Officials in the states we interviewed described various ways in which their child care resource and referral agencies support child care providers and parents, such as:

Delivering professional development, including training and technical assistance, to child care providers, regardless of whether or not the providers accept subsidized children, according to several CCDF administrators interviewed.
Supporting parents by determining eligibility for subsidies, providing referrals for care, and offering information on child care quality, according to state officials. For example, one state houses eligibility specialists in regional child care resource and referral agencies, through which families apply for subsidies, while another state uses these agencies to refer families to child care providers and support families with specialists, including mental health consultants and infant specialists, as needed.

Consumer education. About 70 percent of states (36) reported that all (12) or most (24) of the funding they used for consumer education activities in fiscal year 2017 came from CCDF. Consumer education activities are intended to help parents seeking child care make informed decisions and improve access to information that supports child development. States must certify that they have policies to make public the results of child care providers’ monitoring and inspection reports, as well as certify that they will collect and disseminate information on child care services available through CCDF, research and best practices concerning child development, and state policies regarding the social-emotional and behavioral health of children, among other requirements. Moreover, many of the 15 states we interviewed used child care resource and referral agencies to do this. Examples from our state interviews illustrate that:

One state promotes awareness of its quality rating and improvement system for child care providers through materials available from the state’s child care resource and referral agencies, according to its CCDF administrator.

Another state’s child care resource and referral system has a public awareness campaign aimed at the parents of infants, toddlers, and preschoolers to help families understand and identify quality child care, according to the head of the state’s child care resource and referral network.

Parents in a third state can obtain information on child development through resources available from lending libraries, according to the state’s CCDF administrator.

Health and safety standards. About 70 percent of states (36) also reported entirely funding (15) or mostly funding (21) the development or deployment of training for health and safety standards with CCDF in fiscal year 2017. According to the CCDBG Act, states are required to certify that they have health and safety standards in specific topic areas, such as the use of safe sleeping practices and pediatric first aid, and certify that all CCDF providers will receive minimum health and safety training in these areas. Most of the 15 states that we interviewed went beyond CCDBG Act requirements and elected to apply their health and safety training requirements to all licensed child care providers in the state, and in some cases, to child care providers that are exempt from licensing. In doing so, officials described how their requirements served to elevate the health and safety of children in care regardless of whether they receive CCDF subsidies. Several state officials specifically credited the CCDBG Act as the impetus for their states’ changes. State officials we interviewed also described taking various approaches, including offering financial incentives, to help child care providers meet training requirements.
Examples of these approaches and their impact include the following:

One state official said that while the state child care agency had wanted to increase health and safety requirements for child care providers for years, the reauthorization of the CCDBG Act propelled the state forward in its efforts to increase child care quality and require the same health and safety training of all licensed and license-exempt providers.

One state offers health and safety grants to child care providers to meet these requirements, while another is considering increasing child care provider payment rates to a level that will allow providers to meet the updated health and safety requirements, according to state officials.

CCDF administrators in two states told us they are developing online training modules for the required health and safety training, so child care providers can access the modules more easily and for free, or have mailed training DVDs to every child care program in the state.

CCDF administrators in almost all of the 15 states we interviewed told us their states set aside more than the minimum amount that CCDF required to support quality in 2017. They described how their states use quality set-aside funds to support child care licensing programs, accreditation, and quality rating systems for child care providers, among other things. Some state officials we interviewed also described specific supports for infants and toddlers, such as partnerships to provide training for child care providers around the care of this age group, and increases in provider payment rates for infant and toddler care, which is costly to provide, from the infant and toddler-specific set-aside. According to one state CCDF administrator, the ability to divert funds to activities that benefit infants and toddlers is critical, as this is the neediest age—a time when children and parents need the most support.

States Report That Consumer Education, Licensing, and Professional Development, Among Other Quality Activities, Also Affect Children Not Receiving Subsidies

A range of CCDF quality activities, including consumer education, child care licensing, and professional development of the child care workforce, affect the care of children not receiving subsidies (nonsubsidized children), according to our 51-state survey of CCDF administrators (see fig. 2). On average, states reported that 9 of the 10 activities included in our survey affect nonsubsidized children receiving child care in the state, with more than 40 percent of states (22) reporting that all of the activities affect nonsubsidized children, according to our analysis of the survey data. As previously noted, the activities serve as key supports for building quality in state child care systems. Of these activities, CCDF administrators unanimously cited three in our survey as affecting nonsubsidized children: consumer education; licensing, monitoring, or background checks for child care; and professional development. Below are some specific examples of the ways nonsubsidized children are affected by these activities, as discussed with CCDF administrators in our 15 state interviews.

Consumer education. During our interviews, state officials discussed ways in which their CCDF programs share important information on child care quality and child development with all families, including those not receiving subsidies. As previously noted, many of the 15 states we interviewed rely on their child care resource and referral agencies to provide such information to the public.
HHS requires states to have a website that includes, among other things, a searchable list of licensed child care providers and information about each provider’s quality rating, if available. States we interviewed use these and other consumer education tools, such as billboards, public service announcements, and commercials, in an effort to reach a wide-ranging audience.

Licensing, monitoring, or background checks. According to the CCDBG Act, states must certify they have policies to annually conduct unannounced inspections of all licensed CCDF providers for compliance with all child care licensing standards, including health, safety, and fire standards, with at least one pre-licensure inspection. But most of the 15 states we interviewed have elected to apply certain CCDBG Act requirements for CCDF providers, including those pertaining to monitoring and inspections, to all licensed providers in the state, according to their states’ CCDF administrators. Officials in several states suggested that updating their requirements for all licensed providers to match the CCDF requirements establishes a high-quality foundation for child care that reflects the importance of a healthy and safe environment for all children receiving care, regardless of whether children receive a subsidy. Examples from some states that we interviewed are:

In one state, where subsidized children make up about 20 percent of children in licensed care, the state’s CCDF administrator estimated that significant numbers of nonsubsidized children benefit from higher quality care, including from more extensive monitoring of all licensed providers.

Another state applied the requirements for CCDF providers to all license-exempt child care providers (including those not serving subsidized children), which helps ensure that all children in care benefit from the updated monitoring, health and safety, and background check requirements.

An official from another state that does not apply CCDBG Act requirements more broadly said that, because nonsubsidized children share classrooms with subsidized children, requirements that apply to subsidized providers, in turn, also benefit the nonsubsidized children in their classrooms. In particular, he said that the requirement that all CCDF providers serving subsidized children be inspected has opened up child care centers that previously, as license-exempt providers, were not inspected, and has resulted in improvements in some centers.

Professional development. CCDF administrators we interviewed recognize professional development as key to high-quality child care for all children, including nonsubsidized children. The CCDBG Act requires states to describe the training and professional development requirements designed to enable CCDF providers to promote the social, emotional, physical, and cognitive development of children. According to HHS, states must also require ongoing training for CCDF providers that is accessible, appropriate to the age and setting of the children served, and aligned to a progression of professional development that includes a minimum number of annual hours of training for the child care workforce. As with certain other CCDBG Act requirements, a majority of states we interviewed have established the same professional development requirements for all licensed child care providers, whether or not they care for subsidized children, according to state officials.
One CCDF administrator said that the updated CCDF requirements for subsidized providers were an impetus for her state to raise the training requirements for providers that do not care for children receiving subsidies and that are unlicensed. She said the updated, more comprehensive training requirements help ensure that all children are in care with child care providers that parents can trust. CCDF administrators also highlighted characteristics of their states’ professional development activities that serve to positively affect nonsubsidized children as well—namely, the availability, accessibility, and affordability of professional development opportunities for child care providers. For example, state officials told us of making these opportunities available to all child care providers through online training courses, training and onsite consultation from child care resource and referral agencies, technical assistance and coaching, and resource lending libraries. Nearly all states we interviewed use their states’ quality set-aside funds to support such training, technical assistance, and/or coaching opportunities. Where training may not be free, CCDF administrators told us of financial incentives that assist child care providers in their efforts to increase quality through professional development. For example, several states use their quality set-aside funds to offer scholarship grant programs available to child care providers to help increase their qualifications, whether or not they care for subsidized children. One state offers incentive payments based on a provider’s level of attainment in the state’s career ladder, for which all providers are eligible to apply, according to its CCDF administrator; these payments can support provider development that benefits the nonsubsidized children in their care. In addition to spending on quality activities, state officials reported in our interviews that nonsubsidized children also indirectly benefit from state spending on subsidies. According to officials in many of the 15 states that we interviewed, states’ spending on subsidies helps increase the economic stability of CCDF providers, which, in turn, also benefits nonsubsidized children in their care. Officials said that subsidizing providers to help pay for the cost of care for eligible families can provide a consistent source of revenue for CCDF providers that allows them to continue stable operations, invest in professional development, and increase teacher pay, for example. Such spending, in turn, can lead to improved child care quality as well as improved access (i.e., by helping providers stay in business) for the nonsubsidized children in their care, according to state officials. However, officials in many states we interviewed also noted that CCDF subsidies or related policies may negatively affect nonsubsidized children and families. For example, several said that state increases in payment rates for CCDF providers may lead providers to similarly increase the rates they charge for the nonsubsidized children in their care, which, some noted, could drive families for whom such care is no longer affordable to alternative, unregulated providers that may have lower quality standards. Rate increases can be particularly difficult for middle-income families who do not qualify for CCDF and are struggling to meet the current market rate of child care, according to one state’s CCDF administrator.
CCDF administrators from several other states also noted a drop in CCDF child care providers in recent years due to various factors, including low payment rates, extensive CCDF requirements for inspections and background checks, and an insufficient number of children to sustain operating costs. In many of one state’s neediest areas, local elementary schools often provide the highest quality care, according to the state’s CCDF administrator; however, with the addition of background checks that some school districts have found administratively burdensome and duplicative, the official said that many districts have dropped out of the CCDF program.

States Most Often Report Plans to Spend the New CCDF Funds on Quality Activities That Affect All Children in Care, Despite Funding Uncertainty

Among quality activities, states most often reported plans to spend the new discretionary CCDF funding from the Consolidated Appropriations Act, 2018, on three—licensing, consumer education, and professional development—the same activities that all states reported affect nonsubsidized children, according to our survey of CCDF administrators in the 50 states and D.C. (see fig. 3).

Licensing, monitoring, or background checks. More than two-thirds of states we surveyed (34) plan to spend the new CCDF funds on child care licensing or the related activities of monitoring and background checks. During our interviews, many state CCDF administrators provided examples of how they plan to use new funds on licensing-related activities, such as hiring or increasing pay of licensing staff or making administrative or system improvements to facilitate the interstate background checks required under the CCDBG Act. For example, one state plans to enhance its online background check portal to streamline interstate coordination, while another state plans to help providers pay the interstate background check fees by offsetting the increased cost for the next 1 or 2 years. A third state, which has been operating under an HHS waiver that allowed for delayed implementation of the interstate background check requirements, now plans to use new funds to conduct the required checks, according to the state’s CCDF administrator. Without the new funds, officials from two states said that they might have had to reduce funding for other child care activities, including subsidies, in order to allocate the additional resources needed to comply with licensing, monitoring, or background check requirements.

Consumer education. More than half of states we surveyed (30) said they plan to spend new funds on consumer education activities. Some state officials we interviewed described plans to enhance public state child care websites to make them more user-friendly or available in other languages, such as Spanish. For example, one state plans to improve online access to provider information by featuring a dashboard with a snapshot of each provider’s license history, including inspection violations. Officials from another state said they plan to use new funds to launch a public engagement campaign to provide timely and important information about child care and state-based child care services. In the absence of the new funding, officials from two states said they would likely need to reduce their efforts to better educate families statewide about important child development information and the states’ publicly available tools that can help parents identify high-quality child care.
Specifically, officials from one state said they would have to forgo plans to make their public child care website more sophisticated and consumer-friendly, and officials from another state said they would not be able to conduct their planned public education campaigns.

Professional development. More than half of states we surveyed (30) said they plan to fund professional development activities for child care providers. Officials we interviewed in several states told us about plans to use the new funds to implement or improve online professional development systems, such as by increasing online course offerings or creating training applications accessible by cellphone, which can improve accessibility for all child care providers. We also heard about plans in five states to use some new funds to provide specialized training, including training focused on infant and toddler-specific topics, caring for children exposed to trauma, and emergency planning and response. CCDF administrators from two other states described plans to fund more scholarships for child care workforce training and certification programs, including Child Development Associate credential programs. Lastly, officials in one state told us they plan to create a mentorship program whereby high-quality licensed providers mentor license-exempt providers, in order to help providers who are interested in becoming licensed improve their quality and professional development qualifications. Without the new funds, officials from one state said they would not have been able to continue to support as many professional development opportunities that benefit both subsidized and nonsubsidized children, such as conferences, networking events, and coaching. Another CCDF administrator expressed concern that in the absence of the new funds, her state may have struggled to implement a new workforce registry system that tracks child care providers’ education and credentials.

Most states reported plans to allocate the new funds to multiple state child care activities, according to our analysis of the survey data. Specifically, we found that more than two-thirds of states plan to fund at least three of the activities, and half of states plan to fund at least five activities. Moreover, according to our survey, about 40 percent of states (20) also plan to spend at least some of the new funds to increase the proportion of funding set aside for quality activities beyond the required minimum for the year—which, as described earlier, they can use to fund these activities. During our interviews, we heard about states’ plans to spend new funds on a variety of qualifying activities, including child care resource and referral systems, accreditation of child care providers, and development of high-quality program standards. In the absence of the new funds, one state CCDF administrator told us that the state would likely have had to eliminate some optional quality activities, such as financial support to help providers become accredited. She further explained that the state is more willing to cut back on quality activities when there is insufficient funding than to disenroll families from the CCDF program.

Aside from quality activities, states we surveyed also reported plans to spend new CCDF funds on subsidies. More than half of states (31) plan to spend at least some of the new funds on increasing payment rates for CCDF providers or lowering parental copayments.
For example, one state official we interviewed told us about plans to increase payment rates for infant and toddler care, with a goal of increasing access to child care for infants and toddlers across the state. In addition, about half of states (25) we surveyed reported plans to spend new funds to implement two requirements that allow families to continue receiving subsidies for a longer period of time—the 12-month eligibility period and the graduated phase-out of assistance. Lastly, nearly one-third of surveyed states (16) reported plans to use new funds to pay for subsidies for children on their wait lists to receive child care. CCDF administrators in all of the states we interviewed that use a wait list (5) stated that they might have had to expand their wait lists in the absence of the new funds. However, in our interviews (conducted in May and June 2018), several state CCDF administrators expressed uncertainty about their states’ plans for using the new CCDF funds. Officials from more than a third of the 15 states we interviewed (6) said their spending plans were still in flux. In some of these states, officials said they were still developing and reviewing their funding proposals as part of their state’s legislative and budgeting process and were awaiting legislative approval or spending authorization. For example, in one state, the CCDF administrator said she was awaiting information on how much money the state would receive before she planned to convene stakeholder groups to discuss potential funding proposals. In another state, the CCDF administrator said her office needed to wait for other local budget appropriation decisions before it could commit the new CCDF funds to specific priorities. Officials in more than half of the 15 states we interviewed also told us they faced challenges making spending decisions because they were unclear whether the new funds would be provided on an ongoing basis. For example, CCDF administrators in two states that plan to expand subsidies to children on their wait lists expressed concerns about having to disenroll children from the program if funding is discontinued. Officials from several states suggested that they are proceeding cautiously with spending decisions given that there is no guarantee that the increased funds will be provided in the future, while an official from another state told us they are operating under the assumption that the new funds will be provided on an ongoing basis and do not have a contingency plan in the event that the funds are not continued.

Agency Comments

We provided a draft of this report to HHS for review and comment. HHS provided technical comments only, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix II.
Appendix I: List of States Interviewed

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Janet Mascia (Assistant Director), Avani Locke (Analyst-in-Charge), and Elizabeth Hartjes made key contributions to this report. Also contributing to the report were Seto Bagdoyan, James Bennett, Randy De Leon, Kirsten Lauber, Sheila R. McCoy, Jonathon Oldmixon, Jessica Orr, James Rebbe, Almeta Spencer, and Amy Sweet.
Why GAO Did This Study

CCDF is the primary source of federal funding for child care subsidies. States administering CCDF are subject to requirements that improve the quality of child care for all children, nonsubsidized as well as subsidized. In March 2018, the Consolidated Appropriations Act, 2018, was enacted, which provided $5.2 billion in additional CCDF discretionary funding for fiscal year 2018, approximately twice the amount provided in fiscal year 2017. GAO was asked to review state use of CCDF funds and their potential impact on nonsubsidized children. GAO examined (1) the extent to which states use CCDF funds to support their child care systems, (2) the kinds of CCDF-related activities states engage in that affect children who are not receiving CCDF subsidies, and (3) how states plan to use the increase in CCDF funding from the Consolidated Appropriations Act, 2018. GAO collected information from state CCDF administrators through a survey of the 50 states and the District of Columbia (D.C.) and interviews with officials in 15 states, including D.C., selected to reflect diverse characteristics and locations. GAO also reviewed relevant federal laws, regulations, and guidance, and interviewed Department of Health and Human Services officials. GAO makes no recommendations in this report.

What GAO Found

A majority of states used funding from the Child Care and Development Fund (CCDF) in fiscal year 2017 to entirely or mostly support 7 of 10 major state child care activities GAO identified in its survey of 51 state CCDF programs. These activities, components of which are also required by CCDF, represent diverse aspects of state child care systems and are a key means through which states may choose to improve the quality of their child care. States reported that they relied primarily on CCDF funding for child care resource and referral systems, consumer education, and health and safety standards establishment and training more frequently than for other activities.

States reported in GAO's survey that a range of CCDF quality activities affect the care of children not receiving CCDF subsidies (nonsubsidized children), including three activities cited by all states—consumer education, child care licensing, and professional development of the child care workforce. CCDF administrators in most of the 15 states GAO interviewed said they have elected to apply certain requirements for caregivers subsidized under CCDF to all state-licensed child care providers. For example, child care providers may be subject to monitoring and professional development requirements, whether or not they care for children receiving subsidies. CCDF administrators also stated that, as a result, all children in the care of licensed providers in these states—including nonsubsidized children—benefit from the enhanced requirements.

States most often reported in GAO's survey that they plan to spend the new CCDF funds provided in the Consolidated Appropriations Act, 2018, on quality activities that benefit all children in child care, including licensing, consumer education, and professional development. For example, officials GAO interviewed in several states described plans to enhance public state child care websites to make them more user-friendly for all families or available in other languages, such as Spanish.
However, more than a third of the interviewed states said their spending plans were still in flux, and more than half said they faced challenges making spending decisions because it was unclear whether the new funds would be provided in the future.
Background

Roles and Responsibilities of SOCOM and the ASD-SO/LIC

SOCOM has a unique structure and responsibilities in that it has both combatant command responsibilities and military service-like functions for organizing, training, and equipping SOF. Under sections 164 and 167 of Title 10, United States Code, the SOCOM commander is responsible for training and ensuring the combat readiness of assigned forces and monitoring the preparedness to carry out assigned missions of SOF assigned to unified combatant commands. In addition, SOCOM is responsible for developing special operations strategy, doctrine, and tactics; the employment of forces of the command to carry out assigned missions; requirements validation; acquisition of special operations-peculiar equipment; and formulating and submitting requirements for intelligence support, among other things. In its combatant command function, the commander of SOCOM is responsible for and has the authority to conduct the following special operations activities: (1) direct action, (2) strategic reconnaissance, (3) unconventional warfare, (4) foreign internal defense, (5) civil affairs, (6) military information support operations, (7) counterterrorism, (8) humanitarian assistance, (9) theater search and rescue, and (10) other activities such as may be specified by the President or the Secretary of Defense. Congress initially established the position of the ASD-SO/LIC in the NDAA for Fiscal Year 1987. As previously discussed, in 2016 Congress enhanced the role of the ASD-SO/LIC in section 922, which is codified in section 138(b) of Title 10, United States Code. The ASD-SO/LIC’s current statutory responsibilities include overall supervision, including policy and resources, of the special operations activities listed above; exercising authority, direction, and control of all special operations-peculiar administrative matters relating to the organization, training, and equipping of SOF; and assisting the Secretary of Defense and USD (P) in the development and supervision of policy, program planning and execution, and allocation and use of resources for irregular warfare, combating terrorism, and special operations activities. DOD Directive 5111.10, Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict (SO/LIC), first issued in 1995 and most recently updated in 2011, also prescribes the roles and responsibilities of the ASD-SO/LIC. Among other things, the ASD-SO/LIC serves as the principal staff assistant to the USD (P) and the Secretary of Defense on special operations and low-intensity conflict matters and counterdrug policy, among others. DOD Directive 5111.10 also establishes responsibilities, functions, relationships, and authorities for the ASD-SO/LIC on issues such as the coordination and oversight of policy for humanitarian assistance, refugee affairs, and foreign disaster relief activities (e.g., emergency relief for Ebola). Prior to the enactment of section 922, OASD-SO/LIC coordinated regularly with SOCOM on administrative matters, such as reviewing SOCOM’s budget materials. Specifically, the administrative chain of command for SOF-related matters was formally changed by section 922 to give the ASD-SO/LIC more oversight over SOCOM through direct interaction with the Secretary of Defense. Section 922 provided the ASD-SO/LIC with the statutory authority to exercise authority, direction, and control of all special operations-peculiar administrative matters relating to organizing, training, and equipping SOF.
Section 922 did not alter SOCOM’s operational chain of command as a combatant command.

DOD’s Report Summarizing Its Progress in Implementing Section 922

Section 1074 of the NDAA for Fiscal Year 2018 directed DOD to submit a report on the progress the department had made in implementing the requirements identified in section 922. Section 1074 specified seven reporting elements, such as an accounting of personnel currently assigned, that DOD’s report should address. DOD submitted its report on March 12, 2018, wherein it identified a high-level summary of actions taken, as shown in table 1 below.

DOD Has Made Recommendations, Developed Actions, and Taken Steps to Address Requirements in Section 922

DOD Identified Recommendations and Developed Actions to Address Requirements in Section 922

In 2018, DOD identified 166 recommendations to address the reforms required by section 922 that are aimed at increasing the ASD-SO/LIC’s role in the management of SOF and special operations. To identify these recommendations and support the implementation of service secretary-like responsibilities under section 922, OASD-SO/LIC and SOCOM created a “tiger team” to review broad functional areas typically performed by the military service secretariats and determine the need for potential changes to the roles and responsibilities of OASD-SO/LIC and SOCOM related to addressing requirements in section 922. The tiger team included five working groups to review potential roles and responsibilities for budget, special access programs, personnel and readiness, program and requirements, and acquisition functions. Two officials, respectively representing OASD-SO/LIC and SOCOM, co-led each of these working groups. OASD-SO/LIC established design principles to help the working groups identify new roles and responsibilities for OASD-SO/LIC and SOCOM under section 922. These principles included the following three broad categories of authorities that OASD-SO/LIC could be expected to take on:

Monitor: This role requires that OASD-SO/LIC be informed, observe, and check the progress or quality of an activity throughout the lifetime of the activity. This includes, for example, monitoring SOCOM’s submission of its presidential budget justification material to Congress.

Review and coordinate: This role requires that OASD-SO/LIC review, analyze, and coordinate throughout the lifetime of an activity to ensure compliance with authoritative policy and with statutory and other regulatory issuances, and to ensure achievement of broad program goals. Coordination does not imply authority to compel agreement, however. An example of the review and coordinate role is that OASD-SO/LIC liaises with the military departments on military personnel issues.

Approve: This role requires OASD-SO/LIC’s concurrence to give explicit or official sanction, permission, or ratification of an activity. An example of approval authority is that the ASD-SO/LIC approves SOCOM’s Program Objective Memorandum (POM).

We found that the largest share of the 166 recommendations made by the working groups strengthened OASD-SO/LIC’s roles related to monitor and to review and coordinate, as shown in figure 1. Specifically, 80 of the 166 recommendations (48 percent) would strengthen OASD-SO/LIC’s role regarding monitor or review and coordinate. Twenty-two of the 166 recommendations (13 percent) would give OASD-SO/LIC approval authority—requiring OASD-SO/LIC’s concurrence to give explicit or official sanction, permission, or ratification of an activity.
Of these 22 recommendations, 16 involved either joint approval—requiring both OASD-SO/LIC and SOCOM to jointly approve the action—or partial approval—that is, OASD-SO/LIC would have approval authority over certain aspects of an action item. Sixty-four of the 166 recommendations (39 percent) did not recommend any change to OASD-SO/LIC’s role. In addition, the majority of the recommendations, about 156 of 166 (about 94 percent), would not change SOCOM’s roles. OASD-SO/LIC used the 166 recommendations to inform the development of 87 actions in OASD-SO/LIC’s monthly reports to Congress. We found that, with regard to the 87 actions identified in OASD-SO/LIC’s February 2019 monthly report, 49 percent of the action items (43 out of 87) focused on OASD-SO/LIC’s participation in meetings. For example, prior to the implementation of section 922, OASD-SO/LIC attended Joint Resources Management Board meetings. After implementing section 922, OASD-SO/LIC exercised its review and coordinate responsibility by attending Joint Resources Management Board meetings, thereby formalizing OASD-SO/LIC’s prior role. According to DOD officials, there is value in adding OASD-SO/LIC as a participant in key meetings and formalizing OASD-SO/LIC’s review and coordinate role. For example, officials explained that, by participating in meetings, OASD-SO/LIC can have more situational awareness about key topics and can better advocate for the SOF enterprise.

DOD Has Taken Several Actions to Address Section 922 Requirements

DOD, through OASD-SO/LIC, has taken various actions, including changes in roles and responsibilities, related to addressing requirements in section 922. According to OASD-SO/LIC officials, its actions reflect an incremental approach to strengthening OASD-SO/LIC’s roles and responsibilities. In February 2019, OASD-SO/LIC reported to Congress that it had completed 56 of its 87 actions. For example, one of the actions identified in the February 2019 monthly report was the need to enhance OASD-SO/LIC’s role in the development and approval of SOF-related program and budget matters. The report further identified a number of actions, including having OASD-SO/LIC approve SOCOM’s POM. According to the report, OASD-SO/LIC was briefed on and approved SOCOM’s POM for fiscal years 2020-2024. As another example, the report identified the need to enhance OASD-SO/LIC’s oversight of SOF-related military construction activities and contingency basing. This included a requirement that OASD-SO/LIC co-chair SOCOM’s Military Construction Summit, which according to officials deals with acquisition-related issues regarding military construction and is used to inform the POM. According to the February 2019 report, OASD-SO/LIC co-chaired the summit for fiscal year 2019, and its formal role as co-chair will be reflected in future updates to SOCOM guidance. The February report also explained that the Deputy Secretary of Defense approved a new Special Operations Policy and Oversight Council directive that identified the ASD-SO/LIC as the lead for that council. The Deputy Secretary of Defense also delegated to the ASD-SO/LIC the authority to approve waivers to hire civilian personnel during a civilian hiring freeze. Many of the actions taken thus far formalize pre-existing, informal relationships between OASD-SO/LIC and SOCOM.
According to OASD-SO/LIC officials, a formalization of a pre-existing role occurs when OASD-SO/LIC identifies a role that it performed informally before addressing requirements under section 922 and continues to maintain the role officially under its section 922 responsibilities. Based on the February 2019 report to Congress, we found that 26 of the 56 implemented action items (about 50 percent) formalize ongoing OASD-SO/LIC roles and responsibilities that were previously conducted informally. Officials stated that all of the actions relating to budget execution are formalizations of previously existing informal roles and responsibilities. For example, according to OASD-SO/LIC and SOCOM officials, OASD-SO/LIC had an informal role in reviewing SOCOM’s POM prior to section 922, such as participating in the review of the POM without formal approval authority. According to DOD officials familiar with the POM process, giving OASD-SO/LIC approval authority for SOCOM’s POM essentially formalized what had been done in the past, while allowing OASD-SO/LIC to perform a more thorough review. Similarly, officials stated that OASD-SO/LIC had an informal role in developing SOCOM’s budget justification books prior to the passage of section 922. Another action identified in DOD’s February 2019 monthly report is OASD-SO/LIC’s role in budget submission. Officials explained that, in an effort to enhance OASD-SO/LIC’s role in budget submission, OASD-SO/LIC has formalized this role. According to the officials, the benefit of this formalization is that OASD-SO/LIC has greater access to the process of producing justification books. There have been similar examples of formalization of pre-existing roles in other areas as well. For example, prior to section 922, SOCOM’s public affairs requirements were coordinated with USD (P)’s public affairs office. Rather than duplicate SOCOM’s existing public affairs role with an additional public affairs office for the ASD-SO/LIC, OASD-SO/LIC coordinates with the USD (P)’s public affairs office.

Lack of Clear Time Frames and Guidance Are Challenges to Completing Implementation of the ASD-SO/LIC’s Roles and Responsibilities

Most Actions That Remain to Be Implemented Do Not Have Clear Time Frames

Most of the actions remaining to be implemented do not have clear time frames for implementation. Based on our analysis of the February 2019 monthly report, we found that 31 of the 87 identified actions remain unimplemented. Of these 31 actions, only three have clear time frames for implementation. For example, one of the remaining actions involves enhancing the ASD-SO/LIC’s role in SOF military personnel-related issues. Among other things, this includes liaising with the military departments on relevant military personnel issues and coordinating on related policy issues. The February 2019 monthly report includes an action related to OASD-SO/LIC’s plans to coordinate a process to monitor promotions of SOF personnel and communicate issues to the military departments. The report specifies that the ASD-SO/LIC expected to implement this process in 2019. As another example, documenting and funding the Secretariat for Special Operations was expected to be resolved by the first quarter of fiscal year 2019. However, the remaining 28 actions do not have time frames for implementation. For example, some of the actions associated with implementing the ASD-SO/LIC’s key functions, such as acquisitions and legislative affairs, do not have clear time frames for implementation.
Regarding acquisitions, OASD-SO/LIC is developing standard operating procedures, such as regular coordination and meetings, but it has not established time frames for the creation or implementation of these procedures. Similarly, OASD-SO/LIC and SOCOM are prescribing roles with regard to legislative affairs pending further departmental guidance, but they have not established time frames within which these roles will be defined.
DOD officials identified several reasons for not having established time frames for the remaining actions. First, according to OASD-SO/LIC officials, their initial efforts were focused on identifying and prioritizing the list of actions needed to implement section 922, as reflected in the March 2018 report required by law. Since then, according to OASD-SO/LIC and SOCOM officials, OASD-SO/LIC has taken an incremental approach to implementing these actions, addressing items on a case-by-case basis as they occur. For example, OASD-SO/LIC initially placed a higher priority on implementing its fiscal roles and responsibilities, partly because the POM cycle included deadlines associated with the President's Budget for Fiscal Year 2020. Throughout the cycle, OASD-SO/LIC determined its specific role in each step of the POM process as the step arose. Second, OASD-SO/LIC officials stated that they had not established clear time frames linked to action items because the ASD-SO/LIC was new in that role and they were waiting for him to determine OASD-SO/LIC's broader strategy and goals, which they could use to inform implementation time frames. However, we note that the ASD-SO/LIC has been in that position since December 2017, and OASD-SO/LIC has hired new personnel who could help develop and track time frames.
Standards for Internal Control in the Federal Government emphasizes the need to establish time frames to implement actions effectively, and as we reported in June 2018, establishing time frames with key milestones and deliverables to track implementation progress is important for agency reform efforts. Failure to do so can have significant consequences. For example, by not establishing clear time frames for updating guidance that defines the ASD-SO/LIC's acquisition roles, the ASD-SO/LIC is at risk of having unclear roles and responsibilities that may overlap between SOCOM and the Office of the Secretary of Defense on functions related to acquisitions. According to SOCOM officials, having clearer time frames to update DOD guidance could enable OASD-SO/LIC and SOCOM to operate more efficiently and effectively. Without establishing clear time frames for the implementation of key oversight functions and other actions, the ASD-SO/LIC may not be able to fully execute OASD-SO/LIC's service secretary-like authority, and DOD decision-makers may not be well positioned to track progress and evaluate whether or how the ASD-SO/LIC's completed and pending actions support the full implementation of section 922.
Outdated Guidance Limits Clarity of Understanding of the ASD-SO/LIC's Broader Roles and Responsibilities under Section 922
While the ASD-SO/LIC's responsibilities, functions, relationships, and authorities are established in DOD Directive 5111.10, Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict (ASD SO/LIC) (Mar. 22, 1995) (incorporating Change 2, Oct. 21, 2011), this directive is outdated and does not reflect the ASD-SO/LIC's statutory roles under section 922, as codified at 10 U.S.C. § 138.
For example, DOD Directive 5111.10 states that the ASD-SO/LIC shall serve under the authority, direction, and control of the USD (P). However, section 922 states that the ASD-SO/LIC's exercise of authority over all special operations-peculiar administrative matters related to the organization, training, and equipping of SOF shall be subject to the authority, direction, and control of the Secretary of Defense. According to DOD officials, while other guidance broadly lays out DOD roles and responsibilities, this guidance lacks details concerning how to operationalize the ASD-SO/LIC's roles and responsibilities under the new administrative chain of command, creating potential confusion regarding the ASD-SO/LIC's roles and responsibilities on some key SOF-related issues. For example:
SOF personnel issues: SOF personnel activities include readiness reporting, training, education, warrior care, awards, decorations, and death notification. Support for SOF personnel issues is generally dispersed among different components, including the military services, SOCOM, the office of the Under Secretary of Defense for Personnel and Readiness (USD (P&R)), and OASD-SO/LIC. Although DOD Directive 5111.10 states that the ASD-SO/LIC "shall advise and coordinate with the Under Secretary of Defense for Personnel and Readiness on manpower" issues, it does not define whether manpower issues include SOF career management, such as special pay and promotion. According to DOD officials, DOD lacks overarching guidance that would clarify the ASD-SO/LIC's role on manpower issues. DOD Directive 5111.10 also does not provide specific information about the extent of the ASD-SO/LIC's coordination role as it relates, for example, to issues such as career management, retirement, pay, or promotion with regard to USD (P&R) responsibilities on SOF personnel management. As a result, according to DOD officials, the lack of clear and updated guidance has caused some confusion among DOD components. According to OASD-SO/LIC officials, after section 922 was implemented, OASD-SO/LIC's initial attempts to provide strategic outreach for SOF personnel faced some challenges because officials were not included in key personnel meetings. For example, OASD-SO/LIC officials told us they were not included in some meetings that discussed delegating civilian hiring waivers. By not participating in some key SOF personnel-related meetings, OASD-SO/LIC could have missed the opportunity to advocate for similar waiver authority. According to DOD officials, USD (P&R) officials did not fully understand the ASD-SO/LIC's authorities under section 922 when OASD-SO/LIC officials attended some meetings. Despite this confusion, the ASD-SO/LIC has taken some steps to strengthen its role on SOF personnel issues. For example, according to DOD officials, during the federal government civilian employee hiring freeze, DOD delegated civilian employee hiring waivers to the secretaries of the military departments but did not include waivers for the ASD-SO/LIC or SOCOM. Without the waiver authority to reinstate SOF personnel, SOCOM would have to request a waiver separately through the military services. OASD-SO/LIC officials told us that, by ensuring the ASD-SO/LIC was granted a similar waiver authority, they streamlined the process and supported SOCOM's efforts to hire additional SOF civilian personnel.
However, the ASD-SO/LIC's authority on SOF personnel matters remains unclear, and SOF personnel issues are generally dispersed among the authorities of USD (P&R), the military services, and SOCOM. Overall, it remains unclear what, if any, authorities the ASD-SO/LIC has with respect to leading and coordinating the department's SOF personnel issues.
Budgetary authority: SOF-related budgetary issues include SOCOM's special operations–specific funding and budget materials, the POM, acquisition, and congressional requests for information, among other things. DOD officials told us that before section 922 was enacted, the ASD-SO/LIC reviewed SOF-peculiar budget materials (generally linked to major force program funding) prior to submission of the POM, and the ASD-SO/LIC was notified of SOF-related congressional unfunded priority list submissions. The ASD-SO/LIC did not have principal staff assistant authority to approve the POM. DOD Directive 5111.10 states that the ASD-SO/LIC will provide overall supervision of the preparation and justification of the SOF budget and programs and will review the SOCOM POM. However, the DOD directive has not been updated to provide the ASD-SO/LIC with clear oversight and approval authority over special operations–specific funding, which traditionally has been controlled by SOCOM. DOD Directive 5111.10 also states that the ASD-SO/LIC will advise and coordinate with the Under Secretary of Defense for Acquisition and Technology on acquisition priorities, but this does not provide the ASD-SO/LIC with oversight of the SOF acquisition process. In addition, DOD does not have any guidance that gives the ASD-SO/LIC clear oversight roles regarding the SOF acquisition process. By comparison, SOCOM is responsible for the development and acquisition of special operations-peculiar equipment, materiel, supplies, and services in accordance with section 167(e) of Title 10, U.S. Code, and it executes funding in operation and maintenance, procurement, and military construction accounts, among other things. According to OASD-SO/LIC senior officials, the ASD-SO/LIC has some authority over special operations–specific funding through the POM process. According to OASD-SO/LIC officials, after implementing section 922, the ASD-SO/LIC established a new principal staff assistant authority to approve the POM in 2018. However, DOD officials familiar with SOF-related budgetary issues stated that it is unclear how much authority the ASD-SO/LIC has over funding issues to adjudicate potential disagreements between the services and SOCOM on either SOF-specific or common funding issues.
Special Access Programs (SAP): SAPs are programs established for a specific class of classified information that impose safeguarding and access requirements exceeding those normally required for information at the same classification level. Given the sensitive nature of these programs, DOD has established different levels of authorities to create and manage SAPs. According to DOD Directive 5205.07, Special Access Program (SAP) Policy, the Deputy Secretary of Defense designates certain DOD component heads, or DOD agency heads—for example, the secretary of a military department or the Commander, SOCOM—as cognizant authorities to manage and execute their respective SAPs. While the ASD-SO/LIC has always played a role in SOF-related SAPs, DOD officials stated that the role is expected to evolve as part of the implementation of section 922.
OASD-SO/LIC's February 2019 monthly report includes several actions intended to enhance the ASD-SO/LIC's role in the management of SAPs, and OASD-SO/LIC has already begun participating in various SAP-related conferences and meetings. However, according to DOD officials, the ASD-SO/LIC's future role related to SAPs remains unclear in existing guidance. For example, DOD Directive 5111.10 states that the ASD-SO/LIC will provide oversight over all special operations and low-intensity conflict related sensitive SAPs. Although ASD-SO/LIC and SOCOM officials told us that they are currently further defining these roles, the DOD directive has not been updated to clarify whether the ASD-SO/LIC should be included in the SAP governance process, which includes designating the ASD-SO/LIC as a cognizant authority with service secretary-like SAP responsibilities. DOD officials expressed some concerns that until these matters are clarified in guidance, it will remain unclear whether the ASD-SO/LIC and SOCOM should work together on SAP issues, and how their relationships with the various Under Secretaries of Defense with oversight authority will be managed.
Standards for Internal Control in the Federal Government states that management should define objectives clearly and assign responsibility for key roles throughout the organization. Specifically, the standards call for management to define objectives in specific terms so that they are understood at all levels of the entity. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and time frames for its achievement. We have also previously reported that management practices key to program success include clearly identifying organizational roles and responsibilities and clarifying program objectives. OASD-SO/LIC and SOCOM officials stated that updated guidance is needed to help clarify the ASD-SO/LIC's roles and responsibilities under section 922. In December 2018, OASD-SO/LIC officials told us that they were starting to update guidance on the ASD-SO/LIC's roles and responsibilities under section 922 in DOD Directive 5111.10. However, OASD-SO/LIC officials did not provide details about the information that would be updated, and did not provide a copy of that draft guidance. In addition, OASD-SO/LIC officials did not have clear time frames regarding when the guidance would be updated.
As DOD updates the ASD-SO/LIC's roles and responsibilities either in DOD Directive 5111.10 or through new guidance, it has an opportunity to clarify changes in its relationship with DOD components involved in overseeing SOF administrative matters related to personnel, budgetary authority, and SAPs. The SOF enterprise is a complex system, and without clearly identified roles and responsibilities for a service secretary-like role for the ASD-SO/LIC, other DOD components—such as the military departments, USD (P), and USD (P&R)—may not know the extent of the ASD-SO/LIC's and SOCOM's authorities in key issues where they have vested interests. For example, it will remain unclear what authorities the ASD-SO/LIC has with regard to SOF-related administrative matters, and which entities will have visibility over any problems or resourcing decisions related to the SOF enterprise. By clarifying the ASD-SO/LIC's roles and responsibilities with regard to its relationship with SOCOM and other DOD components, DOD can more effectively implement the intent of section 922.
DOD Has Taken Steps to Develop a Hiring Plan but Has Not Fully Incorporated Some Key Strategic Workforce Planning Principles
OASD-SO/LIC Has Hired Additional Personnel and Taken Steps to Develop a Hiring Plan to Guide Future Growth
OASD-SO/LIC has taken steps to develop a hiring plan to identify personnel requirements and an approach to hiring additional personnel. DOD's efforts began in 2017, when OASD-SO/LIC commissioned the Army Office of Manpower and Reserve Affairs to conduct a manpower study providing an analysis of the manpower requirements, based on unconstrained resources, necessary to satisfy the service secretary-like responsibilities under section 922. The Army's manpower study was based on nine functions, including budget, acquisitions, and legislative activities. For each function, the study identified corresponding tasks and the average man-hours, or time needed, to complete each task. The study, which was included in DOD's March 2018 report to Congress, ultimately estimated that up to 64 full-time equivalent (FTE) positions might be needed to implement the ASD-SO/LIC's section 922 responsibilities. According to OASD-SO/LIC officials, the study provided an initial framework for OASD-SO/LIC to determine its staffing needs, but the study was not comprehensive and OASD-SO/LIC's hiring needs will likely continue to change in the future.
Over the past 2 years, according to OASD-SO/LIC officials, OASD-SO/LIC has begun to hire personnel to fulfill various roles and responsibilities. Specifically, the number of FTEs hired to support OASD-SO/LIC's implementation of section 922 increased from 14 in March 2018 to 24 as of December 2018. In addition, section 361 of the John S. McCain NDAA for Fiscal Year 2019 gave the ASD-SO/LIC additional flexibility to hire staff in fiscal year 2019. For example, section 361 directed that not less than $4 million in fiscal year 2019 shall be used to fund additional civilian personnel to help implement section 922. Section 361 also provided OASD-SO/LIC an exemption from the statutory civilian personnel limitation in the Office of the Secretary of Defense imposed by 10 U.S.C. § 143. Figure 2 shows OASD-SO/LIC's hiring actions to date, along with key events related to the implementation of section 922.
In December 2018, OASD-SO/LIC officials completed a basic hiring plan to guide future personnel growth as OASD-SO/LIC continues to implement actions related to section 922. The plan—documented in a 10-slide presentation—includes OASD-SO/LIC's short-term hiring goals through the start of fiscal year 2020, a hiring approach involving a mix of permanent and temporary staff, and the identification of targeted skillsets for personnel hired. For example, the plan includes targets related to achieving key skills, such as force planning and shaping the President's Budget for Fiscal Year 2021. The plan also calls for OASD-SO/LIC to grow from 27 current FTEs to a total of 55 FTEs in fiscal year 2020.
OASD-SO/LIC's Hiring Plan Does Not Fully Incorporate Key Strategic Workforce-Planning Principles
While OASD-SO/LIC's current hiring plan represents a first step toward developing a broad overview of its hiring goals and some key hiring considerations, it does not fully incorporate some leading practices for strategic workforce planning.
As we have previously reported, strategic workforce planning addresses two critical needs: (1) aligning an organization's human capital program with its current and emerging mission and programmatic goals; and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. While agencies' approaches to workforce planning will vary, we have previously identified several key principles that strategic workforce planning should address, irrespective of the context in which the planning is done. GAO's prior work on workforce planning identified the following five key principles:
involve top management, employees, and other stakeholders in developing the strategic workforce plan;
determine the critical skills and competencies needed to achieve long-term goals;
develop strategies that are tailored to address critical competency gaps;
build the capacity needed to address requirements important to supporting workforce strategies; and
monitor and evaluate the agency's progress toward its human capital goals.
However, we found that as of December 2018, OASD-SO/LIC's hiring plan had not fully incorporated several of these key strategic workforce planning principles, as described below:
The hiring plan was not fully aligned with long-term goals. A key principle in strategic workforce planning is strategic alignment, which occurs when an agency's human capital program is linked with its mission and goals. However, we found that OASD-SO/LIC has not clearly linked its hiring plan with its overall mission and goals. For example, the hiring plan mentions short-term goals, such as analyzing the budget for fiscal year 2021, and long-term goals, such as strategic assessment and aligning the organization with National Defense Strategy requirements. However, the plan does not define strategic assessment, and it lacks detail about how personnel newly hired in fiscal year 2019 will help OASD-SO/LIC meet long-term goals related to strategic assessment. For example, OASD-SO/LIC recently hired seven personnel, but it is not clear whether the newly hired personnel have skills that match competencies, such as the ability to work with Special Access Programs, identified in OASD-SO/LIC's hiring plan. We have previously reported that unless hiring needs are clearly linked with long-term goals, the hiring plan may be incomplete or premature.
OASD-SO/LIC's approach did not fully involve stakeholders. While stakeholder involvement is not statutorily required, another key principle of effective strategic workforce planning is to involve top management, employees, and other stakeholders in developing, communicating, and implementing strategic workforce plans. We found several cases in which OASD-SO/LIC did not involve stakeholders in its key efforts. For example, although OASD-SO/LIC senior officials shared information about the hiring plan with one senior official at SOCOM, several OASD-SO/LIC and SOCOM officials stated that OASD-SO/LIC did not communicate the hiring plan's expectations or strategies more broadly, to involve a full range of OASD-SO/LIC and SOCOM officials and other stakeholders, such as USD (P). In another example, when OASD-SO/LIC hired personnel from September 2018 through December 2018, several OASD-SO/LIC and SOCOM officials were unclear about the specific roles and responsibilities of the new personnel hired.
The hiring plan did not include strategies to address critical competency gaps and identify related personnel requirements.
Leading principles of effective strategic workforce planning hold that agencies should develop strategies to address critical skill gaps and establish systematic personnel requirements processes, which are considered good human capital practices across government. However, we found that OASD-SO/LIC's hiring plan did not include completed competency-gap assessments or have procedures in place to periodically reassess personnel requirements. Without a systematic process to periodically assess personnel requirements, OASD-SO/LIC could not determine whether the Army study's initial estimates were the most efficient choice for the workforce. For example, with regard to the legislative affairs positions, OASD-SO/LIC and SOCOM officials told us that the Army manpower study's initial estimate of eight FTEs was too high. OASD-SO/LIC officials eventually hired two FTEs for the legislative affairs office, but the hiring plan did not include a methodology to analyze the workforce and explain why two FTEs would fit within the Army study's framework. According to OASD-SO/LIC officials, OASD-SO/LIC also did not use a standardized process to assess whether two FTEs would meet its requirements.
According to OASD-SO/LIC officials, the hiring plan is the first step in developing an initial framework, and they stated that it lacked implementation details. OASD-SO/LIC officials stated that they anticipate building upon the hiring plan as the current workforce plan evolves over time. In addition, OASD-SO/LIC officials stated that key priorities include strengthening OASD-SO/LIC's participation in and oversight of SOF resources through the POM and fiscal guidance processes. As a result, the hiring plan includes information about new personnel focused on fiscal oversight, such as analyzing the budget in fiscal years 2020 through 2021, but it does not clarify long-term goals, competency gaps, and program results tied to other priorities, such as legislative and acquisition-related functions. Officials from OASD-SO/LIC and SOCOM agreed that incorporating key principles in the strategic workforce plan would help them determine the most appropriate size and composition of OASD-SO/LIC's workforce. Until OASD-SO/LIC completes a comprehensive strategic workforce plan that includes the key principles outlined above, OASD-SO/LIC may not know what gaps exist in skills and competencies, and what its workforce strategies to fill those gaps should be. These issues could put OASD-SO/LIC at risk of hiring personnel who may not adequately meet its needs as defined by section 922.
Conclusions
As DOD increasingly relies on SOF, the department has taken steps to implement section 922. Given the expanded statutory authority under section 922, the ASD-SO/LIC has greater authority to oversee and advocate for the SOF enterprise. The ASD-SO/LIC has implemented several actions to clarify and strengthen its oversight roles and responsibilities, and it has many additional planned actions underway. However, without time frames to implement action items and revised or new guidance that clearly articulates the ASD-SO/LIC's roles and responsibilities with regard to SOCOM and the wider SOF enterprise, these changes may not be fully effective. In addition, without a strategic workforce plan that fully incorporates leading practices to ensure that the department has the right people, in the right place, at the right time, OASD-SO/LIC may not be well prepared to respond to future workload changes and manage its human capital strategically.
As OASD-SO/LIC makes progress in its hiring plan, it is important for OASD-SO/LIC to develop a strategic workforce plan to ensure that it appropriately addresses the human-capital challenges of the future and better contributes to the agency's efforts to meet its missions and goals.
Recommendations
We are making three recommendations to the Secretary of Defense:
The Secretary of Defense should ensure that the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict defines time frames for completing action items necessary to implement the Assistant Secretary of Defense for SO/LIC's expanded section 922 responsibilities. (Recommendation 1)
The Secretary of Defense should ensure that the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict updates existing guidance or develops new guidance to clarify the roles and responsibilities of the Assistant Secretary of Defense for SO/LIC and relationships with DOD components that have vested interests in the SOF enterprise—such as the military services, SOCOM, the Under Secretary of Defense for Personnel and Readiness, and the Under Secretary of Defense for Policy. (Recommendation 2)
The Secretary of Defense should ensure that the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict builds upon its hiring plan by developing a strategic workforce plan that incorporates key principles, such as aligning the plan with long-term mission goals; fully involving stakeholders in developing the plan; and including strategies to address critical competency gaps and identify related personnel requirements. (Recommendation 3)
Agency Comments and Our Evaluation
In written comments on the draft of this report, DOD partially concurred with our recommendations. Comments from DOD are summarized below and reprinted in appendix I. DOD also provided technical comments, which we incorporated as appropriate.
DOD partially concurred with the first recommendation that the ASD-SO/LIC define time frames for completing action items necessary to implement the ASD-SO/LIC's expanded section 922 responsibilities. In its response, DOD stated that most time frames have been established or the action completed. Additionally, DOD noted that some actions may not be completed because they depend on events, actions, or leadership decisions that are outside of OASD-SO/LIC's control. We agree that some DOD leadership decisions have yet to be made. However, 28 out of 31 already identified actions do not have clear time frames for implementation. Further, time frames can be modified as events change or better information becomes available. As we discuss in the report, establishing time frames with key milestones to track implementation progress is important for agency reform efforts. Without clear time frames, the ASD-SO/LIC may not be able to fully execute its service secretary-like authority.
DOD partially concurred with the second recommendation that the ASD-SO/LIC update DOD Directive 5111.10 to clarify the roles and responsibilities of the ASD-SO/LIC and relationships with DOD components that have vested interests in the SOF enterprise. DOD is in the process of revising this directive, but DOD noted that the purpose of DOD Directive 5111.10 is to define only specific Department-wide roles and missions for the ASD-SO/LIC and that it is not the appropriate issuance to define the ASD-SO/LIC's relationship with other DOD components in the SOF enterprise.
Given that DOD does not believe DOD Directive 5111.10 is the appropriate issuance to clarify the ASD-SO/LIC's relationships with DOD components, we modified our recommendation from focusing solely on updating DOD Directive 5111.10 to updating existing guidance and/or developing new guidance. Updating or developing guidance that clarifies the ASD-SO/LIC's relationship with DOD components, such as the military departments, USD (P), and USD (P&R), would likely allow for improved oversight of and collaboration on SOF matters related to personnel, budgetary authority, and SAPs.
DOD partially concurred with the third recommendation that the ASD-SO/LIC build upon its hiring plan by developing a strategic workforce plan that incorporates key principles, such as aligning the plan with long-term mission goals; fully involving stakeholders in developing the plan; and including strategies to address critical competency gaps and identify related personnel requirements. In its response, DOD agreed that there is room to improve the involvement of stakeholders. In addition, DOD stated that it developed a strategic workforce plan that aligns with long-term mission goals and has identified strategies to address critical competency gaps, including target skillsets. However, as noted in our report, the 10-slide presentation that constitutes the hiring plan lacks details that would be included in a comprehensive workforce plan. For example, the hiring plan did not explain how the hiring needs would be specifically tied to long-term goals, such as National Defense Strategy requirements. Although the hiring plan mentions some skillsets, it does not include a competency gap assessment or assess personnel requirements. As noted in our report, OASD-SO/LIC and SOCOM officials stated that the initial personnel requirements developed by the Army study were inaccurate for several reasons, including the lack of a standardized process to assess personnel requirements. Accordingly, we continue to believe that until OASD-SO/LIC develops a comprehensive strategic workforce plan that includes the key principles outlined in our report, OASD-SO/LIC could be at risk of hiring personnel who may not adequately meet its needs to perform the roles and responsibilities of section 922.
We are sending copies of this report to other interested congressional committees and the Acting Secretary of Defense. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-5431 or at russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix II.
Appendix I: Comments from the Department of Defense
Appendix II: GAO Contacts and Staff Acknowledgments
GAO Contact: Staff Acknowledgments
In addition to the contact named above, Jim Reynolds (Assistant Director), Tracy Barnes, Mikey Erb, Amie Lesser, Mike Silver, Cheryl Weissman, and Yee Wong (Analyst-in-Charge) made key contributions to this report.
Why GAO Did This Study
As DOD has increased its reliance on special operations forces, SOCOM's budget has grown from $5.2 billion in 2005 to $12.3 billion in 2018. Section 922 of the NDAA for Fiscal Year 2017 included provisions to enhance the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict's responsibilities to be similar to those of a military department secretary regarding the organization, training, and equipping of special operations forces. The Joint Explanatory Statement accompanying the fiscal year 2018 NDAA included a provision for GAO to assess DOD's actions in response to section 922. This report assesses (1) the extent to which DOD has identified and taken actions to implement section 922; (2) what, if any, challenges it faces in completing implementation; and (3) the extent to which its hiring approach for the office of the ASD-SO/LIC has incorporated strategic workforce planning principles. GAO reviewed relevant documents and interviewed DOD officials.
What GAO Found
Since 2017, the Department of Defense (DOD) has made recommendations, developed actions, and taken steps to address requirements in section 922 of the National Defense Authorization Act (NDAA) for Fiscal Year 2017 to expand the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict's (ASD-SO/LIC) roles and responsibilities. DOD officials noted that they have taken an incremental implementation approach to addressing section 922. In 2018, DOD identified 166 recommendations to change the ASD-SO/LIC's oversight of special operations forces (SOF). These recommendations were used to develop 87 actions that were necessary to implement section 922. As of February 2019, DOD had implemented 56 of these actions. For example, the Deputy Secretary of Defense approved a new Special Operations Policy and Oversight Council directive that identified the ASD-SO/LIC as the lead for that council. The Deputy Secretary of Defense also delegated to the ASD-SO/LIC the authority to approve waivers to hire civilian personnel during a civilian hiring freeze.
Although the office of the ASD-SO/LIC has taken many actions to implement section 922, DOD faces two key challenges in completing its implementation of the ASD-SO/LIC's new roles and responsibilities:
Lack of time frames. As of February 2019, 28 out of 31 unimplemented actions associated with section 922 did not have clear time frames for implementation. According to ASD-SO/LIC and U.S. Special Operations Command (SOCOM) officials, they did not prioritize establishing time frames because they took an incremental approach to implementing actions and addressed them on a case-by-case basis. Without clear time frames for implementation, the ASD-SO/LIC and SOCOM may be less effective in implementing section 922.
Unclear guidance. Current guidance about ASD-SO/LIC responsibilities is outdated: for example, it states that the ASD-SO/LIC shall report directly to the Under Secretary of Defense for Policy. However, under section 922, the ASD-SO/LIC is subject to the authority, direction, and control of the Secretary of Defense for special operations-peculiar administrative matters. The special operations forces enterprise is a complex system, and unless roles and responsibilities are clarified in guidance, other DOD stakeholders, such as the military services, may not know the extent of the ASD-SO/LIC's and SOCOM's authorities and responsibilities.
DOD officials expressed some concerns that until these matters are clarified in guidance, it will remain unclear whether the ASD-SO/LIC and SOCOM should work together—for example, on personnel issues—and how their relationships with stakeholders with oversight authority will be managed.
The office of the ASD-SO/LIC has made efforts to develop a workforce plan, including commissioning a manpower study and taking steps to develop a hiring plan; however, these efforts do not fully incorporate some leading principles for a strategic workforce plan. For example, the office of the ASD-SO/LIC did not broadly share the hiring plan, including with key officials from its own office and SOCOM. Without completing a comprehensive strategic workforce plan that includes key principles, the office of the ASD-SO/LIC may not know what gaps exist in skills and competencies in order to develop effective workforce strategies to fill those gaps. These issues could put the office of the ASD-SO/LIC at risk of hiring personnel who may not adequately meet its needs as defined by section 922.
What GAO Recommends
GAO is making three recommendations to DOD to establish time frames for section 922 actions; update applicable guidance to clarify roles and responsibilities for the ASD-SO/LIC and SOCOM; and develop a strategic workforce plan that incorporates key principles. DOD partially concurred with the recommendations, and GAO continues to believe the recommendations are valid, as discussed in the report. GAO also modified one recommendation to address DOD concerns regarding its applicability.
Background
According to ATF and USMS policy, the Directors of ATF and USMS have the authority to develop various policies, procedures, and guidance that specify the steps the components must or should take while investigating and adjudicating employee misconduct.
Investigation Process
ATF and USMS can receive allegations of employee misconduct from a variety of sources, including agency staff, the general public, and the DOJ OIG. Allegations of employee misconduct can include, for example, not following procedures associated with managing government-issued property or not reporting time and attendance accurately. Employee misconduct can occur outside of the workplace as well, such as local arrests of employees for domestic violence or driving under the influence of alcohol. ATF and USMS each have an intake or hotline function that is to initially assess the reported information and seriousness of each allegation to determine the appropriate next step in terms of which group or office within their respective component will conduct an investigation, if warranted. The investigation process involves engaging in fact-finding to the extent necessary to make an informed decision on the merit of an allegation. In accordance with ATF and USMS policy, for each misconduct allegation received, the components' investigative office (Internal Affairs) must provide the DOJ OIG with "right of first refusal." This review allows the DOJ OIG to either open an investigation or send the allegation back to the component for action. If the DOJ OIG declines the opportunity to investigate, the components assign the case to Internal Affairs. Specifically:
For ATF, cases that involve matters related to integrity are investigated by ATF Internal Affairs, while other cases are generally referred to ATF divisions to conduct inquiries (known as management referrals).
USMS typically assigns higher-level (i.e., more egregious) misconduct cases to Internal Affairs. For cases typically considered to involve lower-level offenses, USMS managers in divisions or districts conduct inquiries or fact-finding locally.
Each component has policies, procedures, and guidance for its Internal Affairs and local management for investigating cases of employee misconduct. Based on the investigative findings, the responsible office for each component can make a preliminary determination of whether there is sufficient evidence to support an allegation.
Adjudication Process
After investigations are completed, each component has an adjudication process whereby delegated officials propose discipline. For ATF, a headquarters entity—referred to as the Professional Review Board—proposes discipline for all cases investigated by ATF Internal Affairs. For cases involving misconduct by ATF employees outside of Internal Affairs jurisdiction, division management proposes and decides discipline. USMS utilizes various delegated agency officials to propose and decide discipline depending on the type of investigation. Discipline for both ATF and USMS employees can range in severity, depending on the unique findings and circumstances of each investigation. For misconduct within USMS warranting a suspension of 14 days or less, local management proposes and decides discipline. For both ATF and USMS, during adjudication, proposing and deciding officials determine whether an allegation is substantiated or unsubstantiated when considering if an action is warranted.
For substantiated cases that are determined to warrant action, components use their respective Table of Offenses and Penalties as a guide for determining appropriate disciplinary penalties. Each component is to provide employees with a letter of proposed discipline and an opportunity to respond before it makes a final decision on the discipline. After discipline is proposed and the employee's response is considered, final discipline is determined by a delegated official (deciding official), distinct from the proposing official. In addition, delegated officials are to consider particular mitigating and aggravating factors on a case-by-case basis when determining the appropriate penalty for an act of employee misconduct. The relevant factors that are considered, as appropriate, in determining the severity of the discipline include, but are not limited to, the nature and seriousness of the offense and its relation to the employee's duties, position, and responsibilities. This includes considering whether the offense was intentional or inadvertent, or was committed maliciously or for gain; the employee's past disciplinary record; and whether the offense was frequently repeated. For both ATF and USMS, there are three categories of employee misconduct outcomes:
Corrective/Non-disciplinary action. This is an administrative or non-disciplinary action, such as a letter of counseling or a letter of guidance and direction, that informs an employee about unacceptable performance or conduct that should be corrected or improved.
Disciplinary action. This includes actions ranging from a letter of reprimand up to a suspension of 14 days or less. A letter of reprimand describes the unacceptable conduct that is the basis for a disciplinary action, and represents the least severe form of disciplinary action. Suspensions in this category involve the placement of an employee in a nonduty, non-pay status for up to and including 14 days.
Adverse action. This involves a suspension of more than 14 days (including an indefinite suspension), demotion to a lower pay band or rate of pay, or removal (an involuntary separation from employment). According to the U.S. Merit Systems Protection Board, an indefinite suspension is appropriate when evidence exists to demonstrate misconduct of a serious nature, such as when an employee has committed a crime for which a sentence of imprisonment can be imposed, when the agency has concerns that an employee's medical condition makes the person's presence in the workplace dangerous or inappropriate, or when an employee's access to classified information has been suspended. Also, according to the board, a demotion is a reduction in grade or a reduction in pay, while a removal terminates the employment of an individual.
Figure 1 provides an overview of ATF and USMS employee misconduct processes.
Case Management Systems
ATF and USMS have case management systems that are designed to maintain employee misconduct data—such as the date of the alleged incident, source of the allegation, description of the alleged misconduct, and the status of the investigation. ATF's Professional Review Board uses another system to manage outcome data associated with Internal Affairs investigations. After adjudication of ATF Internal Affairs investigations, the board is to provide this outcome data to ATF Internal Affairs for inclusion in its system.
Similarly, after the adjudication of management referrals for action, ATF managers are to provide outcome data to ATF Internal Affairs to include in its system. In addition to the system USMS uses to manage Internal Affairs investigations, the agency has a separate system to record outcome data.
ATF and USMS Completed About 3,900 Employee Misconduct Investigations from Fiscal Years 2014 through 2018, with About 320 Involving Claims of Management Retaliation
ATF Initiated About 1,600 Investigations of Employee Misconduct from Fiscal Years 2014 through 2018
ATF Investigations and Allegations
Our analysis of ATF employee misconduct data found that ATF opened 1,581 employee misconduct investigations during fiscal years 2014 through 2018. As shown in table 1, the majority of ATF misconduct cases during this period were management referrals to divisions for informational purposes or for action. Table 2 shows that the most common allegation category of misconduct that ATF received from fiscal year 2014 through 2018 was job performance failure, representing 8 percent of all allegations; this category includes not attending meetings, submitting reports of inspection late, or becoming agitated during performance feedback, among other things.
ATF Offense Categories and Disciplinary Outcomes
After investigations are completed, results are forwarded to the Professional Review Board for adjudication, and adjudication results are to be entered into ATF's Human Resources system. For investigations that were adjudicated during the period we reviewed, six types of offense categories made up about 60 percent of those substantiated and captured in the ATF Human Resources system, as shown in figure 2. The exercise of poor judgment (14 percent) and the failure to adequately secure government property (13 percent) were the most common offenses. The employee misconduct outcomes for offenses ranged from corrective actions (e.g., letters of counseling or caution) to adverse actions such as suspensions and removals. Specifically, of the 503 investigations that had final actions reported in ATF's case management system, disciplinary action—suspensions of 14 days or less and letters of reprimand—accounted for 176 (about 35 percent) of the final outcomes. Also, 135 (about 27 percent) of investigations adjudicated resulted in corrective actions (cautions such as a verbal or written warning). Further, 87 (about 17 percent) of these 503 investigations and management referrals were closed for various reasons, such as insufficient evidence of an employee's inappropriate behavior or clearance of the charges after investigation, while adverse actions represented 47 (about 9 percent) of these outcomes, as shown in figure 3.
USMS Completed About 2,300 Investigations of Employee Misconduct from Fiscal Years 2014 through 2018
USMS Investigations and Allegations
Our analysis of USMS employee misconduct data shows that USMS opened 2,347 employee misconduct investigations during fiscal years 2014 through 2018 that were also closed at the time USMS responded to our request for information. As shown in table 3, USMS Internal Affairs investigated the majority of the component's employee misconduct cases. As shown in table 4, the most common misconduct allegations for USMS were violations of the code of professional responsibility (21 percent), conduct unbecoming or discourteous behavior (13 percent), and failure to follow procedures (12 percent).
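The outcome shares reported in this section follow directly from the underlying adjudication counts. The following is a minimal illustrative sketch in Python: the counts are the ATF figures above, but the shorthand category labels and the script itself are ours, not part of either component's reporting. It shows the arithmetic and the whole-percent rounding behind the "about" qualifiers.

```python
# Outcome counts for the 503 ATF investigations and management referrals
# with final actions recorded (counts taken from the report's figures;
# the abbreviated labels are illustrative).
atf_outcomes = {
    "disciplinary action (reprimand or suspension of 14 days or less)": 176,
    "corrective action (verbal or written caution)": 135,
    "closed (e.g., insufficient evidence or charges cleared)": 87,
    "adverse action (suspension over 14 days, demotion, or removal)": 47,
}
total_adjudicated = 503  # cases with final actions reported

for outcome, count in atf_outcomes.items():
    share = count / total_adjudicated * 100
    # Rounding to the nearest whole percent reproduces the "about"
    # figures in the text (e.g., 176/503 = 34.99 -> about 35 percent).
    print(f"{outcome}: {count} ({share:.0f} percent)")

# The four categories do not sum to 503; the remainder reflects
# dispositions not broken out in the figure.
print("remaining:", total_adjudicated - sum(atf_outcomes.values()))
```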
USMS Offense Categories and Disciplinary Outcomes
As shown in figure 4, general misconduct while on duty and failure of staff to follow instructions were the most frequent offenses from fiscal years 2014 through 2018, representing 383 (about 25 percent) and 266 (about 18 percent) of offenses, respectively. Additionally, according to USMS adjudication data, of the 2,347 investigations that were opened in fiscal years 2014 through 2018, USMS had adjudicated 1,729 misconduct cases at the time USMS responded to our request for information (March 2019 for investigations opened in fiscal years 2014 through 2017 and April 2019 for investigations opened in fiscal year 2018). As shown in figure 5, the most common disciplinary outcomes for USMS were non-adverse actions (corrective and disciplinary actions), which accounted for 988 (about 58 percent) of final outcomes. USMS did not take disciplinary action on 533 (about 31 percent) of completed investigations forwarded for adjudication. The deciding official will not determine an action against an employee if he or she does not believe the allegations warrant action. Adverse actions were less common, with removals, suspensions of 15 days or more, and demotions accounting for 83 (about 5 percent) of all employee actions. The remaining 120 (about 7 percent) of completed investigations forwarded for adjudication resulted in retirements, resignations, transfers, and other outcomes such as settlement agreements.
Over 300 Management Retaliation Claims from ATF and USMS Employees Were Investigated in Fiscal Years 2014 through 2018, with Few Resulting in Discipline
According to the U.S. Merit Systems Protection Board, to prove a claim of management retaliation, the investigation must show that the employee engaged in a protected activity (e.g., filing an EEO claim); the agency official with knowledge of the employee's protected activity took, failed to take, or threatened to take a personnel action against the employee; and there is a causal connection between the protected activity and the personnel action. From fiscal years 2014 through 2018, ATF and USMS employees submitted 70 claims of management retaliation directly to their Internal Affairs division or the DOJ OIG, and about 240 to their EEO Office. OSC does not record data in its case management system related to DOJ employee disclosures (claims) by component.
ATF, USMS, and DOJ OIG Investigations
From fiscal years 2014 through 2018, ATF, USMS, and the DOJ OIG completed 70 investigations of employee misconduct that alleged management retaliation.
ATF Internal Affairs retaliation investigations. According to ATF investigations data, from fiscal years 2014 through 2018, ATF Internal Affairs investigated 23 cases alleging management retaliation. Of these 23 cases, Internal Affairs referred 20 to division management for informational purposes. Of the remaining three cases, two were investigated by division management and resulted in the employees being counseled by their supervisors. The third case was investigated by Internal Affairs and resulted in one employee receiving a clearance letter and another receiving a letter of caution, with another two employees retiring.
USMS Internal Affairs retaliation investigations. According to USMS investigations data, from fiscal years 2014 through 2018, USMS Internal Affairs investigated 26 cases alleging management retaliation. Of these 26 cases, 12 were closed after the investigation was completed due to insufficient evidence.
Of the remaining 14 cases, four resulted in employees retiring during or after adjudication, four had no employee action, three were closed due to ongoing related cases, and there was one oral admonishment, one letter of counseling, and one suspension of 14 days.
DOJ OIG retaliation investigations. According to our analysis of DOJ OIG data, from fiscal years 2014 through 2018, the DOJ OIG investigated 21 ATF or USMS cases alleging management retaliation (four ATF cases and 17 USMS cases). The DOJ OIG filed all four ATF cases in its management system for informational purposes only (no action), and also sent one of the four cases to ATF for informational purposes. Of the 17 USMS cases, the DOJ OIG filed 12 cases in its management system for informational purposes (no action), found that three cases lacked sufficient evidence, closed one case due to one of the involved employees being reassigned and the other resigning, and in one case made a procedural recommendation to the Director of USMS. Figure 6 shows the number of ATF, USMS, and DOJ OIG management retaliation investigations from fiscal years 2014 through 2018.
ATF and USMS EEO Office Investigations
ATF and USMS employees may file claims of management retaliation through their agency's EEO office. We analyzed ATF and USMS employee misconduct and EEO data to determine (1) the number of employees who had filed an EEO claim of management retaliation and (2) whether these employees were also subject to a misconduct investigation.
ATF EEO management retaliation investigations. From fiscal years 2014 through 2018, the ATF EEO Office received 128 claims from 104 employees that included management retaliation as the basis, but none of these claims has been found to support a finding of retaliation. ATF EEO and employee misconduct data show that employees in 54 of the 128 EEO cases (36 total individuals) were also subject to misconduct investigations that were adjudicated during this time period. Of the 36 employees, 24 submitted their EEO claim subsequent to their misconduct investigation. The remaining 12 employees submitted their EEO claim prior to their first employee misconduct investigation. Figure 7 shows the number of ATF employees who filed EEO claims of management retaliation and were also the subject of an employee misconduct investigation.
USMS EEO retaliation claims. From fiscal years 2014 through 2018, the USMS EEO Office received 110 claims from 69 individuals with management retaliation as the basis, of which one resulted in a final agency decision supporting the claim. USMS EEO and employee misconduct data show that individuals in 75 of the 110 EEO cases (49 total individuals) were also subject to a total of 134 employee misconduct investigations that were adjudicated from fiscal years 2014 through 2018. Of these 49 individuals, 32 submitted their EEO complaint subsequent to their misconduct investigation. The remaining 17 employees submitted their EEO claim prior to their first employee misconduct investigation, of which three claims resulted in a settlement agreement. Figure 8 shows the number of USMS employees who filed EEO claims of management retaliation and were also the subject of an employee misconduct investigation.
U.S. Office of Special Counsel Investigations of Management Retaliation
From fiscal years 2014 through 2018, OSC did not report any instances of management retaliation for ATF or USMS.
OSC reported one investigation related to one USMS employee who improperly secured personally identifiable information, for which USMS took corrective actions. According to data maintained in an ATF Office of Chief Counsel case management system, ATF recorded eight instances where ATF counsel rendered assistance to OSC on retaliation-related matters. The USMS Office of General Counsel does not maintain OSC-related data in any USMS case management system.
ATF and USMS Did Not Consistently Document Some Key Internal Controls for Processing Allegations of Employee Misconduct or Fully Monitor These Processes
ATF and USMS Documented the Implementation of Some Key Internal Controls, but Not Others
ATF and USMS have incorporated some key internal controls for processing employee misconduct allegations into their policies and procedures, but have not consistently documented the implementation of these controls. ATF and USMS have also established policy requirements related to timeliness in completing employee misconduct investigations, but have not established performance measures to monitor all of these requirements. Further, both ATF and USMS have established mechanisms to monitor various aspects of the components' operations, but do not use these mechanisms to fully monitor key internal controls related to their employee misconduct investigation and adjudication processes.
ATF and USMS documented the implementation of some key control activities that are important for ensuring quality and independence in the processing of allegations of employee misconduct. However, they did not document other key control activities.
Supervisory review of investigations. According to Federal Quality Standards for Investigations, supervisory or management review of misconduct investigations helps ensure that investigations are comprehensive and performed correctly. ATF and USMS both require this review in policy for misconduct investigations and have incorporated it in their respective procedures. Both ATF and USMS also have a policy or procedure for documenting this control activity in either their case management system or case file records.
We found that ATF consistently documented supervisory review of its employee misconduct investigations. Overall, based on our case file reviews, we estimate that 98 percent of the population of ATF investigations or management referrals for action from fiscal year 2014 through fiscal year 2018 documented supervisory review. For our sample, we found documentation of supervisory review in all 36 Internal Affairs investigations and all 26 management referrals for action. We also found supervisory review for all 12 investigations or referrals in our sample with proposed adverse actions and all nine investigations or referrals in our sample that involved an individual who had filed an EEO claim of management retaliation.
For USMS, we found that the agency consistently documented supervisory review of its Internal Affairs investigations, but did not consistently document this review for its district and division investigations. Overall, based on our case file reviews, we estimate that 60 percent of the population of USMS investigations (2,347) from fiscal year 2014 through fiscal year 2018 documented supervisory review. For our samples, we found documentation of supervisory review in 29 of the 30 Internal Affairs investigations.
However, for USMS district and division investigations, we found that 23 of 59 investigations had documentation of supervisory review through the required use of a field incident report. We also found that all 20 investigations in our sample with proposed adverse actions had documentation of supervisory review. Further, we found that six of the 12 USMS investigations in our sample that involved an individual who had filed an EEO claim of management retaliation had documentation of supervisory review. The remaining six cases without documentation of supervisory review were district or division investigations, which are typically considered to involve lower-level offenses. Although USMS policy on Field Operational Reports requires the use of a standard form to document supervisory review for district and division misconduct investigations, USMS officials stated that district and division management periodically document a completed investigation with an email confirmation for various reasons, including that the investigation may involve non-adverse actions. However, according to USMS policy, a memorandum does not serve as a substitute for the required field report. Taking steps to ensure that supervisory review of division and district investigations is documented in accordance with USMS policy would provide greater management assurance that investigations are performed comprehensively and consistently, and that this control is operating as intended.
Legal sufficiency review. ATF policy on Integrity and Other Investigations states that managers will review the investigative findings with the Office of Chief Counsel's management division to propose and decide discipline or other actions. ATF also has procedures for documenting these activities in its case management systems. We found that ATF consistently documented legal sufficiency review during the adjudication phase for its Internal Affairs investigations. Specifically, we found that 32 of 36 cases investigated by Internal Affairs documented legal counsel review during the adjudication phase. One of these 32 cases had review of the proposed action but was ultimately cleared. For the four cases without documentation of legal counsel review, this review was not applicable. Specifically, one case involved an employee who received a clearance letter; one case was still pending a final decision; one case involved an employee who was on military leave; and one case involved an employee who had retired. We also found that legal counsel review was documented in 11 of the 12 cases in our sample where adverse action was proposed—all of which were investigated by Internal Affairs—and the remaining case was still pending adjudication as of August 2019. Further, we found documentation of legal counsel review for six of the nine employee misconduct investigations that involved an EEO claim of management retaliation. Of the three investigations that did not have documentation, one was an Internal Affairs case where the final action was pending, and the other two cases were management referrals for action. Regarding ATF Internal Affairs investigations referred to division management for action, we found that legal counsel review was documented for nine of 26 cases during the adjudication phase for the proposed discipline, the final disciplinary action, or both.
Documenting legal counsel review for cases referred to division management for action would provide ATF management greater assurance that all proposed discipline or other actions are legally sufficient. Although ATF policy requires managers to review investigative findings with the Office of Chief Counsel when handling management referrals, ATF officials stated that supervisors may handle the matters within the division without informing or consulting with legal counsel if there is no proposed discipline. According to ATF officials, the agency plans to revise its policy on Integrity and Other Investigations in August 2020, the next scheduled recertification of the order, to allow managers discretion in determining whether legal review is needed in instances where discipline is not imposed. USMS policy on Discipline Management Business Rules requires legal review for Internal Affairs investigations that involve a proposed adverse action, but does not require legal review for investigations that involve non-adverse actions. USMS also has procedures for documenting this activity in its case management system and physical case files. We found that USMS consistently documented this legal sufficiency control. Specifically, we found that all 20 proposed adverse actions in our sample had documented legal counsel review. Of the 12 cases in our sample that involved an individual who had also filed an EEO claim, three had proposed adverse actions, all of which had documentation of USMS legal review. DOJ OIG right of first refusal. According to ATF and USMS policies on misconduct investigations and management referrals, for each misconduct allegation received, the components must provide the DOJ OIG the opportunity to review the case for right of first refusal. This review allows the DOJ OIG to either open an investigation or defer the case back to the component for investigation. This review is designed to maintain independence by determining which cases warrant investigation outside of ATF and USMS. We found that ATF and USMS consistently forwarded allegations of employee misconduct to the DOJ OIG for right of first refusal. Specifically, our analysis of ATF and DOJ OIG data found that the DOJ OIG did not have a record of receiving five out of 1,581 ATF investigations or management referrals for right of first refusal. There were also 41 instances for which ATF did not have a DOJ OIG case number, which prevented the DOJ OIG from checking its records for evidence that ATF had forwarded the case for right of first refusal. We found that 37 of the 41 cases occurred in fiscal years 2014 or 2015, with only four cases occurring in fiscal years 2016 through 2018. Our analysis of USMS and DOJ OIG data found that the DOJ OIG did not have a record of receiving 10 out of 2,347 investigations for right of first refusal. Verification of accuracy of case management system data. ATF and USMS do not have a policy requirement for the use of a method or tool to verify system data associated with both investigative and disciplinary processes. However, according to Standards for Internal Control in the Federal Government, management is to use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks. The standards also state that data maintain value to management in controlling operations and making decisions, and management is to design control activities so that all records are complete and accurate.
Regular reviews of case management data can identify outliers or abnormalities, such as missing information. ATF officials stated that agency managers verify that the initial information related to the allegation is accurate in the case management system. However, additional reviewers in the misconduct process do not verify investigation and adjudication information subsequent to the allegation in the case management system. The officials added that after Internal Affairs investigations and management referrals for action are completed, the record of investigation and supporting materials are reviewed by management to assess the quality of the investigation before uploading to the case management system. However, we found that information related to the investigation and adjudication of these allegations was sometimes not captured in automated data fields. Since uploaded documents cannot be analyzed easily, the Office of Professional Responsibility manually reviews these documents to compile an annual report on employee misconduct activities, such as the number of investigations and outcomes. According to ATF and USMS officials, employee misconduct procedures include supervisor review in several areas. For example, ATF and USMS officials stated that managers review reports of investigation and other documents to ensure certain information is recorded in case files or case management systems. ATF officials provided evidence that they verify certain data when a case is initiated, such as the identity of the subject and allegation. ATF officials also provided evidence that managers review the report of investigation for quality. USMS officials stated that they confirm that the employee under investigation is the correct employee in the system record and that the case was referred to the DOJ OIG for right of first refusal. ATF officials also stated that reviewers involved in employee misconduct processes compare case file documentation against case management system records. However, we found that hundreds of case management system records were missing key information, such as the final outcomes of employee misconduct investigations and DOJ OIG case numbers for ATF, and dates related to district or division investigations for USMS. We also found that ATF and USMS lack policy for verifying the accuracy and completeness of data recorded in their respective employee misconduct case management systems. This policy could be implemented, for example, through the use of a method or tool, such as a data entry checklist, that would guide agency officials when entering information into systems. Establishing policy could help ensure that case management system data are accurate and complete and would allow ATF and USMS to effectively monitor and report on their employee misconduct processes. ATF and USMS Have Established Timeliness Requirements for Completing Employee Misconduct Investigations, but Have Not Fully Established Performance Measures ATF and USMS have established requirements in their policies regarding timeliness in completing employee misconduct investigations. However, ATF has not developed performance measures to monitor its timeliness requirements. USMS has developed a measure to monitor its Internal Affairs investigations, but not for its district and division investigations. 
Standards for Internal Control in the Federal Government state that management should define objectives in measurable terms so that responsible personnel and management are held accountable, and their performance toward achieving those objectives can be assessed. ATF Does Not Have a Performance Measure to Monitor Timeliness ATF policy on Integrity and Other Investigations requires completing Internal Affairs investigations generally within 120 days, and management referrals for action within 60 days. ATF officials acknowledged the importance of addressing employee misconduct allegations in a timely manner. For example, pending completion of a misconduct investigation, ATF may withhold a positive human resource action or personnel assignment, such as a promotion or membership on a task force. ATF employees under investigation for misconduct may also be placed on restricted duty, which, depending on the case, may prevent the employee from accessing information systems and require the employee to surrender his or her government-issued firearms, vehicle, other property, and credentials. ATF officials stated that ATF management tracks ongoing investigations—for both Internal Affairs investigations and management referrals for action—and the amount of time they are open. ATF Internal Affairs officials stated that managers track the duration of all investigations on a weekly basis, and will inquire about the status of investigations and reasons why any exceed the duration standards. However, ATF has not developed a performance measure to monitor performance against timeliness requirements—for example, whether a certain percentage of Internal Affairs investigations during a given time period were completed within the required 120 days. Based on our analysis of ATF data, Internal Affairs met its policy requirement of completing its investigations within 120 days about 36 percent of the time (86 of 240 investigations). ATF data also show that the agency met its policy requirement of 60 days for about 49 percent (205 of 419) of its management referrals for action (see fig. 9). According to ATF officials, ATF does not use measures to monitor performance related to the duration of Internal Affairs investigations and management referrals for action due to numerous factors, such as investigators handling multiple cases at the same time and the involvement of the DOJ OIG. We have previously reported that other federal agencies have established such performance measures and have taken these challenges into account when developing their methodologies for measuring timeliness. Establishing a performance measure to monitor the timeliness of Internal Affairs investigations and management referrals for action could provide ATF management more complete information in overseeing investigations and help improve the efficiency of employee misconduct processes. USMS Met Timeliness Goals for District and Division Investigations, but Lacks a Measure to Monitor These Investigations USMS policy requires completing Internal Affairs investigations within 90 days, and within 30 days for investigations referred to its districts and divisions. USMS officials noted the importance of addressing employee misconduct allegations in a timely manner, with regard to effecting positive human resource actions such as promotions. USMS Internal Affairs has developed a performance measure to monitor whether it is completing its investigations within the required 90-day time frame.
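To illustrate the kind of measure at issue, the following minimal sketch (in Python) computes the share of closed investigations completed within a required time frame; the case records and field names are hypothetical and are not drawn from any actual ATF or USMS case management system.

```python
from datetime import date

# Hypothetical case records; the identifiers, dates, and field names are
# illustrative only, not drawn from any ATF or USMS system.
cases = [
    {"case_id": "IA-001", "opened": date(2018, 1, 5), "closed": date(2018, 3, 20)},
    {"case_id": "IA-002", "opened": date(2018, 2, 1), "closed": date(2018, 8, 15)},
    {"case_id": "IA-003", "opened": date(2018, 4, 10), "closed": None},  # closure date not recorded
]

def timeliness_measure(cases, required_days):
    """Return (percent of closed cases meeting the time frame, closed count,
    count of records whose duration could not be computed)."""
    met = closed = missing = 0
    for case in cases:
        if case["opened"] is None or case["closed"] is None:
            missing += 1
            continue
        closed += 1
        if (case["closed"] - case["opened"]).days <= required_days:
            met += 1
    percent = 100.0 * met / closed if closed else None
    return percent, closed, missing

percent, closed, missing = timeliness_measure(cases, required_days=90)
print(f"{percent:.0f}% of {closed} closed cases met the 90-day requirement; "
      f"{missing} record(s) lacked the dates needed to compute duration.")
```

A measure of this form could be applied to any policy time frame (for example, 120 days for ATF Internal Affairs investigations or 30 days for USMS district and division investigations), and the count of records with missing dates would also show how completely duration data are recorded.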
According to USMS officials, the agency plans to change the required time frame for completing Internal Affairs investigations from 90 days to 180 days, which, according to the officials, is a time standard used by most other law enforcement agencies. USMS does not have a performance measure to monitor the duration of investigations conducted by its districts and divisions. According to USMS officials, these investigations do not involve high-level offenses that would pose a significant risk to the agency. Based on our analysis of USMS data, Internal Affairs met its policy requirement of completing its investigations within 90 days 35 percent of the time (468 of 1,320 investigations for which data were recorded in USMS systems), as shown in figure 10. Our analysis also shows that USMS met its policy requirement of completing its district and division investigations within 30 days over 99 percent of the time (489 of 490 investigations for which data were recorded in USMS systems). Although we found that USMS met its timeliness requirement related to district and division investigations over 99 percent of the time, management responsible for oversight has not developed a performance measure to monitor whether the agency meets its policy requirement. Without such a measure, the agency may be unable to identify potential performance issues in the future. Monitoring these investigations is also important since data on the duration of about 25 percent (165 of 655) of district and division investigations that were opened from fiscal years 2014 through 2018 were not recorded in USMS systems at the time the agency provided the data. Developing a measure for the duration of district and division investigations would provide USMS leadership with greater assurance that the agency is complying with policy requirements. ATF and USMS Do Not Use Existing Oversight Mechanisms to Fully Monitor Key Internal Controls for Their Employee Misconduct Processes ATF and USMS do not use their existing oversight mechanisms to fully monitor key internal controls related to employee misconduct processes. Standards for Internal Control in the Federal Government call for management to establish and implement activities to monitor the internal control system and evaluate the results, as well as remediate identified internal control deficiencies. ATF Oversight Mechanisms ATF has two oversight mechanisms that it uses to monitor internal controls related to financial reporting, compliance activities, and operations—annual self-assessments and internal management reviews. However, according to ATF officials, the component does not use these mechanisms to monitor any internal controls related to its employee misconduct processes. Specifically, according to an ATF official, as part of ATF's annual self-assessment program, all component divisions, including Internal Affairs, are to test financial processes, such as government credit card payments. The ATF Inspection Division also conducts internal management reviews to test compliance with the same activities that are covered by the self-assessment program. ATF officials stated that the scope of the self-assessment program does not include key internal control activities related to employee misconduct processes due to competing priorities.
According to an Inspection Division official, the division also has not conducted an internal management review of the offices responsible for employee misconduct processes (e.g., the Internal Affairs division, the Professional Review Board, Bureau Deciding Official activities) in about 10 years due to competing priorities. ATF officials stated that the agency plans to review these divisions and offices in the future, but did not have any specific plans for how internal management reviews would be used for divisions and offices in the misconduct process or when these reviews would begin. While the scope of these reviews has not been determined, the officials stated that internal management reviews could include testing internal control activities related to allegations of employee misconduct, such as investigative review and approval, legal sufficiency review, and case management information system data reliability and completeness. Monitoring key internal controls related to employee misconduct processes through existing oversight mechanisms would help ATF management ensure that controls are being implemented as required by policy. USMS Oversight Mechanisms USMS has two oversight mechanisms that it uses to monitor internal controls related to financial reporting, compliance activities, and operations. Specifically, USMS's Compliance Review Office, within the Office of Professional Responsibility, conducts on-site management reviews at USMS districts and divisions. USMS also has an annual self-assessment program that requires divisions and districts to self-assess their compliance with certain requirements by testing for and remediating any internal control deficiencies. However, because of competing priorities, USMS does not use these mechanisms to fully monitor key internal controls over employee misconduct processes. According to Office of Professional Responsibility Compliance Review officials, the scope of on-site management reviews conducted at selected USMS districts and divisions during fiscal years 2014 through 2018 did not include employee misconduct processes. The officials also stated that on-site reviews during this period did not include the Internal Affairs and Discipline Management divisions. According to USMS officials, the agency plans to conduct an on-site management review at the Internal Affairs division in fiscal year 2021. The officials added that the compliance review cycle for each district and division currently occurs once every 9 years, but that reviews will become more frequent, occurring once every 4 years. Our analysis of USMS annual self-assessment guides showed that from fiscal years 2014 through 2018, the guides included testing for most key controls related to employee misconduct processes. For example, Internal Affairs and Discipline Management self-assessment guides included questions on whether Internal Affairs forwards cases to the DOJ OIG for right of first refusal, the Chief of Internal Affairs reviews investigative reports, investigations are completed within 90 days, and data on allegations are entered into the case management system.
The self-assessment guide for USMS districts and divisions included questions to assess compliance with the timeliness of investigations (within 30 days); use of the Table of Offenses and Penalties; consideration of Douglas Factors (certain factors that USMS is to consider about an employee when deciding discipline, such as the employee's need for training); Delegations of Authority for proposing and deciding officials; and other Human Resource policy areas, such as administrative leave and eligibility for promotion. However, although legal sufficiency review of proposed adverse actions is required by policy and is a key internal control, USMS did not design its self-assessment guides for the Internal Affairs and Discipline Management divisions to include testing for such reviews. Revising the scope of on-site management reviews to include employee misconduct processes and revising self-assessment guides to include testing for legal sufficiency of proposed adverse actions would help USMS gain greater assurance that these controls are implemented as required by policy. Conclusions ATF and USMS have established internal controls related to some employee misconduct investigation and disciplinary processes, but additional actions could strengthen their controls. Specifically, USMS does not ensure that supervisory review of division and district investigations is documented in accordance with agency policy. ATF and USMS also have not developed policy for verifying the accuracy and completeness of information in employee misconduct systems. Ensuring supervisory review is documented as required and establishing policy for verifying information in misconduct systems would provide greater consistency in processes and greater assurance that controls are operating as intended and that corrective actions are implemented as needed. ATF and USMS policies also include required timelines for completing investigations. However, ATF does not have a performance measure to monitor whether it is meeting its timeliness requirement, such as the percentage of Internal Affairs investigations completed within 120 days. USMS does not have a performance measure to monitor and assess its performance in meeting the required time to complete its district and division investigations within 30 days. Developing performance measures to monitor the timeliness of all investigations could provide more complete information for ATF and USMS management responsible for oversight and allow them to address any related performance issues in a timely manner. Further, ATF and USMS have established oversight mechanisms, such as internal management reviews, to monitor select aspects of the components' operations, such as financial operations. However, ATF and USMS generally have not used these mechanisms to monitor internal controls related to employee misconduct processes; doing so would help ATF and USMS management ensure that controls are implemented as required by policy. Recommendations for Executive Action We are making a total of seven recommendations, including three to ATF and four to USMS. Specifically: The Director of the U.S. Marshals Service should take steps to ensure that supervisory review of division and district investigations is documented in accordance with USMS policy. (Recommendation 1) The Director of ATF should develop policy for verifying the accuracy and completeness of information in ATF employee misconduct systems. (Recommendation 2) The Director of the U.S.
Marshals Service should develop policy for verifying the accuracy and completeness of information in USMS employee misconduct systems. (Recommendation 3) The Director of ATF should develop a performance measure to monitor the timeliness of misconduct investigations, according to policy requirements. (Recommendation 4) The Director of the U.S. Marshals Service should develop a performance measure to monitor the timeliness of district and division misconduct investigations, according to policy requirements. (Recommendation 5) The Director of ATF should modify existing oversight mechanisms to include the monitoring of key internal controls related to employee misconduct investigations. (Recommendation 6) The Director of the U.S. Marshals Service should modify existing oversight mechanisms to fully monitor key internal controls related to employee misconduct investigations. (Recommendation 7) Agency Comments We provided a draft of this product to DOJ for review and comment. DOJ concurred with all of our recommendations and did not provide written comments. ATF and USMS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Attorney General, the ATF Acting Director, the USMS Director, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or McNeilT@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Sampling Methodology To assess the extent to which the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) and United States Marshals Service (USMS) components implemented key internal controls, we selected a stratified random sample of case files within the population of employee misconduct investigations that were opened by each component from fiscal years 2014 through 2018 and that were considered closed by USMS as of March 13, 2019, for fiscal years 2014 through 2017 and April 26, 2019, for fiscal year 2018, with corresponding data on the outcomes of the investigations (resulting employee actions) as of March 27, 2019, for fiscal years 2014 through 2017 and May 3, 2019, for fiscal year 2018. ATF data are as of April 9, 2019, for internal investigations and as of August 2, 2019, for management referrals. We also stratified our samples based on whether the case files included adverse actions (a suspension of at least 15 days, demotion, or removal) and whether an employee under a misconduct investigation had also filed an Equal Employment Opportunity (EEO) claim of management retaliation, to ensure that both subgroups were represented in our sample. We used fiscal year 2014 through 2018 data from the components' information systems to randomly select a generalizable sample of 65 employee misconduct cases for ATF out of a population of 150 and 100 cases for USMS out of a population of 1,281. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn.
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. The sample was designed to produce 95 percent confidence intervals for percentage estimates that are within no more than plus or minus 10 percentage points within each component. The precision is not high enough to generalize to the strata level, and results should only be generalized to the component level (i.e., ATF and USMS). As part of these samples, we included investigations that resulted in proposed adverse actions and investigations that involved employees who had also submitted an EEO claim of management retaliation. Specifically: For ATF, our sample included 12 cases with proposed adverse actions and nine cases that involved individuals who had also submitted an EEO claim of management retaliation. For USMS, our sample included 12 cases with proposed adverse actions and 12 cases that involved individuals who had also submitted an EEO claim of management retaliation. Because some items we assessed applied only to a subset of cases, resulting in a smaller sample size, we report some findings as the range from the lower to upper bound of the 95 percent confidence interval (an illustrative computation of such an interval appears after appendix II). In cases with particularly small sample sizes, we describe results for the sample only, rather than attempting to generalize to the population of cases within the component. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments Triana McNeil at (202) 512-8777 or McNeilT@gao.gov In addition to the contact named above, Eric Erdman (Assistant Director), Willie (Billy) Commons III, Dominick Dale, Anthony DeFrank, Justin Fisher, Eric Hauswirth, Ying Long, Amanda Miller, and Mike Tropauer made key contributions to this report.
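To make the interval estimation described in appendix I concrete, the following minimal sketch (in Python) computes a stratified estimate of a documentation rate and an approximate 95 percent confidence interval. The stratum sizes and counts are invented for illustration (only the overall population of 1,281 mirrors the USMS figure above) and do not reflect GAO's actual strata or results.

```python
import math

# Illustrative strata; the population sizes, sample sizes, and counts below
# are invented for this example and do not reflect GAO's actual sample design.
# Each stratum: population size N, sample size n, and the number of sampled
# cases in which the control (e.g., supervisory review) was documented.
strata = [
    {"N": 900, "n": 40, "documented": 36},
    {"N": 300, "n": 30, "documented": 24},
    {"N": 81, "n": 30, "documented": 29},
]

total_N = sum(s["N"] for s in strata)

# Stratified point estimate: weight each stratum's sample proportion
# by its share of the population.
p_hat = sum((s["N"] / total_N) * (s["documented"] / s["n"]) for s in strata)

# Variance of the stratified estimator, with a finite population correction
# for each stratum because sampling is without replacement.
variance = 0.0
for s in strata:
    p = s["documented"] / s["n"]
    fpc = (s["N"] - s["n"]) / s["N"]
    variance += ((s["N"] / total_N) ** 2) * fpc * p * (1 - p) / (s["n"] - 1)

margin = 1.96 * math.sqrt(variance)  # normal approximation, 95 percent level
print(f"Estimated rate: {100 * p_hat:.0f} percent; "
      f"95 percent confidence interval: {100 * (p_hat - margin):.0f} to "
      f"{100 * (p_hat + margin):.0f} percent")
```

Because each stratum's contribution to the variance shrinks as its sample size grows, a sample can be allocated across strata in advance so that the resulting margin of error stays within the plus or minus 10 percentage points described above.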
Why GAO Did This Study Within the Department of Justice, ATF and USMS employ more than 10,000 staff responsible for protecting communities from violent criminals, investigating the illegal use of firearms, and apprehending wanted persons, among other things. GAO's recent studies of employee misconduct processes have highlighted the importance of internal controls to help ensure the quality and independence of these processes. GAO has also reported on employee misconduct investigations being used to retaliate against individuals who report wrongdoing. GAO was asked to review ATF and USMS employee misconduct investigation and disciplinary processes. This report (1) summarizes data on the number, characteristics, and outcomes of ATF and USMS misconduct investigations that were opened from fiscal years 2014 through 2018 and were closed by the time of GAO's review, and (2) examines the extent to which ATF and USMS have developed, implemented, and monitored internal controls for their employee misconduct processes. For each component, GAO reviewed policies, guidance, and performance reports; analyzed case management system data; analyzed random samples of misconduct cases; and interviewed officials involved in investigation and discipline processes. What GAO Found From fiscal years 2014 through 2018, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) and U.S. Marshals Service (USMS) collectively investigated about 3,900 allegations of employee misconduct. About one-half of these investigations were closed with no disciplinary action because the components found that the allegations were unsubstantiated. For allegations that were substantiated by an investigation, the most common ATF offenses were poor judgment and failure to adequately secure property, while the most common USMS offenses were general violations of policy or procedure and failure to follow instruction. The most common outcomes for both ATF and USMS substantiated investigations were discipline, including suspensions of up to 14 days, and lesser penalties, such as verbal or written warnings. During this period, ATF and USMS investigated over 300 allegations of management retaliation, with few resulting in discipline. ATF and USMS have developed some internal controls for managing their employee misconduct investigation and disciplinary processes, but have not consistently documented or monitored key control activities. For example: USMS policy requires supervisory review of district and division investigations, but the agency has not consistently documented this control in accordance with policy. ATF and USMS also lack policy for verifying the accuracy and completeness of information in employee misconduct systems. Ensuring supervisory review is documented as required and developing policy for verifying information in misconduct systems would provide greater assurance that controls are operating as intended. ATF and USMS have established policies and goals related to timeliness in completing various types of employee misconduct investigations (e.g., within 120 days). However, ATF has not established performance measures to monitor progress toward meeting the goals. USMS has measures to monitor timeliness for some types of investigations, but not for others. Establishing measures to monitor timeliness of investigations would provide more complete information to ATF and USMS managers responsible for oversight.
ATF and USMS have established oversight mechanisms, such as internal management reviews, to monitor certain aspects of the components' operations, such as financial operations. However, ATF and USMS have not fully used these mechanisms to monitor internal controls related to employee misconduct processes; doing so would help ATF and USMS management ensure that controls are implemented as required by policy. What GAO Recommends GAO is making seven recommendations, including that USMS ensure supervisory review is documented; and that ATF and USMS develop policy for verifying system information, establish measures to monitor the timeliness of investigations, and improve monitoring of employee misconduct processes. DOJ concurred with GAO's recommendations.
Background Federal Banking Regulators The purpose of federal banking supervision is to help ensure that depository institutions throughout the financial system operate in a safe and sound manner and comply with federal laws and regulations for the provision of banking services. In addition, federal banking supervision looks beyond the safety and soundness of individual institutions to promote the stability of the financial system as a whole. Each depository institution in the United States is primarily supervised by one of the following three federal banking regulators: The Federal Reserve supervises state-chartered banks that are members of the Federal Reserve System, bank and savings and loan holding companies, Edge Act and agreement corporations, and the U.S. operations of foreign banks. FDIC supervises insured state-chartered banks that are not members of the Federal Reserve System, state-chartered savings associations, and insured state-chartered branches of foreign banks. OCC supervises federally-chartered national banks and savings associations and federally-chartered branches and agencies of foreign banks. These federal banking regulators have broad authority to examine depository institutions subject to their jurisdiction. Federal Supervision and Examinations of Large Depository Institutions Federal banking regulators carry out a number of supervisory activities in overseeing management of large depository institutions (see table 1 for a summary of supervision programs for large depository institutions). The supervisory activities are conducted both off- and on-site. Generally, federal banking regulators use off-site systems to monitor the financial condition of an individual bank; groups of banks with common product, portfolio, or risk characteristics; and the banking system as a whole between on-site examinations. Federal banking regulators generally conduct on-site supervision by stationing examiners at specific institutions. This practice allows examiners to continuously analyze information provided by the financial institution, such as board meeting minutes, institution risk reports, or management information system reports. This type of supervision is intended to allow for timely adjustments to the supervisory strategy of the examiners as conditions change within the institutions. FDIC, the Federal Reserve, and OCC are required to conduct a full-scope, on-site examination of each insured depository institution they supervise at least once during each 12-month period. The regulators may extend the examination interval to 18 months, generally for institutions that have less than $3 billion in total assets and that meet certain conditions, based on ratings, capitalization, and status of formal enforcement actions, among others. For large institutions, federal banking regulators do not conduct an annual point-in-time examination of the institution. Rather, they conduct ongoing examination activities that are generally intended to evaluate an institution's operating condition, management practices and policies, and compliance with applicable laws and regulations. In particular, examiners review an institution's condition using the Uniform Financial Institutions Rating System, also known as CAMELS (capital adequacy, asset quality, management, earnings, liquidity, and sensitivity to market risk). Evaluations of CAMELS components consider an institution's size and sophistication, the nature and complexity of its activities, and its risk profile.
Throughout the examination cycle, each target examination will result in a letter that is transmitted to the institution (where applicable). At the end of the supervisory cycle, a report of examination is issued to the institution. The target examination letter and report of examination may include supervisory concerns that examiners found and that an institution is expected to address within specific time frames. The regulators also issue supervisory guidance, which they describe as including interagency statements, advisories, bulletins, policy statements, questions and answers, and frequently asked questions issued to their respective supervised institutions. Supervisory guidance outlines the regulators' supervisory expectations or priorities and articulates general views regarding appropriate practices for a given subject area. The guidance often provides examples of practices that the regulators generally consider consistent with safety and soundness standards or other applicable laws and regulations. According to the regulators, supervisory guidance is not legally binding. For instance, FDIC financial institution letters generally announce matters of interest to those responsible for operating an institution. Federal Reserve supervision and regulation letters address significant policy and procedural matters. OCC bulletins generally accomplish the same goals as FDIC and Federal Reserve letters. The letters and bulletins are published on each regulator's website. Often, the contents of these documents are incorporated into broader examination manuals. Moreover, the federal banking regulators have developed internal control functions within the supervision programs for large depository institutions, which consist of several layers of review following examinations. Each regulator has a review process at the conclusion of examinations, and examiners prepare written products documenting their findings and meet with regional and headquarters officials to finalize decisions. Also, each regulator maintains an internal review function to ensure that examiners properly applied examination guidance. Forward-Looking Supervisory Approach We and others previously found that regulators identified underlying risks at depository institutions that failed during the 2007–2009 financial crisis well before their failure, but did not always take timely supervisory action. As stated by the regulators, the strength or weakness of bank management can reflect an institution's underlying risk. For example, according to FDIC, the quality of management, including the board of directors and executives, is probably the single most important element in the successful operation of an institution. The Federal Reserve noted that the culture, expectations, and incentives established by the highest levels of corporate leadership set the tone for the entire organization and are essential determinants of whether an organization is capable of maintaining fully effective risk-management and internal control processes. Also, according to OCC, an effective corporate and risk governance framework is essential to ensuring the safe and sound operation of the institution and helping to promote public confidence in the financial system. In our past work, regulators told us they recognized bank supervision needed to be more forward-looking and had incorporated more forward-looking elements into examinations. Forward-looking supervision seeks to mitigate emerging risks before they affect the financial condition of an institution.
Regulators can respond to emerging risks in the banking sector with a variety of supervisory tools. These include micro-prudential tools, which traditionally have focused on the safety and soundness of individual financial institutions, and macro-prudential tools, which can be used to address vulnerabilities across the banking system and broader financial system. Supervisory concerns are an important micro-prudential tool to support forward-looking supervision by ensuring that a depository institution takes early action to correct deficiencies. Also, trends in examination data and enforcement activity can provide information on regulators' identification of and response to concerns about institutions' safety and soundness and emerging risks. Regulators' Approaches to Oversight of Management at Large Depository Institutions Generally Were Consistent with Leading Risk-Management Practices Since 2009, federal banking regulators have revised policies and procedures to address management weaknesses at large depository institutions, including by differentiating levels of severity for supervisory concerns and specifying when to communicate them to management at the institutions. Based on our review of selected examination documents, the regulators' policies and procedures often took different approaches for overseeing management of large depository institutions but each generally addressed leading risk-management practices. Regulators Made Progress in Addressing Oversight of Management Weaknesses and Timely Action on Supervisory Concerns Since 2009, federal banking regulators have revised policies and procedures to better address management weaknesses at large depository institutions identified in the aftermath of the financial crisis. Regulatory staff with whom we spoke noted that most important risk-management concepts had been included in their policies for some time. The post-crisis updates were intended to provide better definitions of certain risk categories and enable examiners to consider individual risks within the context of all risks facing the institution. For instance, in June 2009, FDIC re-emphasized the forward-looking approach, which, according to FDIC, encourages examiners to consider the likelihood that identified weaknesses will cause material problems in the future and the severity of damage to an institution if conditions deteriorate. FDIC further noted that this assessment reflects both the board of directors' and management's ability to identify, measure, monitor, and control the risks of the institution's activities, ensure its safe and sound operations, and ensure compliance with applicable laws and regulations. FDIC policy provides that an assessment of management is not solely dependent on the current financial condition of the institution. Also, in 2015 FDIC updated policies and procedures for identifying and assessing the influence of dominant bank officials or policymakers on an institution, and stated the policy was intended to limit the influence of dominant officials when internal controls are inadequate and ensure independence of the risk-management function. In 2012, the Federal Reserve updated procedures for supervision of large financial institutions, which were intended to strengthen traditional firm-level supervision while also incorporating systemic considerations to reduce potential threats to the stability of the financial system and provide insights into financial market trends.
In 2013, the Federal Reserve updated expectations for the assessment of an institution's internal audit function and provided guidance about the degree to which examiners may rely on the work of an institution's internal audit function. In 2015, OCC updated its Risk Assessment System to help examiners draw conclusions about the quantity of risk, quality of risk management, aggregate risk, and direction of risk for institutions under eight different risk categories. Also, in 2016, OCC published the Corporate and Risk Governance booklet of the Comptroller's Handbook to incorporate heightened standards requirements for depository institutions with average total consolidated assets of $50 billion or more. The booklet provides guidance to examiners on board and management responsibilities, risk management assessment factors, and measurement and assessment of risk consistent with the heightened standards. Regulators also took steps to enhance their ability to resolve supervisory concerns in a timely manner through improvements to policies and procedures on identifying and communicating concerns. The regulators employ progressive enforcement regimes to address supervisory concerns that arise during the examination cycle (see table 2). If the institution does not respond to the concern in a timely manner, the regulators may take informal or formal enforcement action, depending on the severity of the circumstances. Informal enforcement actions include obtaining an institution's commitment to implement corrective measures under a memorandum of understanding. Formal enforcement actions include issuance of a cease-and-desist order or assessment of a monetary penalty, among others. The regulators have continued to update these regimes to clarify the distinctions between levels of concern and to improve communication of concerns to the boards of directors of depository institutions. For instance, in 2016, the board of directors of FDIC issued a statement setting forth basic principles to guide the identification and communication of supervisory recommendations. The board stated that a supervisory recommendation refers to FDIC communications with a depository institution that are intended to inform it of FDIC's views about changes needed to its practices, operations, or financial condition. FDIC's updated policies and procedures state that supervisory recommendations must be presented in writing and that most are generally correctable in the normal course of business. When developing and communicating these recommendations, FDIC examiners are required to (1) address meaningful concerns, (2) communicate concerns clearly and in writing, and (3) discuss corrective action. Supervisory recommendations that involve an issue or risk of significant importance, and that typically would require more effort to address than those correctable in the normal course of business, would need to be brought to the attention of the board and senior management through matters requiring board attention (MRBA) comments. The Federal Reserve updated its policies and procedures on identification and communication of supervisory concerns in 2013.
The supervision and regulation letter defined matters requiring immediate attention (MRIA) to include (1) matters that have the potential to pose significant risk to the safety and soundness of the banking organization; (2) matters that represent significant noncompliance with applicable laws or regulations; (3) repeat criticisms that have escalated in importance due to insufficient attention or inaction by the banking organization; and (4) in the case of consumer compliance examinations, matters that have the potential to cause significant consumer harm. The letter defined matters requiring attention (MRA) as deficiencies that are important and should be addressed over a reasonable period of time, but where the institution's response need not be immediate. Therefore, the distinction between MRIAs and MRAs lies in the nature and severity of the matter and the timing by which the institution must respond. No matter how serious the concern, it is addressed to the institution's board of directors. According to the Federal Reserve's policies and procedures, the communication of supervisory findings must be (1) written in clear and concise language, (2) prioritized based upon degree of importance, and (3) focused on any significant matters that require attention. The Federal Reserve proposed new supervisory concern policies and procedures in 2017, which provided that examiners and supervisory staff should direct most MRIAs and MRAs to senior management of institutions for corrective action. MRIAs or MRAs would be directed to the board for corrective action only when the board needed to address its corporate governance responsibilities or when senior management failed to take appropriate remedial action. The proposed policies would not change the definitions of MRAs and MRIAs or the content of communications to institutions. As of April 2019, the proposed policies and procedures had not been finalized. OCC updated its policies and procedures for examiners to identify and communicate MRAs in 2014 and further enhanced them in 2017. OCC's policy states that MRAs describe practices that an institution must implement or correct, ideally before those deficient practices affect the bank's condition. Specifically, MRAs describe practices that (1) deviate from sound governance, internal control, or risk-management principles, and have the potential to adversely affect the bank's condition, including its financial performance or risk profile, if not addressed; or (2) result in substantive noncompliance with laws or regulations, enforcement actions, or conditions imposed in writing in connection with the approval of any application or other request by the bank. OCC refers to such practices as deficient practices. Such practices also may be unsafe or unsound—generally, any action, or lack of action, that is contrary to generally accepted standards of prudent operation, the possible consequences of which, if continued, would be abnormal risk or loss or damage to an institution, its shareholders, or the Deposit Insurance Fund. OCC supervisory concerns are to be communicated in writing to the institution's management and board of directors to ensure timely and effective correction. Written communications must incorporate the "five c's" format: Describe the concern. Identify the root cause(s) of the deficient practice and contributing factors. Describe potential consequence(s) or effects on the bank from inaction. Describe supervisory expectations for corrective action(s).
Document management's commitment(s) to corrective action and include the time frame(s) and the person(s) responsible for corrective action. If the root cause of the deficient practice is not apparent, OCC's procedures instruct examiners to direct management to perform a root-cause analysis as part of the corrective action. Based on Our Review, Regulators' Policies and Procedures for Management Oversight Generally Were Consistent with Leading Risk-Management Practices The regulators' revised policies and procedures that relate to oversight of risk management at large depository institutions and to supervisory concerns generally were consistent with leading risk-management practices. We reviewed leading standards and practices (such as federal internal control standards) and then developed criteria with which to assess the regulators' policies and procedures. Criteria we used included that guidance be clear and actionable and that examiners review risk-management and control functions, identify existing and emerging risks, and review compliance with laws and regulations. (See table 3 for the specific criteria we applied, appendix I for more information on our methodology, and appendix II for the list of policy and procedure documents we reviewed). While individual policies or procedures may not have satisfied all of our criteria, when viewed collectively the policies and procedures generally addressed leading risk-management practices. For example, the policies and procedures almost always provided examiners with clear and actionable objectives for risk-management governance; enabled examiners to identify whether an institution had established a clear governance framework; assisted examiners in identifying, reporting, and recommending changes to address existing and emerging risks; and required review of institutions' compliance with applicable laws and regulations. More specifically, we found FDIC risk-management policies and procedures for examining large insured depository institutions generally provide clear, actionable risk-management objectives with a few exceptions that did not materially affect our overall assessment. For instance, we found that a policy document contained clear parameters for examiners to assess identified risks, which was consistent with our criteria, but the parameters did not include instructions for when examiners should consider changing a bank's rating based on identified risk levels. However, related guidance for examiners in considering the impact of risk on the institution can be found in the definitions and descriptions of CAMELS ratings. We also found that FDIC developed adequate policies and procedures to evaluate corporate governance. In particular, consistent with leading practices, the guidance requires separation of board and management and identification of and response to dominant officials, and it encourages detailed review of the control environment. FDIC also has processes for risk assessment and for tracking and monitoring risk to address existing and emerging risks. For example, examiners are required to review updates to the institution's risk-management processes for new lines of business. Similarly, we found that Federal Reserve policies and procedures for large depository institutions generally identify clear, actionable risk-management objectives and explain activities that might be riskier at some institutions compared to others, but a few policies and procedures were not fully consistent with our criteria.
For instance, while corporate governance policies and procedures provide detailed materials for examiners to use during examination, and there is extensive guidance on risk identification, assessment, and communication, we noted relatively limited written procedures regarding escalation of concerns to enforcement actions. We discuss this issue in more detail later in this report. We also found that the Federal Reserve included forward-looking risk assessment procedures within risk-identification processes, including preliminary risk assessment to address existing and emerging risks. Finally, we found that OCC policies and procedures for large depository institutions generally provide clear requirements for examiner evaluation of the supervised institution’s quantity of risk, quality of risk management, and direction of risk. But the methods of measurement and specific tolerances for risk in these policies and procedures are not as clear as suggested by the leading practices. However, guidance to evaluate the potential impact of risk is separately available to examiners in OCC’s MRA and enforcement action policies and procedures. We found that consistent with our criteria, policies and procedures are detailed to provide examiners a clear framework to review banks’ corporate governance and risk-management systems. In particular, appropriate attention is paid to board oversight and effective management practice, including clear outlines for board and management responsibilities and independence. To address existing and emerging risks, OCC requires examiners to assess a specific set of risks within its risk-based supervision approach using the Risk Assessment System. OCC uses the Risk Assessment System in conjunction with CAMELS and other regulatory ratings during the supervisory process to evaluate an institution’s financial condition and resilience. Examiners Applied Their Policies but Communication of Supervisory Concerns Could Be More Complete Our review of examination documents of nine depository institutions found that examiners from the three banking regulators generally applied their policies and procedures and identified and communicated management weaknesses to those institutions. Practices for communicating concerns varied among regulators and some practices led to communications that often lacked complete information that would help institutions’ boards of directors ensure that senior management respond to emerging risks in a timely manner. Lastly, examiners generally followed up on prior supervisory concerns consistent with their policies and procedures. Examiners Generally Applied Their Policies and Procedures for Supervision of Management at Large Depository Institutions in the Examinations We Reviewed For the examinations we reviewed, we found that examiners generally applied policies and procedures to assess management oversight of risk at large depository institutions, including those relating to corporate governance, internal controls, and internal audit. We compared selected elements of examiner policies and procedures (focusing on the management component of CAMELS) with selected 2014–2016 examination documents to determine how examiners applied policies and procedures. (See appendix III for the questions we used to make these determinations). 
Our non-generalizable review of examination documents of nine institutions found that examiners reviewed areas relating to corporate governance, internal controls, and internal audit, which are key components of risk-management frameworks for institutional management and governance. For instance, to assess the adequacy of an institution's overall corporate governance, FDIC, Federal Reserve, and OCC examiners of the selected institutions generally conducted reviews of areas such as board and management oversight and internal audit. For example: In examination documents for one of the institutions, we found that FDIC examiners reviewed materials regarding independence and qualifications of directors and policies and procedures related to risk assessments. We noted for another institution that Federal Reserve examiners reviewed materials regarding directors' fulfillment of duties and responsibilities and policies and procedures relating to corporate compliance. Also, we observed that for one institution, in describing the leadership of the board and management, OCC examiners described aspects of the control environment, risk assessment, control activities, accounting, information, and communication, as well as self-assessment and monitoring. At eight of the nine institutions we reviewed, we also found that regulators took steps that were designed to communicate deficiencies they identified before the weaknesses affected an institution's financial condition. More specifically, examiners identified concerns related to board oversight; risk monitoring; policies, procedures, and limits; and internal controls. Also, for at least four of the nine institutions we reviewed, examiners reported they downgraded the management component rating based on weaknesses identified in risk management, independent of the institutions' financial condition. For example, at one institution, we observed examiners reporting that weaknesses in an institution's risk management contributed to a less-than-satisfactory or "3" rating for the management component. Additionally, examiners downgraded the management component rating for two institutions with satisfactorily rated financial positions because of significant weaknesses in the risk-management program. In another instance, we observed examiners reporting that management's need to complete remediation of previously identified weaknesses contributed to a "fair" or "3" rating for the management component of CAMELS. As previously discussed, in the past regulators did not always take timely supervisory action on the management weaknesses they identified. In all the reports of examinations we reviewed, examiners generally explained the basis for the rating they assigned to the management component of CAMELS, such as management's responsiveness to addressing weaknesses and compliance with laws and regulations. Communication of Supervisory Concerns Varied among Regulators and Some Communications Did Not Provide Information on Cause or Potential Effect Practices for communicating supervisory concerns to institutions varied among regulators, and some communications did not provide complete information that could help boards of directors monitor whether deficiencies are fully addressed by management. As discussed previously, the regulators require staff to communicate supervisory concerns to institutions through formal written communications.
The written communications are generally directed to senior management and boards of directors, which have oversight responsibilities over senior management. According to the Federal Reserve, boards are inherently disadvantaged given their dependence on senior management for the quality and availability of information. One industry representative told us that supervisory concerns were not always clearly communicated, noting that communications of supervisory concerns sometimes can be difficult to interpret and correct. An official from one of the regulators stated that former examiners working as industry consultants sometimes may be hired to help interpret supervisory letters and assist depository institutions in responding to supervisory concerns.

Federal internal control standards state that management should communicate quality information externally to help the entity achieve its objectives and address related risks. Quality information is defined as appropriate, current, complete, accurate, accessible, and provided on a timely basis. Other authoritative internal control sources, including Circular A-123 and the framework of the Committee of Sponsoring Organizations of the Treadway Commission (COSO), require cause analysis—that is, an identification of the cause of the deficiencies that have been found. Generally accepted government auditing standards require that auditors plan and perform procedures to develop all four elements of a finding (criteria, condition, cause, and effect) necessary to address audit objectives. Although these authoritative sources do not apply to federal banking regulators, the standards identify principles consistent with the goal of FDIC, Federal Reserve, and OCC guidance in ensuring clear and complete communication of supervisory recommendations.

OCC. For two of the three OCC-supervised institutions whose examination documents we reviewed, OCC examiners generally communicated to boards of directors the information they would need to monitor whether deficiencies were fully addressed by management. OCC's policies and procedures on MRAs require examiners to identify and communicate in writing to depository institutions the concern, cause, consequences of inaction, required corrective action, and management's commitment to corrective action. If the cause of the deficient condition is not apparent, examiners must direct the institution's management to perform a root-cause analysis as part of the corrective action. According to OCC staff, they implemented the MRA requirements agency-wide in 2014 after having a positive experience applying them at the community bank level. OCC staff told us that examiners and institutions need to understand the cause of a deficiency for examiners to make appropriate recommendations and for institutions to address the concern and help ensure the deficiency does not recur.

Failure of examiners to identify and communicate the root causes of inappropriate practices was among the key findings of an internal OCC review of supervision of sales practices at Wells Fargo. In September 2016, OCC took enforcement action against Wells Fargo for improper sales practices. In April 2017, OCC's Office of Enterprise Governance and the Ombudsman published an independent review of OCC's supervisory record for Wells Fargo, which identified gaps in OCC's supervision and lessons learned.
Review findings included that the OCC team responsible for supervising Wells Fargo did not ensure that examiners evaluated the root causes of the improper sales practices. In addition, the review found that the first MRA that identified the sales practices issue in 2010 did not list the issue as an unsafe or unsound practice and did not identify a root cause or responsible parties. Among the lessons learned was ensuring analysis of root causes and compliance with OCC MRA guidance. In our review, we also observed how OCC's written communications of concerns changed as its requirements were implemented. For example, in documents from 2014 for two institutions, OCC examiners generally communicated only the concern or the required corrective action and management's commitment to corrective actions. By 2016, examiners documented each of the required elements for MRAs in their written communication (for two institutions).

FDIC. For the three FDIC-supervised institutions whose examination documents we reviewed, FDIC examiners did not communicate to boards of directors the information they would need to monitor whether deficiencies were fully addressed by management. For these three institutions, FDIC examiners stated the concern (deficiency) and required corrective action both in their internal communications of supervisory recommendations and in their external communications with depository institutions. They sometimes stated the potential effect of the deficient condition on the safety and soundness of the institution. These practices were consistent with FDIC policies and procedures in place at the time. For example, in the written communication to one FDIC institution selected for our review, examiners conveyed specific information about the supervisory concerns, the effect of the deficiencies on the institution, and the required corrective action for the MRBAs related to an examination. In another instance, the communication of the supervisory concerns appeared less specific. In that case, examiners reported that the institution management's actions did not fully address a deficient condition identified in the prior examination. We found that the prior written communication of concerns to the institution did not identify the cause of the deficient condition or propose specific action to be taken.

FDIC staff told us they believed that 2016 updates to their policies and procedures already require examiners to identify the cause of the deficient condition and communicate it to the depository institutions. Specifically, FDIC requires examiners to "describe the deficient practices, operations, or financial condition and how it deviates from sound governance, internal controls, or risk management or consumer protection principles, or legal requirements." This requirement is similar to OCC's requirement to "describe the concern," under which OCC examiners are required to "describe the deficient practice and how it deviates from sound governance, internal control or risk management principles." However, FDIC's policies and procedures do not require examiners to identify the factor(s) responsible for the deficient condition (the "why") or communicate it to the institutions. Based on the examination documents we reviewed, we did not observe that FDIC examiners communicated the cause of the deficiency. Including the cause facilitates a better understanding of why an institution's condition is not consistent with law or regulations and, ultimately, can help an institution determine how it could remedy the condition.

Federal Reserve.
In our review of examination documents for three institutions, Federal Reserve examiners did not include information that boards of directors would need to monitor whether deficiencies were fully addressed by management. Reserve Bank examiners stated the condition and required corrective action in their internal and external communications of supervisory recommendations to depository institutions, consistent with Federal Reserve policies and procedures. Furthermore, the condition and required corrective action were generally closely linked to the criteria examiners applied during the examination, which often consisted of Federal Reserve supervisory guidance. We found that the written communications to depository institutions did not always provide information that would convey the reason the deficient condition occurred (cause) or the potential consequences of the deficient condition (effect). As a result, the information conveyed in the written communications of supervisory concerns was limited.

The Federal Reserve Board has broad criteria for Federal Reserve Bank examiners, requiring them to communicate only the condition and required corrective action. Federal Reserve Board staff told us that they do not require examiners to identify the cause of a deficient practice or condition. Instead, they leave that responsibility to institutions. Staff stated that they believe the institution is in the best position to identify the cause. They noted that this also could reduce the amount of time examiners otherwise would spend searching for the cause. However, we noted that at least one Reserve Bank has built on the Board's criteria for communicating supervisory concerns by developing policies and procedures that require examiners to identify condition, criteria, cause, and effect to support supervisory findings in review sessions with Reserve Bank management.

As discussed previously, authoritative internal control sources require cause analysis. As an example applicable to banking regulators, OCC requires its staff to identify and communicate the cause of the deficiency that led to the supervisory concern or, if the root cause is not apparent, to instruct institution management to identify the root cause as part of its corrective action. OCC staff noted that identifying the root cause in examinations does not require additional resources and that, per OCC's MRA policy, if the root cause is not apparent, examiners instruct the institution to identify it as part of the corrective action. Furthermore, a September 2018 interagency statement clarifying the role of supervisory guidance instructed examiners not to criticize institutions for a "violation" of supervisory guidance. Identification and communication of the potential effect of a deficiency could enable the Federal Reserve to move away from its practice of closely linking supervisory concerns to failure to comply with guidance and better explain why an institution's condition is not consistent with law or regulations.

FDIC and the Federal Reserve are missing an opportunity to communicate complete information, in writing, to the boards of institutions regarding the cause of the identified deficiency that led to the supervisory concern, which would facilitate a better understanding of why the institution's condition deviates from safety and soundness standards.
Additionally, without communicating the potential effect of a deficiency, the Federal Reserve is missing an opportunity to convey to boards of directors how the concern could undermine the institution's safety and soundness.

Examiners Generally Conducted Follow-Up of Prior Supervisory Concerns

In the examination documents of nine institutions we reviewed, federal banking regulators generally followed up on supervisory concerns to determine an institution's progress in correcting previously identified weaknesses. The regulators require that examiners follow up on corrective actions taken by depository institutions in response to supervisory concerns. Examiners used various methods to follow up on supervisory concerns, such as conducting limited-scope targeted reviews of one or more issues or incorporating follow-up into their regularly scheduled examination of a functional area. In addition, we observed that at four institutions examiners performed follow-up as part of their ongoing supervisory activities. While there are time frame targets for completion of corrective action, concerns can remain open until examiners are satisfied with the effectiveness of the remedial actions taken to address the supervisory concern. For instance, at three institutions we found that examiners closed concerns in targeted follow-up examinations once they validated the completion of remedial action by reviewing documents and activities that verified the implemented action was effective. We also observed instances, for at least three institutions, in which examiners refrained from closing supervisory concerns because they determined that the institutions' management had not yet adequately addressed the concerns and further attention was warranted to ensure the corrective action was sustainable.

In performing regularly scheduled targeted examinations of specific functions or risk areas examined during a previous examination cycle, examiners assessed management's progress in addressing prior supervisory concerns at eight of the nine institutions we selected for examination documentation review. They examined documents and reviewed processes and other related actions taken by management to address weaknesses in the institution's management of risk. Lastly, at four institutions, examiners reviewed management's progress and reported updated information on the institutions' actions to address supervisory concerns that had been escalated to enforcement actions. For example, at one institution, OCC examiners documented substantive discussion of the work they performed in following up on a consent order, which included reviewing revised documents and reports as well as validation efforts by a third-party consultant.

Review of Supervisory Concern Data Revealed Data Limitations and Incomplete Procedures for Escalation of Concerns

Federal banking regulators collect and analyze supervisory concern data but do so to different degrees, and FDIC collects supervisory concern data in a manner that challenges management's ability to fully monitor its supervision activities. We reviewed supervisory concern data for all institutions supervised by FDIC, OCC, and the Federal Reserve. The data we reviewed indicate that management weaknesses have been a consistent concern since 2012. In general, the amount of time supervisory concerns remained open has decreased.
The Federal Reserve and OCC track escalation of supervisory concerns to enforcement actions, but the Federal Reserve lacks specific, measurable guidelines for examiners to consider when supervisory concerns are not addressed in a timely manner.

Regulators Use Supervisory Concern Data to Different Degrees but FDIC Data Are Limited

Federal banking regulators analyze supervisory concern data to inform examination strategy and forward-looking supervision to varying degrees. FDIC staff use the data to track the duration of open MRBAs. FDIC's Risk Management Supervision Division has staff responsible for categorizing and analyzing MRBA summary comments quarterly and providing an analysis memorandum to the division's management to assist with forward-looking risk identification. FDIC staff stated that these analyses supplement other data used to conduct supervisory follow-up.

Federal Reserve Board staff told us that they use the data to track MRA and MRIA information over time within portfolios of depository institutions of different sizes. Staff noted that the data are used to inform supervisory strategy development for upcoming examination cycles. According to staff with whom we spoke, the data are useful for conducting horizontal reviews across a single portfolio and determining issues that arise across institutions in that portfolio. Staff said that the data can be used to identify common issues as they relate to Board guidance. Staff said that the data also are used to determine whether MRAs and MRIAs are closed in a timely manner, both across portfolios and at a granular level (tracking the progress of individual firms). The data are aggregated across all supervision portfolios.

OCC staff told us that they use MRA data to track the number of MRA concerns issued, the amount of time concerns remain open, the types of supervisory concerns for which an MRA was issued, and other information useful to OCC supervisory offices and the National Risk Committee. OCC conducts analysis of supervisory concern data in aggregate. Quarterly reports summarize aggregate trends (including the number of concerns, whether concerns are increasing or decreasing, and the number of banks with these concerns). For example, OCC analyzes the data by lines of business, examination areas, categories, and primary risk, which helps track existing and growing risks and whether MRA concerns have been escalated to enforcement actions. OCC staff said that data regarding the aging of MRAs, which can raise the visibility of longstanding concerns, are of particular interest to the National Risk Committee, which we observed in internal reports summarizing supervisory concern data.

The regulators have internal tracking systems and policies and procedures to record and track examination data, but FDIC does not collect certain data in a manner that provides management with comprehensive information to fully monitor the effectiveness of supervision activities. The Federal Reserve System has two systems for recording and tracking supervised institution data: the "C-SCAPE" platform for institutions with assets greater than $50 billion and all foreign banks, and the "INSite" platform for smaller community banks. Each Reserve Bank has issued guidance on recording MRAs and MRIAs specific to the examiners at those Reserve Banks. The MRA and MRIA data are recorded under a broad area of supervisory focus (for C-SCAPE) or an MRA and MRIA category (for INSite), with subcategories for the name and description of the issue for greater detail.
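The recording structures described in this section share a common pattern: each concern is filed under a broad category with progressively more specific sub-levels, along with dates that support the aging and time-to-closure analyses discussed below. The following Python sketch is purely illustrative of that pattern; all field names, category labels, and sample values are hypothetical and do not reflect any regulator's actual system.

```python
# Purely illustrative: a hierarchical supervisory-concern record and a simple
# duration analysis. All fields and sample values are hypothetical.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional

@dataclass
class Concern:
    institution: str
    exam_area: str          # broadest level, e.g., enterprise governance
    category: str           # more specific level
    concern_type: str       # still more specific
    topic: str              # most granular level
    opened: date
    closed: Optional[date]  # None while the concern remains open

def avg_days_to_close(concerns: list[Concern]) -> dict[tuple[str, int], float]:
    """Average days from opening to closure, keyed by (exam area, year opened)."""
    durations: dict[tuple[str, int], list[int]] = defaultdict(list)
    for c in concerns:
        if c.closed is not None:  # open concerns are excluded from the average
            durations[(c.exam_area, c.opened.year)].append((c.closed - c.opened).days)
    return {key: float(mean(days)) for key, days in durations.items()}

# Hypothetical records for demonstration
sample = [
    Concern("Bank A", "Enterprise Governance", "Board Oversight", "Risk Appetite",
            "Reporting", date(2012, 3, 1), date(2013, 9, 22)),
    Concern("Bank B", "Enterprise Governance", "Internal Audit", "Coverage",
            "Staffing", date(2016, 2, 1), date(2016, 10, 3)),
]
print(avg_days_to_close(sample))
# {('Enterprise Governance', 2012): 570.0, ('Enterprise Governance', 2016): 245.0}
```

Recording concerns at this level of granularity is what makes it possible to compute counts, trends, and average days to closure at any level of the hierarchy, the kind of analysis described later in this section.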
OCC's supervisory information system is Examiner View, in which examiners record, update, and view MRAs. The baseline for the required fields is documented in OCC's policy and procedures manuals on MRAs and Examiner View, as well as in a supplemental memorandum for large bank supervision. Since March 2017, the data have been recorded in a four-level concern framework (examination area, category, concern type, and topic), as determined by a cross-agency working group under OCC's National Risk Committee.

FDIC supervisory data are collected and retained in various systems. Supervisory recommendations are maintained (by institution) in text format in a separate system that is not readily searchable. FDIC maintains information on MRBAs that are not included in an enforcement action in the Supervisory Tracking and Reporting module of the ViSION system. Supervisory recommendations and MRBAs issued to large institutions supervised by FDIC are also tracked in spreadsheets by examination teams. Supervisory recommendations contained in an enforcement action are collected and tracked in the Formal and Informal Actions Tracking system. In 2017, FDIC updated its MRBA policies and procedures to require that examiners enter summary information into ViSION about individual MRBA events, rather than an overall summary of all MRBA events during an examination. However, the summary approach means that MRBA data are not categorized at different levels (from a broad level such as examination area to more specific levels, including risk or concern type).

Federal internal control standards state that management should use quality information to achieve objectives. Quality information is defined as appropriate, current, complete, accurate, accessible, and provided on a timely basis. Federal internal control standards also stress the importance of management conducting ongoing monitoring of the internal control system, which includes regular management and supervisory activities, comparisons, reconciliations, and other routine actions. As noted above, FDIC policies and procedures do not require examiners to record MRBAs under different categories in the MRBA reporting and tracking system. Instead, FDIC Risk Management Supervision staff are responsible for analyzing the summary MRBA data entered by examiners and then categorizing the data for FDIC management reports. These categories are based on staff expertise rather than the experience of the examiners in the field who developed the MRBAs. A structure that examiners could use to record more granular details about MRBAs directly after examinations would help ensure that reports prepared for FDIC management are not missing important details about FDIC MRBAs. Currently, FDIC management lacks the complete information it needs to monitor the effectiveness of supervision activities in remediating emerging risks in a timely manner.

Data Indicate Continuing Concerns about Management Weaknesses at Depository Institutions Through 2017

Our analysis of supervisory concern data and federal banking regulators' internal reporting based on the data indicate that management weaknesses at depository institutions of all sizes continued to exist through 2017. The number of supervisory concerns issued for all concern categories decreased each year during 2012–2016. All the regulators frequently cited management as a primary risk area in the supervisory concerns issued during the period.
For instance, "management and board" and "loan and credit administration" were the two largest of 14 categories of MRBAs issued by FDIC in 2012–2016, each constituting about 22 percent of all MRBAs. Corporate governance was the largest of 26 categories of MRAs issued by the Federal Reserve in that period, constituting approximately 19 percent of all MRAs. The next largest category of MRAs issued was credit risk management, at 13 percent. Enterprise governance and operations was the third-largest of 16 examination areas of MRA concerns issued and closed by OCC in 2012–2016, constituting about 11 percent of all MRA concerns. The largest examination area of MRA concerns issued was credit, at about 37 percent, followed by bank information technology, at 13 percent.

Similarly, internal reports from the regulators for late 2016 through 2017 indicated that supervisory concerns about management's ability to control and mitigate risk at depository institutions continued. Our review of the reports showed that corporate governance issues were among the most common categories for issued supervisory concerns. In addition, the Federal Reserve reported in November 2018 that governance and controls issues constituted about 70 percent of outstanding supervisory concerns for the Large and Foreign Banking Organizations portfolio.

The Amount of Time Supervisory Concerns Remained Open Was Reduced

Our review of supervisory concern data from the Federal Reserve and OCC from 2012 through 2016 generally showed that the amount of time concerns remained open was reduced (for example, see figure 2 for data on the supervisory concerns issued most frequently by the Federal Reserve and OCC during the period). Federal banking regulators told us that they have made efforts in recent years to have institutions remediate the deficiencies that cause supervisory concerns. FDIC data regarding MRBAs were limited, and we were not able to determine how long MRBAs remained open by type of concern.

Federal Reserve data indicated that the average amount of time needed to close corporate governance MRAs decreased from 568 days in 2012 to 155 days in 2016. The time to closure for corporate governance MRAs ranged from 3 to 1,605 days for 2012–2016. Time to closure for credit risk-management concerns, the second-largest MRA category for the Federal Reserve, saw a similar decrease (from 431 days on average in 2012 to 246 days on average in 2016). For OCC, the average time to closure for enterprise governance and operations MRAs decreased from 517 days in 2012 to 245 days in 2016. The time to closure for enterprise governance and operations MRA concerns ranged from 7 to 1,724 days in 2012–2016. Time to closure for OCC's largest MRA examination area (credit concerns) decreased from 445 days on average in 2012 to 241 days on average in 2016.

Federal Reserve Lacks Specific Guidelines for Escalating Supervisory Concerns

Federal banking regulators vary in the nature and extent of data they collect on escalation of supervisory concerns to enforcement actions. As noted above, under their progressive enforcement regimes, the regulators may take informal or formal enforcement action against an institution if it does not respond to a supervisory concern in a timely manner. OCC collects data on escalation of supervisory concerns to enforcement actions. These data show that about 2,300 MRA concerns, or about 10 percent of all MRA concerns, were escalated to enforcement actions from 2012 through 2016.
Of these, 18 percent related to enterprise governance and operations concerns, the second-largest number of escalated MRA concerns behind credit concerns, at 41 percent. Federal Reserve data for escalation of MRAs to MRIAs and enforcement actions were collected in a manner that made it difficult for us to reliably determine the extent to which escalation occurred. Therefore, we did not use the Federal Reserve's escalation data. FDIC does not track escalation of supervisory concerns in a manner that allowed us to determine the extent to which escalation occurred.

FDIC and OCC have relatively detailed policies and procedures for escalation of supervisory concerns to enforcement actions, while the Federal Reserve has broad guidelines. Although the Federal Reserve tracks escalation of supervisory concerns, as noted above, Federal Reserve policies and procedures do not delineate specific factors for examiners to follow in deciding whether to identify a concern as warranting possible enforcement action. Instead, the Federal Reserve provides broad guidelines, stating, for instance, only that informal enforcement actions are tools used when circumstances warrant a less severe form of action than formal enforcement actions. Federal Reserve staff told us that in practice the facts and circumstances of the case dictate when escalation is appropriate. They said that they take into account the institution's response to prior safety and soundness actions against the institution and determine whether the institution's conduct meets enforcement action standards. However, the Federal Reserve has not defined specific and measurable guidelines for when a supervisory concern would require escalation to a more formal regulatory action (such as an enforcement action).

In contrast, FDIC and OCC have relatively detailed guidelines for escalating concerns. For example, FDIC guidelines published in 2016 instruct examiners to consider several factors, including management's attitude toward complying with laws and regulations and correcting undesirable or objectionable practices; management's history of instituting timely remedial or corrective actions; and whether management established procedures to prevent future deficiencies or violations. Similarly, OCC guidelines published in 2017 instruct examiners to consider several factors, including the board and management's ability and willingness to correct deficiencies within an appropriate time frame; the nature, extent, and severity of previously identified but uncorrected deficiencies; and the bank's progress in achieving compliance with any existing enforcement actions.

Federal internal control standards provide that management conduct risk assessments to develop appropriate risk responses. Key attributes of effective risk assessment include defined objectives and risk tolerances, with risk tolerances defined in specific and measurable terms so they are clearly stated and can be measured. In assessing risks that might necessitate an enforcement action, the Federal Reserve's guidelines do not indicate to examiners the acceptable level of variation in an institution's performance relative to the achievement of supervision objectives. Without formalized, specific, and measurable guidelines for escalation of supervisory concerns, the Federal Reserve relies on the experience and judgment of examiners, Reserve Bank management, and Federal Reserve staff to determine when escalation is appropriate.
Reliance on a single mechanism or tool can be risky. For instance, institutional knowledge can disappear in times of turnover, such as occurred after the 2007–2009 financial crisis. In addition, reliance on judgment alone can produce inconsistent escalation practices across Reserve Banks and supervision teams.

Conclusions

Federal banking regulators have strengthened their approach to oversight of management at large depository institutions since 2009. This stronger approach is important because management weaknesses can reflect an institution's underlying risk. However, we identified areas where written communication of supervisory concerns to institutions and monitoring of supervisory data at FDIC and the Federal Reserve could be strengthened. The communications of supervisory concerns from FDIC and the Federal Reserve did not fully convey why a practice at a depository institution was deficient and, for the Federal Reserve, the effect of the deficient practice on safety and soundness. Complete information about deficiencies is essential to ensuring timely corrective action by senior bank management before the deficiencies negatively affect safety and soundness at the institution. Furthermore, we identified data gaps in FDIC's recording of MRBAs that resulted in incomplete information for FDIC management on supervisory concerns. Complete supervisory concern information would allow FDIC management to fully monitor the effectiveness of supervision activities (that is, whether risks are remediated in a timely manner). Finally, the Federal Reserve lacks specific, measurable guidelines for escalating supervisory concerns. Although escalation of a supervisory concern can depend on the facts and circumstances of the case, a lack of formalized, specific, and measurable guidelines for escalation could result in inconsistent escalation practices across Reserve Banks and examination teams.

Recommendations for Executive Action

We are making a total of four recommendations: two to FDIC and two to the Federal Reserve.

The Director of the Division of Risk Management Supervision of FDIC should update policies and procedures on communications of supervisory recommendations to institutions to provide more complete information about the recommendation, such as the likely cause of the problem or deficient condition, when practicable. (Recommendation 1)

The Director of the Division of Supervision and Regulation of the Board of Governors of the Federal Reserve System should update policies and procedures on communications of supervisory concerns to institutions to provide more complete information about the concerns, such as the likely cause (when practicable) and potential effect of the problem or deficient condition. (Recommendation 2)

The Director of the Division of Risk Management Supervision of FDIC should take steps to improve the completeness of MRBA data in its tracking system, in particular, by developing a structure that allows examiners to record MRBAs at progressively more granular levels (from a broad level such as examination area to more specific levels, including risk or concern type). (Recommendation 3)

The Director of the Division of Supervision and Regulation of the Board of Governors of the Federal Reserve System should update policies and procedures to incorporate specific factors for escalating supervisory concerns. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report to FDIC, the Federal Reserve, and OCC for review and comment.
During their review of the draft report, FDIC and the Federal Reserve provided oral comments about Recommendations 1 and 2 (to update policies and procedures for communication of supervisory concerns to provide more complete information, such as the likely cause and, for the Federal Reserve, potential effect). We modified the respective recommendations to address technical issues raised by their comments.

FDIC provided written comments that are summarized below and reprinted in appendix IV. FDIC disagreed with Recommendation 1 and agreed with Recommendation 3. More specifically, FDIC stated that its current instructions to examiners meet the intent of Recommendation 1 (to update policies and procedures for communicating supervisory recommendations to provide more complete information). In particular, FDIC cited its policies and procedures on drafting supervisory recommendations in the report of examination, which include a section entitled "Explain the Basis for any Supervisory Recommendations or Concerns." FDIC stated this instruction requires examiners to communicate why there is a concern within the supervisory recommendation. Furthermore, FDIC issued an internal memorandum in October 2018 that reminds examiners to take prompt action to address root causes of deficiencies in complex and changing situations. FDIC stated that it began training in 2018 on developing strong enforcement action provisions to address root causes of deficiencies at problem banks, training that continues in 2019.

We describe FDIC's policies and procedures in our report and agree that examiners are instructed to communicate why they are concerned about a deficient condition. However, examiners are not instructed to communicate what they believe to be the root cause of the deficient condition. We are encouraged that FDIC agrees it is important to identify root causes when addressing deficiencies in problem bank corrective actions. Nevertheless, the emphasis on identifying root causes is not found in examination policies and procedures. If, as FDIC indicated, examiners already identify the root causes of deficiencies during bank examinations, then FDIC can address our recommendation by formalizing that process in its policies and procedures.

For Recommendation 3 (to improve MRBA data in its supervisory recommendations tracking system by developing a structure that allows recording of MRBAs at more granular levels), FDIC agreed that its tracking structure should be enhanced to allow staff to further categorize MRBAs at the point of entry into the system. FDIC further agreed that input of more granular information about MRBAs directly after examinations should provide the functionality to track an MRBA from a broad level, such as examination area, to more specific levels, including concern type.

The Federal Reserve provided written comments summarized below and reprinted in appendix V. The Federal Reserve did not state whether it agreed or disagreed with Recommendations 2 and 4 but responded that it would take our recommendations into consideration. For Recommendation 2 (to update policies and procedures for communicating supervisory concerns to provide more complete information, such as likely cause (when practicable) and potential effect), the Federal Reserve stated it recognizes that more effectively communicating supervisory concerns may achieve faster resolution of identified deficiencies and ultimately promote a more resilient banking system.
The Federal Reserve noted it issued proposed guidance in August 2017 (which we discuss in the report) that would, in part, clarify expectations for communications of supervisory concerns, and that it continues to evaluate commenters' suggestions. The Federal Reserve stated that it will consider ways to update its policies and procedures consistent with our recommendation. For Recommendation 4 (to update policies and procedures to incorporate specific factors for escalating supervisory concerns), the Federal Reserve stated it appreciated our recognition that the decision to escalate a supervisory concern ordinarily depends on the particular facts and circumstances of each case. The Federal Reserve stated that it will consider whether there are specific factors that staff should consider when escalating supervisory concerns. The Federal Reserve and OCC also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees and the Chairman of the Board of Governors of the Federal Reserve System, the Chairman of the Board of Directors of FDIC, and the Comptroller of the Currency. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) the extent to which the federal banking regulators'—the Federal Deposit Insurance Corporation (FDIC), Board of Governors of the Federal Reserve System (Federal Reserve), and Office of the Comptroller of the Currency (OCC)—revised policies and procedures for supervision of management at large depository institutions were consistent with leading risk-management practices; (2) how examiners applied agency policies and procedures for supervision of management at the large depository institutions they oversee; and (3) trends in regulators' supervisory concern data for all depository institutions since 2012 and how regulators tracked and used such data.

General Methodology

To address all our objectives, we focused on risk-management issues, such as those related to corporate governance, internal controls, and internal audit, because management weaknesses in these areas could threaten the safe and sound operation of a depository institution. We selected this approach because recent GAO reports have addressed risk-management issues related to financial conditions such as capital and liquidity requirements, stress testing, and commercial real estate risk. We reviewed relevant federal laws and regulations, including sections of the Federal Deposit Insurance Act, Federal Reserve Act, National Bank Act, and interagency regulations on safety and soundness. We reviewed prior GAO reports, including reports on quantitative risk-management issues as they relate to financial condition, supervision of compliance with laws and regulations, and regulatory capture in bank supervision. We reviewed reports from the Offices of Inspector General for the federal banking regulators. We also drew on prior and ongoing work related to regulatory capture in bank supervision. In addition, we reviewed the 2013 OCC-commissioned assessment of OCC's supervision of large and mid-size institutions.
We interviewed staff at FDIC, the Federal Reserve, and OCC about examination policies and procedures for large depository institutions, processes related to supervision of management at such large institutions, and use of supervisory concerns to address weaknesses they identified. We interviewed staff in the Office of the Inspector General at each banking regulator. We also interviewed three industry representatives with prior experience in bank supervision to obtain their perspectives on bank examinations and supervisory concerns.

Reviewing the Extent to Which Regulators' Revised Policies and Procedures Were Consistent with Leading Practices

For this objective, we took steps to identify relevant changes to examination approaches and processes (focusing on oversight of qualitative risk-management activities and communication of supervisory concerns). First, we obtained confirmation from the regulators of the list of policies and procedures and other guidance documents we identified for review and solicited suggestions for additional documents to review. We then reviewed and analyzed guidance the agencies issued to examiners and depository institutions relevant to (1) assessment of board and senior management's management of risks, (2) metrics used to measure risk, and (3) assessment of depository institutions' internal controls and audit procedures. Specifically, we reviewed and described regulators' policy and procedural manuals, supervisory statements, and other supervisory guidance issued since 2009 to identify changes to each agency's approach and process subsequent to the financial crisis. We focused primarily on changes to address oversight of risk management.

We then reviewed documents from several standard-setting organizations to identify criteria for assessing risks and risk management. More specifically, we reviewed federal internal control standards; the Internal Control—Integrated Framework of the Committee of Sponsoring Organizations of the Treadway Commission (COSO); safety and soundness standards developed by the federal banking regulators; the Core Principles for Effective Banking Supervision of the Basel Committee on Banking Supervision; the Federal Reserve's enhanced prudential standards regulation, which applies to bank holding companies with assets greater than $10 billion and thus applies to the bank holding companies that own the depository institutions within the scope of our review; and GAO reports developing risk-management frameworks for government entities.

Based on these documents, we selected a list of criteria to use in assessing the regulators' risk-management guidance for examining large depository institutions (see table 3). We made connections between the principles listed in each of the documents to highlight the key elements of risk assessment, risk measurement, corporate governance, internal controls, and internal audit requirements. Additionally, we factored in regulators' consideration of compliance with laws and regulations in their evaluation of the management component of CAMELS (capital adequacy, asset quality, management, earnings, liquidity, and sensitivity to market risk). Specifically, for the first three criteria, we considered principles from GAO's Standards for Internal Control, COSO's Integrated Framework, the federal banking regulators' safety and soundness standards, and the Federal Reserve's risk-management regulation.
Additionally, for the second criterion we considered the Basel Committee on Banking Supervision's Core Principles for Effective Banking Supervision. For the fourth criterion we considered the regulators' safety and soundness standards. We also identified sub-criteria to help determine the extent to which the regulators' guidance to address past supervisory weaknesses aligned with the criteria. Our baseline for the sub-criteria related to the first criterion was that the guidance communicate the need for clear lines of authority and responsibility for monitoring internal controls. The baseline for the sub-criteria related to the second criterion was that the guidance require independence of the risk-management function. For the sub-criteria related to the third criterion, the baseline was that the guidance provide for identification of and timely action to address existing and emerging risks. Finally, for the sub-criteria related to the fourth criterion, we looked for guidance to require compliance with laws and regulations, which regulators considered in the evaluation of management performance.

Reviewing How Examiners Applied Policies and Procedures for Examinations of Risk Management at Large Depository Institutions

Selection of Institution Sample

For this objective, we undertook a multistep process to select institutions from which to obtain examination documents for review. First, we obtained the lists of institutions subject to examination by the regulators' large bank examination programs in recent years. For FDIC, these were institutions with total assets of $10 billion or more; for the Federal Reserve and OCC, generally, these were institutions with assets greater than $50 billion. More specifically, we obtained a listing of all FDIC-supervised institutions in its Large Insured Depository Institution program that were subject to examination from June 2013 through March 2017, all Federal Reserve member banks in its Large Banking Organization portfolio as of December 2016, and all OCC-supervised institutions in its Large Bank Supervision portfolio from 2012 to 2016.

Next, we selected a non-generalizable sample of three depository institutions from each of the regulators (nine in total) for which to request 2014–2016 examination documents for review. To assemble the sample, we determined the asset size of each institution supervised by the regulators' large bank examination programs as of December 2016 and selected institutions with a range of asset amounts. If these institutions were from the same geographic area (supervised by the same regional office or Reserve Bank), we selected other institutions with comparable asset amounts in order to have geographic dispersion in our sample. The purpose of this selection approach was to assess whether material differences existed in examinations conducted by the different regional offices in our sample. Also, if the selected institutions were headquartered in a foreign country, we selected other institutions with comparable asset amounts. The purpose of this selection approach was to omit institutions with only a branch office in the United States, which would allow the regulator to examine only a portion of the institution's operations. In addition, if the selected institutions were not primarily engaged in traditional banking activities, we selected other institutions with comparable asset amounts.
To make this determination, we conducted a separate analysis to determine whether (1) the institutions engaged in traditional banking activities (accepting deposits and making consumer loans), (2) traditional banking activities made up a majority of the bank's activities as recorded on the balance sheet, and (3) the bank's loan activities were primarily domestic. The purpose of this selection approach was to omit companies that primarily conduct "non-traditional" banking activities, such as investment banking and credit cards, but have a regulated depository institution to support those activities.

We conducted a separate analysis of OCC-supervised institutions in its Large Bank Supervision portfolio because a number of entities were nationally chartered banks under a foreign holding company or were not primarily depository institutions. In our analysis, we first determined whether (1) an institution engaged in traditional banking activities, (2) traditional banking activities made up a majority of its activities as recorded on the balance sheet, and (3) the institution's loan activities were primarily domestic. We included three federal savings banks in our universe of OCC-supervised institutions because we determined they were subject to many of the same supervision policies and procedures as national banks. We then noted that the geographic location of an institution's examiner-in-charge determined the regional office to which the examiner-in-charge reported. To obtain geographic dispersion, we based our selection on the location of the examiners-in-charge to ensure that each examiner was associated with a different regional office. Using these criteria and considerations, we selected small, moderate, and large OCC-supervised institutions.

Document Selection and Development of Questions for Regulators

To determine how regulators applied agency policies and procedures for supervision of management during examinations of large depository institutions, we requested selected examination documents from the regulators for the nine institutions we selected. For FDIC, we initially requested 2016 examination documents for the three selected large institutions subject to the Large Insured Depository Institution program. For the Federal Reserve, we initially requested 2016 examination documents for the three selected large institutions subject to the Large Banking Organization program. For OCC, we initially requested 2016 examination documents for the three selected large national banks subject to the Large Bank Supervision program. We reviewed these examination documents to learn how examiners reviewed qualitative risk-management issues, such as those relating to the management component of CAMELS. Based on our initial review, we submitted another document request to the regulators.

FDIC. Through our initial review of FDIC documents, we identified the risk categories for which FDIC examined corporate-wide risk-management functions.
We then requested relevant examination documents for each of the three FDIC-supervised institutions, such as scope, summary, and conclusion memorandums and supervisory letters related to corporate-wide risk-management functions and the Bank Secrecy Act; examination documentation for supervisory recommendation (remediation) follow-up reviews conducted during the 2014, 2015, and 2016 supervisory cycles; summary examination documents related to ongoing monitoring work; explanations of planned target review areas that appeared to cover review of corporate-wide risk-management functions for the same supervisory cycles but had not been completed; and supervisory plans and reports of examination for the 2014 and 2015 examination cycles. In total, we reviewed 94 FDIC examination documents. We took as criteria the examination procedures from the examination documentation modules referenced in FDIC's Basic Examination Concepts and Guidelines and the Management portion of the agency's examination policy manual. We also incorporated elements of other FDIC policies and procedures, such as those relating to internal routine and controls, dominant officials, and incentive compensation. Our criteria also included FDIC memorandums to assess communication and follow-up on supervisory recommendations, including matters requiring board attention (MRBA). Finally, we used information on enforcement policies and procedures in the agency's Report of Examination Instructions manual.

Federal Reserve. Based on our initial review, we requested conclusion memorandums and supervisory letters (letters of findings) pertaining to several targeted and enhanced continuous monitoring examinations the Federal Reserve conducted during the 2014, 2015, and 2016 supervisory cycles at the three institutions we selected. In total, we reviewed 83 Federal Reserve examination documents. To assess how examiners applied agency policies and procedures, we used examination procedures contained in the Commercial Bank Examination Manual for most of our criteria. In particular, the Commercial Bank Examination Manual includes a section on "Assessment of the Bank" with detailed examination procedures for review of boards of directors, management, internal controls, and audit. In addition, we used guidance from supervision and regulation letters to the extent the information was not incorporated in the manuals.

OCC. Based on our initial review, we requested examination documents for targeted and ongoing examination work related to enterprise risk management, operational risk, and other safety and soundness (management) issues for the 2014, 2015, and 2016 examination cycles. Specifically, we requested ongoing supervision memorandums, conclusion memorandums, supervisory letters, and risk assessments. We also requested the supervisory strategy and report of examination for the 2014 and 2015 examination cycles. In total, we reviewed 268 OCC examination documents. As criteria, we applied examination procedures from the Large Bank Supervision booklet for certain risk elements related to bank governance and management. We also applied examination procedures for internal control and audit as criteria. In addition, we included agency guidance on follow-up for matters requiring attention (MRA) and enforcement action. We then developed questions to assess the examination documents based on the criteria we selected. See appendix III for our list of questions.
Assessing How Examiners Applied Policies and Procedures

Using a data collection instrument populated with the selected questions, we assessed each of the regulators' examination documents. To demonstrate how examiners applied each criterion, we either took language from the examination document or included explanatory language describing what the examiner did during the examination to assess risk management. We also tracked the examiner's findings on each individual risk area we reviewed to the annual report of examination to ensure that the risk was considered in the context of the entire institution. The results of our review of depository institution examination reports and examination documents are not generalizable to all of the regulators' examination reports and documents. Each individual review serves as an independent assessment of the examiners' application of relevant agency guidance.

Examining How Regulators Tracked and Used Supervisory Concern Data

To evaluate the extent to which the federal banking regulators ensured that large depository institutions addressed risk management-related supervisory concerns (such as MRAs) since 2012, we (1) analyzed the regulators' policies and procedures for escalating supervisory concerns to enforcement actions, and (2) analyzed aggregate supervisory concern data from 2012 to 2016 for all institutions supervised by FDIC, the Federal Reserve, and OCC. We did not collect data on all the different types of supervisory concerns issued. In particular, we did not collect data on supervisory recommendations by FDIC or matters requiring immediate attention (MRIA) by the Federal Reserve. Therefore, our analysis of the data does not provide a complete representation of the status of supervisory concerns issued by the regulators. To examine trends, we requested that each regulator provide the data by risk category so that we could analyze whether certain risk areas generated more timely resolution of risk management-related supervisory concerns and whether supervisory concerns were escalated to enforcement actions.

FDIC. Because of the current structure of FDIC's data collection and storage systems, FDIC could not provide data on MRBAs in a format that would have been easily analyzable for our purposes. Specifically, FDIC examiners enter summary information about MRBAs into the system with no categorization by examination or risk area. FDIC provided us two data sets: raw data downloaded from its ViSION system, and a data set sorted by topics, which was prepared by the FDIC Emerging Risks section and used for publication in FDIC's Supervisory Insights newsletter. For large institutions, FDIC informed us that the data were not complete because MRBAs reflected in ViSION were those that remained open at the end of the year when the annual report of examination was issued; MRBAs opened and closed during the examination cycle were not recorded in the system. Due to the limitations of the data and the inability to combine the data sets, some analyses were completed with the raw data set and others with the data set divided by topics. As a result, the analysis provides a general understanding of trends in FDIC supervisory concerns, rather than a rigorous trend analysis.

Federal Reserve. We obtained data on MRAs issued to all Federal Reserve-supervised institutions from 2012 through 2016.
The Federal Reserve has two systems for recording and tracking supervised institution data: the "C-SCAPE" platform for institutions with assets greater than $50 billion and all foreign banks, and the "INSite" platform for smaller community banks. Some of the MRA data were not categorized by supervisory concern and were assigned a "null" value. According to Federal Reserve staff, in 2012 the Federal Reserve migrated from a legacy tracking system to the current C-SCAPE platform. The MRA data contain both broad MRA categories and sub-categories for greater detail. For ease of explanation and analysis, the data under the sub-categories were consolidated under their larger categories. The number of MRAs uncategorized by supervisory concern did not present a significant obstacle to our analysis. The data on escalation of MRAs to MRIAs and enforcement actions were collected in a manner that made it difficult for us to determine the extent of escalation. Specifically, the glossary that was provided with the data stated that issues closed through the "transformation process" are marked "closed" and are distinguished from other closed issues by indicating how they were closed (for example, transformed to MRA, transformed to MRIA, or transformed to provision). We determined that any results we produced regarding escalation would be unreliable given the lack of clarity around data collection methods.

OCC. We obtained MRA data from OCC that included records opened from January 2012 through December 2016. OCC's supervisory information system is Examiner View, in which examiners record, update, and view MRAs (among other things). For our purposes, OCC staff stated that we could use the data to count the number of concerns; however, analyzing the concerns by category could have been problematic because of changes to the classification method that occurred in October 2014 and March 2017. As a result of the 2017 changes, OCC supervisory concern data are recorded in a four-level framework (examination area, category, concern type, and topic) that allows for tracking of supervisory concerns at the MRA level and at the "concern" level. Before 2017, the information was classified differently. The newer data allow for enhanced trend analysis and risk identification. We were able to analyze OCC data to show the MRAs issued in 2012–2016 by exam area. We also could show trends in risk management-specific exam areas, as well as the average time it took to close risk management-specific concerns. Furthermore, we obtained and analyzed data on MRAs that were escalated to enforcement actions.

For all the regulators, we assessed the reliability of the data. First, we interviewed staff at each of the regulators who were knowledgeable about the data. We asked for the source of the data, how frequently the data were updated, and what controls were in place to ensure the data were accurate and complete. Additionally, in assessing the reliability of the data, we reviewed internal reports and other documents prepared by the regulators. Specifically, for FDIC we reviewed management reports for each quarter of fiscal year 2017. For the Federal Reserve, we analyzed draft 2017 annual assessment letters, feedback from the Operating Committee of the Large Institution Supervision Coordinating Committee to dedicated supervisory teams, and other organizing documents. For OCC, we analyzed management reports to different oversight committees for calendar year 2017.
While the data did not allow all of the analysis we had planned to complete, overall, we determined that the FDIC, Federal Reserve, and OCC data were reliable for purposes of showing general trends in the number of supervisory concerns, the time frames for closing supervisory concerns, and—additionally for OCC—the number of supervisory concerns escalated to enforcement actions.

We conducted this performance audit from March 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Federal Banking Regulators' Risk-Management Examination Policy and Procedure Documents We Reviewed

This appendix lists the federal banking regulators' policy and procedure documents included in our review.

Federal Deposit Insurance Corporation

Division of Risk Management Supervision Manual of Examination Policies – Basic Examination Concepts and Guidelines section (section 1.1), including relevant Financial Institution Letters and internal memorandums. Provides an overview of the Federal Deposit Insurance Corporation (FDIC) bank examination process, including the rationale for examinations; the Uniform Financial Institutions Rating System, also known as CAMELS (capital adequacy, asset quality, management, earnings, liquidity, and sensitivity to market risk); examination types; scheduling guidelines; and communication with management.

Division of Supervision and Consumer Protection Risk Management Manual of Examination Policies – Management section (section 4.1), including relevant internal memorandums. Focuses on the management component of CAMELS ratings, with the main emphasis on the powers, responsibilities, and duties vested in bank directors. It also includes policies and procedures for identifying and assessing the influence of dominant bank officials.

Division of Risk Management Supervision Manual of Examination Policies – Internal and Routine Controls section (section 4.2), including relevant internal memorandums. Discusses internal controls, internal control programs, management's responsibilities, and internal control and fraud review examination instructions, and includes a reference tool for examiners.

Division of Risk Management Supervision Manual of Examination Policies – Informal Actions section (section 13.1). Identifies procedures for memorandums of understanding to address weak operating practices, deteriorating financial conditions, apparent violations of laws or regulations, or weak risk-management practices.

Division of Risk Management Supervision Manual of Examination Policies – Formal Administrative Actions section (section 15.1). Identifies the statute and regulations that authorize the use of formal enforcement actions when necessary to reduce risks and address deficiencies, particularly when an insured state nonmember bank is rated 4 or 5 and evidence of unsafe or unsound practices is present.

Division of Risk Management Supervision Manual of Examination Policies – Report of Examination Instructions section (section 16.1), including relevant Financial Institution Letters.
Includes procedures for examiners to communicate supervisory recommendations (including matters requiring board attention and deviations from the safety and soundness principles underlying policy statements) and identifies schedules for inclusion in reports of examination.

Large Bank Supervision Procedures (internal manual), including relevant internal memorandum. Describes procedures and processes (in three broad categories: planning, examination activities, and communication) for conducting continuous examination programs at state nonmember banks with total assets exceeding $10 billion.

Supervisory Recommendations, Including Matters Requiring Board Attention (internal memorandum). Describes policies and procedures for scheduling supervisory recommendations (including matters requiring board attention) in reports of examination and for tracking bank management's actions in response to these items after examinations.

Pocket Guide for Directors and Statement Concerning the Responsibilities of Bank Directors and Officers. The pocket guide describes FDIC's expectations for boards of directors of institutions to carry out their duties. A second document, the statement, responds to concerns expressed by representatives of the banking industry and others regarding civil damage litigation risks to directors and officers of federally insured banks.

Board of Governors of the Federal Reserve System

Consolidated Supervision Framework for Large Financial Institutions (SR 12-17). Framework for consolidated supervision of large financial institutions with more than $10 billion in total assets.

Bank Holding Company Supervision Manual. Provides guidance to examiners as they conduct on-site inspections of bank holding companies and their nonbank subsidiaries.

Provides guidance to examiners as they assess risk-management practices of state member banks, bank holding companies, and savings and loan holding companies (including insurance and commercial savings and loan holding companies) with less than $50 billion in total consolidated assets, and foreign banking organizations.

Supervisory Considerations for the Communication of Supervisory Findings (SR 13-13/CA 13-10). Discusses the standard language the Federal Reserve uses to enhance focus on matters requiring attention and highlights supervisory expectations for corrective actions, Reserve Bank follow-up, and other supervisory considerations. Also defines matters requiring attention and matters requiring immediate attention and outlines procedures that safety-and-soundness and consumer compliance examiners will follow in presenting and communicating their supervisory findings.

Framework for Risk-Focused Supervision of Large Complex Institutions, including relevant supervision and regulation letter (SR 97-24). Describes aspects of the Federal Reserve's program to enhance the effectiveness of its supervisory processes for state member banks, bank holding companies, and the U.S. operations of foreign banking organizations.

Rating the Adequacy of Risk Management Processes and Internal Controls at State Member Banks and Bank Holding Companies (SR 95-51). Directs examiners to assign a separate rating for risk management to state member banks and bank holding companies with $50 billion or more in total assets, and highlights the importance of risk management as a facet of the supervisory process.
Office of the Comptroller of the Currency

Comptroller's Handbook – Bank Supervision Process. Includes explanatory materials on types of banks, supervision responsibilities, regulatory ratings, supervisory process, functional regulation, rating systems, and disclosure.

Comptroller's Handbook – Large Bank Supervision. Outlines the supervisory process for large banks: the core assessment, risk assessment system, evaluation of bank internal control, and audits.

Comptroller's Handbook – Corporate and Risk Governance. Focuses on management of a variety of risks and the roles and responsibilities of the board of directors and senior management, and provides relevant examination procedures.

Comptroller's Handbook – Internal and External Audits. Addresses risks inherent in the audit function (which comprises both internal and external audit functions) and the audit function's role in managing risks. Also addresses internal and external audit functions' effect on risk-management supervisory expectations and the regulatory requirements for prudent risk management. Includes guidance and examination procedures to assist examiners in completing bank core assessments affected by audit functions.

Comptroller's Handbook – Internal Controls. Discusses the characteristics of effective controls to assist examiners and bankers in assessing the quality and effectiveness of internal control. Describes OCC's supervisory process for internal control reviews and the roles and responsibilities of boards of directors and management.

Enforcement Action Policy (Policies and Procedures Manual 5310-3), internal memorandum. Describes policy for taking appropriate enforcement action in response to violations of law, rules, regulations, final agency orders, and unsafe or unsound practices and conditions.

Violations of Laws and Regulations (Bulletin 2017-18). Describes updated policies and procedures on violations of laws and regulations and provides the agency with consistent terminology for communication, format, follow-up, analysis, documentation, and reporting of violations. Articulates the level and type of risk the agency will accept while conducting its mission.

Matters Requiring Attention (Policies and Procedures Manual 5400-11), internal memorandum. Describes procedures for examiners to identify and aggregate supervisory concerns into matters requiring attention, including criteria, communication, and follow-up of concerns. Also describes the relationship between matters requiring attention and interagency ratings, OCC's risk-assessment system, and enforcement actions. Includes examiner tools in the appendixes.

Risk Management of New, Expanded, or Modified Bank Products and Services (Bulletin 2004-20, replaced by Bulletin 2017-43). Outlines the expectations for national banks' management and boards to implement an effective risk-management process to manage risks associated with new, expanded, or modified bank products and services.

Interagency Policies

Guidance on Sound Incentive Compensation Policies, 75 Fed. Reg. 36395 (June 25, 2010). Interagency statement on sound incentive compensation practices for banking organizations supervised by FDIC, the Board of Governors of the Federal Reserve System (Federal Reserve), and the Office of the Comptroller of the Currency (OCC). It is intended to assist banking organizations in designing and implementing incentive compensation arrangements and related policies and procedures that effectively consider potential risks and risk outcomes.
Appendix III: GAO Questions for Evaluating How Federal Bank Examiners Applied Risk-Management Guidance for Large Depository Institutions

This appendix lists the questions we used to determine how federal bank examiners applied their policies and procedures to assess management oversight of risk at large depository institutions. We found that each federal banking regulator has slight variation in its policies and procedures for oversight of management at large depository institutions. Therefore, we did not apply generally applicable criteria in our assessment; instead, we applied the specific policies and procedures used by each federal banking regulator.

Federal Deposit Insurance Corporation:

1. To what extent did examiners assess board and management oversight?
2. To what extent did examiners assess the bank's control environment, including whether management takes appropriate and timely action to address recommendations by auditors and regulatory authorities?
3. To what extent did examiners assess the bank's risk assessment?
4. To what extent did examiners assess the bank's control activities, to include determining if policies, procedures, and practices were adequate for the size, complexity, and risk profile of the bank and if management took appropriate steps to comply with laws and regulations?
5. To what extent did examiners assess the bank's information and communication, to include adequacy of information systems to identify, capture, and report relevant internal and external information?
6. To what extent did examiners assess the bank's systems in place to monitor risk arising from all major activities the bank is engaged in with respect to b. legal risk, and c. reputation risk?
7. In identifying matters requiring attention, did examiners consistently explain the rationale for the concern (whether the matter deviates from sound governance or internal controls and how it could adversely impact the condition of the institution)?
8. In communicating matters requiring attention, did examiners a. write in clear and concise language, b. describe the deficient practices, operations, or financial condition, and c. recommend actions the board should take to address the deficiency?
9. What steps did examiners take to follow up on matters requiring attention and verify completion?
10. To what extent did the examiner comment on how the bank accomplished compliance with enforcement actions or the reason why the bank is not in compliance with enforcement actions?

Conclusions: To what extent did examiners follow agency risk-management guidance for this examination? To what extent do the conclusion memorandums link to the supervisory letter and report of examination?

Board of Governors of the Federal Reserve System:

1. Within the context of the consolidated financial entity, to what extent did examiners assess the bank's implementation of its corporate governance framework?
2. Within the context of the consolidated financial entity, to what extent did examiners assess management of the bank's core business lines?
3. To what extent did the examiners assess the bank's board and management for active oversight of the bank, to include the extent to which examiners a. assessed the adequacy of the bank directors' fulfillment of their duties and responsibilities; and b. assessed bank management's fulfillment of their duties and responsibilities?
4. To what extent did examiners assess the adequacy of the bank's policies, procedures, and limits?
5. To what extent did examiners assess the adequacy of the bank's risk monitoring and management information systems?
6. To what extent did examiners assess the adequacy of the bank's internal controls?
7. To what extent did examiners assess the adequacy of the bank's audit function, to include a. internal audit staff, c. internal audit function adequacy and effectiveness, d. external audit staff, and e. regulatory examinations?
8. How did examiners assess the Management rating for CAMELS?
9. In identifying matters requiring attention, did examiners consistently explain the rationale for the concern?
10. In communicating matters requiring attention, did examiners a. write in clear and concise language, b. prioritize based upon degree of importance, and c. focus on any significant matters that require attention?
11. To what extent did examiners follow up on matters requiring attention and verify completion?
12. To what extent did the examiner comment on how the bank accomplished compliance with enforcement actions or the reason why the bank was not in compliance with enforcement actions?

Conclusions: To what extent did examiners follow agency risk-management guidance for this examination? To what extent do the conclusion memorandums link to the supervisory letter and report of examination?

Office of the Comptroller of the Currency:

1. To what extent did the examiners assess the quantity and quality of b. reputation risk, c. operational risk, and d. compliance risk?
2. To what extent did the examiners assess the bank's internal controls, d. accounting information, communication, and e. self-assessment and monitoring?
3. To what extent did the examiners assess the bank's audit function, b. audit management and processes, c. audit reporting, and d. internal audit staff?
4. How did examiners assess the Management rating for CAMELS?
5. In identifying matters requiring attention, did examiners consistently determine whether the concern a. deviates from sound governance, internal control, or risk management principles, and has the potential to adversely affect the bank's condition, including its financial performance or risk profile, if not addressed; b. results in substantive noncompliance with laws and regulations, enforcement actions, supervisory guidance, or conditions imposed in writing in connection with the approval of any application or other request by the bank; or c. describes an unsafe or unsound practice? An unsafe or unsound practice is generally any action, or lack of action, which is contrary to generally accepted standards of prudent operation, the possible consequences of which, if continued, would be abnormal risk or loss or damage to an institution, its shareholders, or the Deposit Insurance Fund.
6. In communicating matters requiring attention, did examiners a. describe the concern(s); b. identify the root cause(s) of the concern and contributing factors; c. describe potential consequence(s) or effects on the bank from inaction; d. describe supervisory expectations for corrective action(s); and e. document management's commitment(s) to corrective action and include the time frame(s) and the person(s) responsible for corrective action?
7. In follow-up on matters requiring attention, did examiners consistently a. monitor the board and management's progress in implementing corrective actions; b. verify and validate the effectiveness of the board and management's corrective actions; c. perform timely verification after receipt of the documentation or communication from the bank that the documentation is ready for review;
d. meet, as necessary, with the bank's board or management to discuss progress assessments and verification results; and e. deliver written interim communications to the board summarizing the findings of validation activity?
8. To what extent did examiners verify and validate bank actions to comply with enforcement actions?

Conclusions: To what extent did examiners follow agency risk-management guidance for this examination? To what extent do the conclusion memorandums link to the supervisory letter and report of examination?

Appendix IV: Comments from the Federal Deposit Insurance Corporation

Appendix V: Comments from the Board of Governors of the Federal Reserve System

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Karen Tremba (Assistant Director), Philip Curtin (Analyst in Charge), Enyinnaya David Aja, Bethany Benitez, Rachel DeMarcus, M'Baye Diagne, Risto Laboski, Yola Lewis, Christine McGinty, Kirsten Noethen, David Payne, Amanda Prichard, Barbara Roesmann, Jena Sinkfield, and Farrah Stone made key contributions to the report.
Why GAO Did This Study

Weaknesses identified after the 2007–2009 financial crisis included management weaknesses at large depository institutions and the need for federal regulators (FDIC, Federal Reserve, and OCC) to address the deficiencies in a timely manner. Concerns remain that positive economic results of recent years could mask underlying risk-management deficiencies. This report examined (1) how consistent regulators' revised policies and procedures are with leading risk-management practices, (2) how they applied examination policies and procedures, and (3) trends in supervisory concern data since 2012 and how regulators tracked such data. GAO compared regulators' policies and procedures for oversight against leading practices; compared documents from selected bank examinations for 2014–2016 against regulators' risk-management examination procedures; reviewed aggregate supervisory concern data for 2012–2016; and interviewed regulators and industry representatives.

What GAO Found

Since 2009, federal banking regulators have revised policies and procedures for use by examiners in supervising depository institutions' management activities (such as those related to corporate governance and internal controls) and for identifying and communicating supervisory concerns. For example, regulators differentiated levels of severity for supervisory concerns and specified when to communicate them to boards of directors at the depository institutions. GAO found that the updated policies and procedures generally were consistent with leading risk-management practices, including federal internal control standards.

Examination documents that GAO reviewed showed that examiners generally applied the regulators' updated policies and procedures to assess management oversight at large depository institutions. In particular, for the institutions GAO reviewed, the regulators communicated deficiencies before an institution's financial condition was affected, and followed up on supervisory concerns to determine progress in correcting weaknesses. However, practices for communicating supervisory concerns to institutions varied among regulators, and some communications do not provide complete information that could help boards of directors monitor whether deficiencies are fully addressed by management. Written communications of supervisory concerns from the Federal Deposit Insurance Corporation (FDIC) and the Board of Governors of the Federal Reserve System (Federal Reserve) that GAO reviewed often lacked complete information about the cause of the concern and, for the Federal Reserve, also lacked information on the potential consequences of the concern, which in one instance led to an incomplete response by an institution. Communicating more complete information to boards of directors of institutions, such as the reason for a deficient activity or practice and its potential effect on the safety and soundness of operations, could help ensure more timely corrective actions.

While supervisory concern data indicated continuing management weaknesses, regulators vary in how they track and use the data. Data on supervisory concerns, and regulators' internal reports based on the data, indicated that regulators frequently cited concerns about the ability of depository institution management to control and mitigate risk. However, FDIC examiners only record summary information about certain supervisory concerns and not detailed characteristics of concerns that would allow for more complete information.
With more detailed information, FDIC management could better monitor whether emerging risks are resolved in a timely manner. In addition, the regulators vary in the nature and extent of data they collect on the escalation of supervisory concerns to enforcement actions. FDIC and the Office of the Comptroller of the Currency (OCC) have relatively detailed policies and procedures for escalation of supervisory concerns to enforcement actions, but the Federal Reserve does not. According to Federal Reserve staff, in practice they consider factors such as the institution's response to prior safety and soundness actions. But the Federal Reserve lacks specific and measurable guidelines for escalation of supervisory concerns, relying solely on the judgment or experience of examiners, their management, and Federal Reserve staff, which can result in inconsistent escalation practices.

What GAO Recommends

GAO recommends that FDIC and the Federal Reserve improve information in written communication of supervisory concerns; FDIC improve recording of supervisory concern data; and the Federal Reserve update guidelines for escalating supervisory concerns. FDIC disagreed with the first recommendation, stating its policies address the issue, but GAO found clarification is needed. FDIC agreed with the second recommendation. The Federal Reserve neither agreed nor disagreed with the recommendations.
Background

A skilled acquisition workforce is vital to maintaining military readiness, increasing the department's buying power, and achieving substantial long-term savings through activities such as systems engineering and contract administration. As of September 2018, DOD's civilian acquisition workforce comprised about 157,000 civilian personnel (see figure 1). About 60 percent of DOD's civilian acquisition workforce personnel held positions in 3 of 14 acquisition career fields: engineering, contracting, and life cycle logistics (see table 1).

Prior Studies on DOD Acquisition Workforce Challenges

We have previously found that DOD has faced various challenges in growing and sustaining its acquisition workforce, including challenges with hiring, recruiting, and retaining personnel. In December 2015, we found that over the preceding 20 years, DOD had significantly reduced and then subsequently increased the size of its acquisition workforce. During the 1990s, as defense budgets decreased, DOD reduced the size of its military and civilian acquisition workforce, and by the early 2000s it began relying more heavily on contractors to perform many acquisition support functions. According to DOD, between 1998 and 2008, the number of military and civilian personnel performing acquisition activities decreased 14 percent, from about 146,000 to about 126,000 personnel. Due to concerns about skill gaps within the workforce and growing reliance on contractors, the Secretary of Defense announced his intention in 2009 to rebalance the workforce mix. In 2010, DOD issued an acquisition workforce plan that specified DOD would add 20,000 military and civilian personnel to its acquisition workforce by fiscal year 2015. DOD subsequently increased the size of its military and civilian acquisition workforce by 21 percent between 2008 and 2015 to about 153,000 personnel, but did not accomplish growth goals set for certain priority career fields, such as engineering and contracting. DOD officials stated that the shortfalls were largely the result of high attrition rates, difficulty hiring qualified personnel, and budget constraints.

In May 2018, we found that DOD's Science and Technology Reinvention Laboratories (defense laboratories), which include acquisition workforce personnel, experienced delays in security clearances and in human resource processing of personnel actions, which contributed to a lengthy hiring process. We also found that the delays made it difficult for defense laboratories to hire highly qualified candidates. Similarly, in June 2018, the Section 809 Panel identified DOD's cumbersome hiring process as a challenge for shaping its acquisition workforce. The Section 809 Panel emphasized that these challenges undermine DOD's ability to successfully recruit top candidates into the acquisition workforce.

Most recently, in March 2019, we reported that DOD had not developed metrics to track progress associated with shaping the future acquisition workforce, such as workforce targets as a whole or by specific career fields. For example, we reported that DOD issued an updated acquisition workforce strategic plan in October 2016 which, among other things, assessed its current capacity and capability, and identified risks that DOD needed to manage to meet future needs. In addition, in September 2017, DOD issued a workforce rationalization plan. However, neither the October 2016 strategic plan nor the September 2017 workforce rationalization plan established specific size targets.
We noted that without such metrics, DOD would not be able to demonstrate that its strategic workforce planning efforts and associated initiatives were successful, despite increasing the size of its acquisition workforce beyond its earlier target.

Federal Hiring Process and Available Flexibilities

DOD's challenges with hiring civilian acquisition workforce personnel are not unique within the federal government. The traditional method of hiring for the federal government, also known as the competitive examining process, has been characterized by federal agencies as rigid and lacking in flexibility. The traditional hiring method generally requires agencies to, among other things, notify the public that the government will accept job applications for a position, screen applications against minimum qualification standards, apply selection priorities such as veterans' preference, and assess applicants' relative competencies or knowledge, skills, and abilities against job-related criteria to identify the most qualified applicants. In 2008, OPM established a roadmap for the traditional hiring method, including an 80-day time-to-hire goal (see figure 2).

To address some of the human capital challenges federal agencies face, statutes have provided hiring, recruitment, and retention flexibilities that provide agencies with tools to help manage their workforces. Legislation has also provided hiring flexibilities exclusively to DOD for specified purposes, including hiring acquisition workforce personnel.

Hiring Flexibilities

Hiring flexibilities can help the government fill critical skills gaps or achieve certain public policy goals, such as employing veterans. As of September 2018, we identified 46 hiring flexibilities that DOD could use to hire civilian acquisition workforce personnel, including the following.

DOD Direct Hire Authorities. These authorities help expedite the hiring process by allowing DOD to hire candidates without regard to certain provisions in Title 5, such as veterans' preference and applicant rating and ranking. According to DOD officials, using direct hire authorities can reduce the time to hire personnel by nearly half as compared to the traditional hiring method. We identified 14 DOD direct hire authorities in effect as of fiscal year 2018 that DOD could use to hire civilian acquisition workforce personnel. For example, the "expedited hiring authority for certain defense acquisition workforce positions" (expedited hiring authority for acquisition positions) permits the Secretary of Defense to determine that a shortage of candidates or a critical hiring need exists for certain acquisition workforce positions, and to recruit and appoint qualified persons directly to those positions. For the purposes of the expedited hiring authority, in December 2015, the Secretary of Defense designated 12 of the 14 acquisition career fields as critical or understaffed, including the engineering, contracting, and life cycle logistics career fields.

DOD Civilian Acquisition Workforce Personnel Demonstration Project (AcqDemo) Hiring Authorities. According to HCI officials, about 19 percent of DOD's civilian acquisition workforce personnel participate in the AcqDemo performance management system, an alternative to the General Schedule pay system. Hiring managers under AcqDemo may use AcqDemo-specific hiring flexibilities, such as direct hire appointments for the business and technical management professional career path, in addition to hiring flexibilities available DOD-wide.
Veterans-Related Hiring Authorities. These authorities allow agencies to hire certain veterans without regard to certain provisions in Title 5. For example, agencies may appoint eligible veterans under the Veterans' Recruitment Appointment authority without competition under limited circumstances or otherwise through excepted service hiring procedures.

Pathways Programs. These programs promote employment opportunities in the federal government for students and recent graduates through an exception to the competitive hiring rules for certain positions in the federal workforce.

Appendix I provides additional information on the hiring flexibilities that were available to DOD's civilian acquisition workforce as of September 2018.

Recruitment and Retention Flexibilities

Sections of Title 5 outline recruitment and retention flexibilities that agencies can offer to prospective and current employees to help recruit and retain highly qualified personnel. Like other federal agencies, DOD can use these incentives to recruit and retain civilian personnel, including those in the acquisition workforce.

Recruitment bonuses may be paid to a newly hired federal employee if the agency determines that the position would be difficult to fill in the absence of a bonus.

Relocation bonuses may be paid to a current employee who must relocate for a position in a different geographic area if the agency determines that the position would be difficult to fill in the absence of a bonus.

Retention bonuses may be paid to a current employee if the agency determines that the unusually high or unique qualifications of the employee or a special need of the agency for the employee's services makes it essential to retain an employee who would likely leave federal service in the absence of such a bonus.

Student loan repayments may be paid on behalf of a job candidate or a current employee to recruit or retain highly qualified personnel. The employee must sign a service agreement of at least 3 years with the agency that pays the loans. Federal agencies may pay up to $10,000 per employee per calendar year, totaling no more than $60,000 for any one employee.

DOD can fund the four monetary incentives with the Defense Acquisition Workforce Development Fund (DAWDF)—a dedicated funding source for the recruitment, training, and retention of DOD's acquisition personnel—as well as other sources, such as operations and maintenance appropriations. Appendix II provides additional information on the recruitment and retention flexibilities available to DOD's civilian acquisition workforce as of September 2018.

DOD Acquisition Workforce and Civilian Personnel Leaders

Several offices within DOD play key roles in managing how the department uses hiring, recruitment, and retention flexibilities for the civilian acquisition workforce. For example, HCI oversees department-wide acquisition workforce strategic planning; DCPAS develops implementation guidance on how DOD flexibilities should be used; and civilian personnel centers track the extent to which flexibilities are used (see table 2).

DOD Has Increased Its Use of Hiring, Recruitment, and Retention Flexibilities for Its Civilian Acquisition Workforce

From fiscal year 2014 to 2018, DOD increased its use of hiring, recruitment, and retention flexibilities for its civilian acquisition workforce. During this period, DOD used hiring flexibilities for 90 percent of its approximately 44,000 civilian acquisition workforce hiring actions.
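A usage rate like the 90 percent figure above reduces to a share computation over personnel-action records. The following minimal sketch illustrates the idea; the record layout and authority labels are hypothetical placeholders, not actual DCPDS fields or codes.

```python
import pandas as pd

# Hypothetical hiring-action records; actual DCPDS authority codes differ.
actions = pd.DataFrame({
    "fiscal_year":      [2014, 2014, 2018, 2018, 2018],
    "hiring_authority": ["traditional", "direct_hire", "direct_hire",
                         "veterans_recruitment", "traditional"],
})

# Any authority other than traditional competitive examining counts
# as a hiring flexibility for this tally.
actions["used_flexibility"] = actions["hiring_authority"] != "traditional"

# Share of hiring actions that used a flexibility, by fiscal year.
usage = actions.groupby("fiscal_year")["used_flexibility"].mean().mul(100).round(1)
print(usage)  # toy sample: 2014 -> 50.0, 2018 -> 66.7
```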
This high usage rate came as USD (A&S), USD (P&R), and the military departments' leadership encouraged their hiring managers and human resource specialists to use the hiring flexibilities to help reduce the length of the hiring process. Additionally, during this period, DOD's human resource specialists issued guidance that helped address confusion about the requirements governing the hiring authorities. Currently, USD (P&R) is leading a DOD-wide effort to consolidate direct hire authorities in an attempt to simplify their application.

During this 5-year period, DOD also increased its use of recruitment and retention flexibilities for the civilian acquisition workforce, increasing the dollar amount authorized for these flexibilities from $13.9 million in fiscal year 2014 to $33.7 million in fiscal year 2018. This increase came as DOD leadership emphasized the benefits of recruitment and retention flexibilities and oversaw concerted efforts to increase their usage through the dissemination of information to human resource specialists.

DOD Used Hiring Flexibilities for Vast Majority of Civilian Acquisition Workforce Hires from Fiscal Years 2014 to 2018

We found that DOD used hiring flexibilities for about 90 percent of DOD's approximately 44,000 civilian acquisition workforce hiring actions between fiscal years 2014 and 2018. Further, DOD increased its use of these flexibilities, which include direct hire authorities, from 80 percent in fiscal year 2014 to 95 percent in fiscal year 2018 (see figure 3).

From fiscal year 2014 to 2018, DOD used the expedited hiring authority for acquisition positions more than any other direct hire authority for its civilian acquisition workforce. Congress enacted this authority in fiscal year 2009 and, in fiscal year 2010, amended the authority to, among other things, allow hiring of all qualified applicants instead of only highly qualified applicants. Additionally, in November 2015, legislation eliminated the expedited hiring authority's expiration date and made the authority permanent. Command officials told us that they used this authority often because it does not have as many requirements as other direct hire authorities and because of their familiarity with it.

Nine of the 14 DOD direct hire authorities we identified were not available for use until fiscal year 2017 or later, either because they were enacted in or after that year or because DOD had not yet implemented them through memorandums or federal register notices (see table 3).

Since 2015, USD (A&S), USD (P&R), and leadership of the military departments have encouraged the use of hiring flexibilities—particularly direct hire authorities—over the traditional method. From July 2015 to November 2017, USD (A&S) and USD (P&R) convened five joint acquisition and human resource summits to provide a recurring forum for discussing leading practices in sustaining the acquisition workforce, including the improved use of hiring flexibilities. In October 2016, USD (A&S) issued its current acquisition workforce strategic plan for DOD and used this document to encourage implementation of direct hire authorities as appropriate. In November 2017, senior leadership in the USD (P&R) office issued a Federal Register Notice that updated and consolidated AcqDemo's rules and guidance, including introducing additional AcqDemo hiring flexibilities (see appendix I, table 8 for additional information on these flexibilities).
In 2018, the Secretary of the Navy, the Assistant Secretary of the Army for Manpower and Reserve Affairs, and the Assistant Secretary of the Air Force for Manpower and Reserve Affairs each issued memorandums to their respective departments stating that the use of direct hire authorities should be considered first in the hiring process, as appropriate. These memorandums note that direct hire authorities provide significant advantages in timeliness compared to the traditional hiring process, and encourage maximum use of direct hire authorities to the extent appropriate.

In addition to DOD leadership emphasis, command officials credited DCPAS and the military departments' civilian personnel centers for taking actions to help DOD increase its use of direct hire authorities. These officials explained that confusion among hiring managers and human resource specialists over the numerous requirements that apply to each direct hire authority constituted one of the main challenges that had previously limited DOD's use of direct hire authorities. To illustrate the potential for confusion, table 4 presents some of the direct hire authority requirements a hiring manager would have to consider under two different hiring authorities. To help address the confusion stemming from the direct hire authorities' numerous requirements, in 2017 and 2018, DCPAS and the personnel centers consolidated information on the available direct hire authorities and the requirements that govern each of them into concise and comprehensive guidance documents. As a result, command-level and personnel center officials told us that human resource specialists can now quickly find and compare available direct hire authorities to determine what may work best for their hiring needs.

Factors That Contributed to Limiting the Use of Hiring Flexibilities

We found that the military departments' use of certain direct hire authorities was limited by the amount of time it took DOD leadership to implement some of the authorities. DCPAS officials told us that although Congress enacts direct hire authorities in legislation, DOD human resource personnel and hiring managers do not use the authorities until DOD and the components issue implementing guidance. We found that DOD implemented the 14 DOD direct hire authorities anywhere from 2 to 42 months after an authority's enactment (see figure 4).

In May 2018, we reported on the 30-month lapse between the enactment of the science, technology, engineering, and mathematics direct hire authority for students at the defense laboratories and DOD's issuance of corresponding implementation guidance. Defense laboratory officials told us it took longer than anticipated to publish the federal register notice that allowed the laboratories to use the hiring authority, and they attributed the delays to coordination issues among relevant offices during the approval process. In December 2018, we found that the defense laboratories hired significantly fewer students than authorized because of the delays. To address the delays, in May 2018, we recommended that DOD establish and document time frames for its coordination process to help ensure the timely implementation of defense laboratory hiring authorities in the future. DOD concurred with our recommendation and identified actions the department plans to take to improve oversight and coordination of the defense laboratories' hiring efforts.
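The enactment-to-implementation lag discussed above is simple date arithmetic. The sketch below illustrates the computation with made-up dates; the actual enactment and guidance dates vary by authority.

```python
from datetime import date

def lag_in_months(enacted: date, implemented: date) -> int:
    """Whole months between an authority's enactment and DOD's implementing guidance."""
    return (implemented.year - enacted.year) * 12 + (implemented.month - enacted.month)

# Illustrative dates only: a 30-month lapse like the one reported for the
# defense laboratories' student direct hire authority.
print(lag_in_months(date(2015, 11, 1), date(2018, 5, 1)))  # prints 30
```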
DOD acquisition workforce and human resource officials told us that they also did not use certain direct hire authorities as much from fiscal years 2014 to 2018 because the requirements associated with them made them harder to use. For example, according to DOD guidance documents we reviewed, most of the DOD direct hire authorities applicable to the civilian acquisition workforce have expiration dates or limits on the number of hires. Table 5 provides examples of requirements governing direct hire authorities that officials identified as making the authorities more difficult to use.

Going forward, HCI and DCPAS officials told us that USD (P&R) is leading a DOD-wide effort to advise Congress on which direct hire authorities could be consolidated and which requirements could be eliminated. For example, HCI officials said that USD (P&R) recently provided Congress input on consolidating four cybersecurity-related authorities into one authority. DCPAS officials also told us they previously provided input to Congress on certain challenges hiring managers experienced in using some of the direct hire authorities. According to command officials, DCPAS recommended that Congress raise the limits on the number of personnel that could be hired under the defense laboratory direct hire authorities.

DOD Increased Its Use of Recruitment and Retention Flexibilities from Fiscal Years 2014 to 2018

We found that DOD increased its use of recruitment and retention flexibilities from fiscal year 2014 to fiscal year 2018. We also examined two other issues related to recruitment and retention—post-employment restrictions on military personnel and authorities to remove civilian acquisition workforce employees for unacceptable performance. DOD officials did not identify either issue as a major challenge for managing the civilian acquisition workforce.

Recruitment and Retention Flexibilities

We found that DOD increased its use of recruitment bonuses, relocation bonuses, retention bonuses, and student loan repayments from $13.9 million in fiscal year 2014 to $33.7 million in fiscal year 2018 (see figure 5). As part of the increased total amount of funds authorized for recruitment and retention flexibilities, DOD increased the number of awarded recruitment and retention flexibilities by approximately 140 percent between fiscal years 2014 and 2018 (see table 6).

DOD leadership has emphasized the benefits of recruitment and retention flexibilities, which helped increase their use. For example, in DOD's October 2016 acquisition workforce strategic plan, USD (A&S) stated that the acquisition workforce would increase the use of these flexibilities by leveraging DAWDF. Additionally, in November 2016, USD (A&S) and USD (P&R) held a joint acquisition and human resource summit, which highlighted efforts of an integrated product team established to expand the use of recruitment and retention flexibilities. For example, the integrated product team developed a fact sheet to answer frequently asked questions about incentives from human resource specialists and hiring managers within the military departments.

Officials from the commands and DACMs generally agreed that recruitment and retention flexibilities were useful tools in helping them recruit and retain acquisition workforce talent. To receive these monetary incentives, employees must enter into written service agreements to remain with the department for a specific period.
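Growth figures like those above can be recomputed from award-level records along the following lines; the layout here is a hypothetical stand-in for the underlying DCPDS data, not its actual schema.

```python
import pandas as pd

# Hypothetical award-level records; real data come from DCPDS.
awards = pd.DataFrame({
    "fiscal_year": [2014, 2014, 2018, 2018, 2018],
    "amount":      [10_000, 7_500, 12_000, 9_000, 15_000],
})

by_year = awards.groupby("fiscal_year").agg(
    count=("amount", "size"),
    dollars=("amount", "sum"),
)

# Percent change from fiscal year 2014 to 2018 in award count and dollars.
growth = (by_year.loc[2018] - by_year.loc[2014]) / by_year.loc[2014] * 100
print(growth.round(1))
```

Applied to the reported totals, the dollar growth works out to ($33.7 million − $13.9 million) / $13.9 million, or roughly 142 percent, in line with the approximately 140 percent increase in the number of awards.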
DACM and defense agency officials, however, noted that retention bonuses were the least effective of the monetary recruitment and retention flexibilities. For example, Defense Contract Management Agency and Air Force officials told us they do not use retention bonuses as widely because they do not view them as effective tools in retaining talent. Defense Contract Management Agency officials explained that most of the personnel who leave their agency for other jobs go to other DOD organizations or federal agencies, and retention bonuses are generally used only for employees who are likely to leave federal service. The Air Force DACM's representatives told us the Air Force decreased its use of DAWDF for retention bonuses as a result of a 2016 RAND Corporation study that found that private sector companies made minimal use of retention bonuses. According to this study, none of the 21 companies in RAND's sample—among Fortune's "100 best companies to work for"—identified retention bonuses as a primary tool to retain talent.

Lastly, command and personnel center officials we interviewed also noted that a number of factors outside of monetary recruitment and retention flexibilities influence an employee's decision to join or remain with DOD. These factors include the organization's mission and work-life programs and policies.

Post-Employment Restrictions

DOD military personnel are subject to certain post-employment restrictions that could potentially dissuade them from seeking further employment with the department as civilian personnel, but DACM and command officials told us these restrictions have not significantly affected their ability to recruit new hires. For example, the "180-day rule" prevents DOD from appointing retired military personnel to civil service positions within 180 days of the military person's retirement unless the appointment, which must be in the competitive service, is authorized by the Secretary of Defense or a designee, OPM approves the appointment, and the minimum rate of basic pay has been increased. DACM and command officials noted that retired military personnel could elect to work for private sector companies during the 180-day period. However, these officials did not cite post-employment restrictions as a major recruitment challenge for the civilian acquisition workforce and instead cited other challenges, such as limited resources dedicated to recruiting civilian personnel and hiring delays due to the security clearance process.

Removal Authority

DOD does not have specific statutory authority in Title 10, U.S. Code for removing civilian acquisition workforce personnel for unacceptable performance. However, DOD's civilian employees are subject to a longer probationary period than other civilian federal employees, and DACM and command officials told us that removing underperforming staff is easier during a probationary period than when staff are permanently employed. Officials also noted that staff tend to leave of their own accord if they are not performing well within the department.

DOD Does Not Regularly Monitor Hiring, Recruitment, and Retention Flexibilities or Assess Their Effectiveness

DOD does not regularly monitor its use of hiring, recruitment, and retention flexibilities for its civilian acquisition workforce, and despite ongoing efforts, is not yet able to systematically assess the effectiveness of these flexibilities.
HCI, the office responsible for DOD-wide acquisition workforce strategic planning, regularly monitors the overall health of the acquisition workforce, in part by reviewing and reporting statistics on workforce size, workforce gains and losses, and other workforce-related metrics on a quarterly basis. Further, as previously noted, DOD has increased its overall use of human capital flexibilities. However, HCI does not regularly monitor the military departments' and defense components' specific use of hiring, recruitment, and retention flexibilities. As a result, HCI is missing opportunities to identify variations in usage rates and use that information to determine whether there are specific issues or challenges being encountered. For example, we found that the Air Force and Navy used direct hire authorities twice as often as the Army in fiscal year 2018.

Further, while DOD leadership has emphasized that using hiring flexibilities improves DOD's ability to recruit and hire high-quality talent in a timely manner, HCI is not yet able to assess how effective the hiring flexibilities have been in achieving these goals. This is because DCPAS has not yet developed a plan to consistently measure how long it takes to hire new personnel across the department. Similarly, DCPAS has not yet established metrics to assess the quality of the new personnel DOD hires. DCPAS has efforts underway to address these issues and plans to start using these metrics in 2019.

HCI Does Not Regularly Monitor Usage of Hiring Flexibilities

DOD policy states that HCI should implement strategies and policies to help attract and retain acquisition workforce personnel. To this end, HCI officials told us they monitor the overall health of the acquisition workforce in various ways, including outreach to the DACMs on workforce challenges, as well as holding knowledge-sharing events, such as a May 2018 acquisition workforce human capital symposium. Additionally, HCI reviews and reports statistics on workforce size, workforce gains and losses, and other workforce-related metrics on a quarterly basis. For example, in its fiscal year 2019 first quarter assessment, HCI reported data on the current size of the acquisition workforce; acquisition workforce education and certification levels; and workforce gains, losses, retirement eligibility, and attrition rates, among other things, both on a DOD-wide basis as well as by acquisition career field. HCI officials told us they use these data to identify potential or emerging workforce challenges. HCI officials noted that if they identify any issues, they further analyze data to identify the root cause of the issues.

HCI officials acknowledged, however, that HCI does not regularly collect or review data on the defense components' specific use of hiring, recruitment, and retention flexibilities as part of its quarterly assessments. HCI officials stated that they collect and review hiring flexibilities data on an as-needed basis, such as in preparation for DOD acquisition workforce governance forums—including senior steering board and workforce management group meetings—in which the use of the flexibilities will be on the agenda, or in response to Congressional requests. HCI officials also noted that because decisions on the use of hiring, recruitment, and retention flexibilities are made at the command level within the military departments, the military departments are better positioned to regularly monitor usage.
However, the military departments are not in a position to identify variations in usage rates across DOD's civilian acquisition workforce, which are significant. For example, we found that the Air Force and the Navy used direct hire authorities for 85 percent and 84 percent of their respective hiring actions in fiscal year 2018, while the Army used direct hire authorities for 42 percent of its hiring actions that year. Similarly, some career fields use the hiring flexibilities at higher rates than others. While hiring flexibilities comprised 95 percent of total civilian acquisition workforce hiring actions in fiscal year 2018, the auditing and purchasing career fields used hiring flexibilities for only 68 percent and 62 percent of their respective hiring actions that year. Without regularly monitoring usage rates for hiring flexibilities across the civilian acquisition workforce, HCI lacks visibility into these types of variations and opportunities to investigate and address them, as appropriate.

Lastly, HCI focuses its efforts on those recruitment and retention flexibilities funded by DAWDF because HCI is responsible for DAWDF's management. Based on DAWDF reports and DCPDS data, we found that the amount of dollars obligated for DAWDF-funded recruitment, retention, and recognition initiatives in fiscal year 2017 was $15 million, or about two-thirds of the total dollars authorized for the recruitment and retention flexibilities for DOD's civilian acquisition workforce that year. The remaining amount (about one-third) was funded by other sources, such as the military departments' operations and maintenance appropriations, but is not included as part of HCI's annual review.

Since 2002, we have repeatedly found that agencies should strategically manage their use of human capital flexibilities—including hiring, recruitment, and retention flexibilities—to address human capital challenges. Additionally, federal internal control standards state that an agency's management should obtain relevant data on a timely basis to effectively monitor activities to achieve objectives. Based on these standards, in May 2018, we recommended that DOD's defense laboratories routinely monitor data on their use of hiring authorities. DOD concurred with our recommendation and planned to determine the appropriate data to be collected and establish routine reporting requirements. Because HCI is not regularly reviewing hiring flexibility usage, it may be similarly missing opportunities to help identify challenges, inconsistencies, or needed improvements in using the flexibilities.

DOD Cannot Yet Accurately Report on How Long It Takes to Hire New Personnel or the Quality of New Hires

DOD leaders have repeatedly emphasized that hiring flexibilities—particularly direct hire authorities—can help the department hire high-quality talent in a more timely manner. We have previously found that time-to-hire and quality-of-hire are useful metrics that help agencies evaluate their hiring efforts, which can include the use of hiring flexibilities. To this end, DCPAS collects and reports time-to-hire data to measure DOD's progress in improving hiring practices. For example, according to DCPAS, from fiscal year 2016 through 2018, DOD took an average of 127 days to hire civilian personnel under the traditional hiring method, compared to an average of 110 days when using DOD direct hire authorities.
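A time-to-hire average like those above can be computed as the elapsed days between initiation of a request for personnel action and the employee's start date, grouped by hiring authority. The sketch below is a minimal illustration; the field names, dates, and authority labels are hypothetical, and it drops zero or negative durations the way DCPAS reports do (a data issue discussed further below).

```python
import pandas as pd

# Hypothetical personnel actions; real data come from DCPDS.
hires = pd.DataFrame({
    "authority":         ["traditional", "expedited_acq", "expedited_acq", "traditional"],
    "request_initiated": pd.to_datetime(["2017-01-05", "2017-02-01", "2017-03-15", "2017-06-01"]),
    "entry_on_duty":     pd.to_datetime(["2017-05-20", "2017-05-10", "2017-03-01", "2017-10-15"]),
})

hires["days_to_hire"] = (hires["entry_on_duty"] - hires["request_initiated"]).dt.days

# Drop zero or negative durations (requests initiated on or after the
# start date), mirroring DCPAS's practice of omitting such records.
valid = hires[hires["days_to_hire"] > 0]

print(valid.groupby("authority")["days_to_hire"].mean().round(0))
```

Note that the result depends heavily on when the request is considered "initiated," which is exactly the inconsistency described next.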
DCPAS also noted variations in time-to-hire across the direct hire authorities, reporting that DOD took anywhere from 77 to 111 days to hire civilian personnel using the 14 DOD direct hire authorities applicable to the civilian acquisition workforce. For the expedited hiring authority for acquisition positions—the direct hire authority used most frequently to hire civilian acquisition workforce personnel—DCPAS reported an average time-to-hire of 106 days from fiscal year 2016 through 2018.

While these time-to-hire metrics could be helpful in determining which direct hire authorities most effectively expedite the hiring process, HCI officials told us they do not use these metrics to inform management decisions for the civilian acquisition workforce because they are not yet consistently measured. DCPAS officials explained that the military departments and their major commands developed their own approaches for inputting and reporting time-to-hire data based on their individual needs and data systems. HCI and DCPAS officials acknowledged that this resulted in different ways to record and track the data, which in turn prevented meaningful comparisons between the time-to-hire metrics produced by each of the components. According to HCI and DCPAS officials, this difference, in part, is attributable to the variation in how DOD personnel input certain data. For example, one human resource specialist may initiate a request for a personnel action—which is generally the starting date for time-to-hire metrics—on the day the hiring manager submits a job description, while another human resource specialist may initiate a request for personnel action after the job announcement has been posted publicly. Moreover, our analysis of DCPAS data for all DOD civilian personnel hires from fiscal year 2014 to 2018 shows that about 36,000 of 420,000 personnel actions, or about 9 percent, were initiated on or after the individuals' start dates, producing a zero or negative time-to-hire figure. DCPAS officials told us they omit these figures when they report time-to-hire metrics. Until time-to-hire metrics are consistently measured DOD-wide, HCI will not be able to use these data to assess which direct hire authorities have most effectively expedited the hiring process or which DOD components have been the most successful in using these authorities, or to identify potential issues in using these authorities.

In September 2018, to address inconsistent time-to-hire methodologies across DOD, we recommended that the DOD Chief Management Officer require that all DOD human resource providers adopt consistent time-to-hire measures. DOD concurred with our recommendation, and in June 2019, DCPAS officials told us they were in the process of developing a plan to implement consistent time-to-hire metrics across the department. DCPAS officials anticipated completing the plan by July 2019 and starting implementation after DOD leadership approves the plan. HCI officials told us that they plan to use the time-to-hire metrics to assess the civilian acquisition workforce's hiring efforts, including the use of flexibilities, when the metrics are comparable.

Similarly, HCI officials told us that they cannot systematically assess quality-of-hire across the civilian acquisition workforce because DCPAS has not developed guidance that outlines how quality-of-hire should be measured. DOD's June 2018 civilian human capital operating plan outlines an initiative to improve the quality of civilian hires, among other things.
As part of this initiative, DCPAS is to establish quality-of-hire metrics using data collected from an OPM hiring satisfaction survey tool. Using the OPM survey, DOD’s hiring managers are to rate the performance of new employees 6 months after they are hired. DCPAS officials stated that various DOD components have used the survey since 2011, but acknowledged that hiring managers completed the survey for only 1 percent of all DOD hires in fiscal year 2018. In March 2019, USD (P&R) leadership issued a memorandum to DOD human capital offices encouraging wider implementation of the survey, including outlining roles and responsibilities and milestones for implementation. Starting in fiscal year 2020, USD (P&R) plans to set quality-of-hire goals using the fiscal year 2019 survey results and incorporating these into future civilian human capital operating plans. HCI officials told us that they plan to use the quality-of-hire metrics to evaluate the civilian acquisition workforce’s hiring efforts, including the use of flexibilities, once DCPAS completes its efforts.

Conclusions

Congress has provided DOD with a number of hiring, recruitment, and retention flexibilities to help the department manage its acquisition workforce. DOD leadership has encouraged the use of these flexibilities across the department in recent years, and usage has increased significantly since 2014. However, HCI does not regularly monitor defense components’ use of hiring, recruitment, and retention flexibilities for their civilian acquisition workforce to identify challenges, inconsistencies, or needed improvements in using these tools. As a result, HCI may be missing opportunities to develop strategies or inform efforts aimed at improving the usage of these flexibilities.

Recommendation for Executive Action

The Secretary of Defense should ensure that the Director of Human Capital Initiatives regularly monitors usage of hiring, recruitment, and retention flexibilities for the civilian acquisition workforce—across the military departments and acquisition career fields—to help develop strategies or inform efforts aimed at improving the usage of these flexibilities. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOD for review and comment. DOD provided written comments, which are reprinted in appendix VII, and concurred with our recommendation. In concurring with our recommendation, DOD stated it would provide guidance to DOD components to monitor usage of flexibilities and provide the results to HCI at least annually.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Under Secretary of Defense for Acquisition and Sustainment; the Under Secretary of Defense for Personnel and Readiness; the Director of the Defense Civilian Personnel Advisory Service; and the Director of Human Capital Initiatives. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Appendix I: Key Hiring Flexibilities Available to the Department of Defense’s Civilian Acquisition Workforce

Sections of Title 5 of the U.S.
Code include, among other things, requirements that agencies must follow to hire personnel, such as those associated with the competitive examining hiring authority. Competitive examining has been the traditional method of hiring for the federal government since 1978. The traditional hiring method requires agencies to notify the public that the government will accept applications for a job; screen applications against minimum qualification standards; apply selection priorities such as veterans’ preference, where applicable; and assess applicants’ relative competencies or knowledge, skills, and abilities against job-related criteria to identify the most qualified applicants. Hiring flexibilities were established beyond the traditional hiring method to expedite the hiring process and achieve certain public policy goals.

We identified 46 hiring flexibilities available to the Department of Defense’s (DOD) civilian acquisition workforce as of September 2018—3 specific to the DOD acquisition workforce; 6 specific to DOD’s Civilian Acquisition Workforce Personnel Demonstration Project; 20 available DOD-wide, including its acquisition workforce; and 17 available government-wide. Further, of the 46 hiring flexibilities, 14 are DOD direct hire authorities provided in statute that we have identified as being directly applicable to the DOD civilian acquisition workforce—3 of which are specific to the DOD acquisition workforce and 11 of which are available DOD-wide. Tables 7 through 10 provide additional information on each of the 46 hiring flexibilities and denote the 14 DOD direct hire authorities and their legal authorities.

Appendix II: Key Recruitment and Retention Flexibilities Available to the Department of Defense’s Civilian Acquisition Workforce

Recruitment and retention flexibilities assist federal agencies in attracting and retaining employees who possess unusually high or unique qualifications, or who fill essential needs for the agencies. Additionally, they allow agencies more control over compensation and are intended to help government compete with the private sector. We identified nine recruitment and retention flexibilities available to the Department of Defense’s (DOD) civilian acquisition workforce as of fiscal year 2018—four monetary incentives and five work-life balance policies and programs (see tables 11 and 12).
Appendix III: Objectives, Scope, and Methodology

Section 843 of the National Defense Authorization Act for Fiscal Year 2018 included a provision for us to review and report on the effectiveness of hiring and retention flexibilities for the Department of Defense’s (DOD) acquisition workforce, with a focus on its civilian acquisition workforce, including (a) the extent to which DOD experiences challenges with recruitment and retention of the acquisition workforce, such as post-employment restrictions; (b) a description of the hiring and retention flexibilities available to DOD to fill civilian acquisition positions and the extent to which DOD has used the flexibilities available to it to target critical or understaffed career fields; (c) the extent to which DOD has the necessary data and metrics on its use of hiring and retention flexibilities for the civilian acquisition workforce to strategically manage the use of such flexibilities; (d) an identification of the factors that affect the use of hiring and retention flexibilities for the civilian acquisition workforce; (e) recommendations for any necessary changes to the hiring and retention flexibilities available to DOD to fill civilian acquisition positions; and (f) a description of the flexibilities available to DOD to remove underperforming members of the acquisition workforce and the extent to which any such flexibilities are used.

This report: (1) provides information on DOD’s use of available hiring, recruitment, and retention flexibilities for its civilian acquisition workforce personnel from fiscal years 2014 to 2018; and (2) determines the extent to which DOD has monitored and assessed its use of hiring, recruitment, and retention flexibilities for its civilian acquisition workforce. In doing so, the report addresses elements (a) through (f) identified above.

To examine DOD’s use of available hiring, recruitment, and retention flexibilities for its civilian acquisition workforce personnel from fiscal years 2014 to 2018, we reviewed relevant statutes, reports, and DOD policies and guidance to identify hiring, recruitment, and retention flexibilities available to DOD’s civilian acquisition workforce. Based on our review, we identified the following hiring authorities: competitive examination, which we refer to as “the traditional hiring method,” and 46 alternatives to the traditional hiring method, which we refer to as “hiring flexibilities” for the purposes of our review. Appendix I provides additional information on these 46 hiring flexibilities. We also identified four monetary incentives and five work-life balance programs that DOD can use to recruit and retain civilian acquisition workforce personnel. We scoped our analysis to the four monetary incentives DOD can use to recruit and retain civilian acquisition workforce personnel—(1) recruitment bonuses, (2) retention bonuses, (3) relocation bonuses, and (4) student loan repayments—and collectively refer to these four incentives as “recruitment and retention flexibilities” for the purposes of our review. We focused our review on the four government-wide monetary flexibilities with personnel data in the Defense Civilian Personnel Data System (DCPDS), DOD’s central repository for civilian personnel transactions data, and required for submission to the Defense Manpower Data Center. Appendix II provides additional information on these four recruitment and retention flexibilities. We also analyzed personnel data from DCPDS.
We obtained DCPDS data on hiring actions from the Office of the Under Secretary of Defense (USD) for Personnel and Readiness (P&R) – Defense Manpower Data Center. We obtained DCPDS data on dollars authorized for recruitment and retention flexibilities from USD (P&R) – Defense Civilian Personnel Advisory Service (DCPAS). We also obtained acquisition workforce data for fiscal years 2014 through 2018 from DOD’s DataMart, a central repository of acquisition workforce data, from the Defense Manpower Data Center. We analyzed the DataMart data to determine which DOD civilian personnel were in DOD’s acquisition workforce at the end of each fiscal year, the military department or organization in which these personnel worked, and the career fields in which these personnel held positions.

For our analysis of hiring flexibilities, we included all hiring actions for the DOD civilian acquisition workforce with effective dates from fiscal year 2014 through 2018, except for actions with legal authority codes designated as transfers. We did not include hiring actions designated as transfers because they include hiring actions between military departments as well as transfers from outside of DOD. We excluded all of these transfer hiring actions because the data did not include enough information for us to distinguish between internal and external transfers. We identified 44,291 hiring actions for this 5-year period, and used the legal authority code data fields for each hiring action to determine the type of hiring authority or flexibility that DOD used. We analyzed DOD’s usage of hiring flexibilities from fiscal years 2014 through 2018 across each of DOD’s 14 acquisition career fields and the military departments.

Of the hiring flexibilities, we focused our analysis on DOD direct hire authorities because they comprised the single largest category of hiring authorities used by the DOD civilian acquisition workforce for hiring actions from fiscal year 2014 through 2018—26,385 of 44,291 DOD hiring actions, or 60 percent. DCPDS, however, did not include enough information for us to determine which specific direct hire authority DOD used for each hiring action. For these actions, the human resource specialists manually entered the details of the specific type of DOD direct hire authority they used in DCPDS. To determine the type of DOD direct hire authority used, two analysts independently reviewed each description and identified the appropriate DOD direct hire authority. For 360 of the 26,385 hiring actions (or 1.4 percent), the descriptions did not contain enough information for us to determine the specific DOD direct hire authority used. For the purposes of our analysis, we established three categories of hiring actions based on DOD’s designations in DCPDS (see table 13).

For our analysis of recruitment and retention flexibilities, we included all actions authorizing recruitment bonuses, retention bonuses, relocation bonuses, and student loan repayments for the DOD civilian acquisition workforce from fiscal year 2014 through 2018. We identified 13,643 authorization actions. We used the award amount data field for each authorization action to determine the amount of dollars authorized for these four types of incentives. We analyzed DOD’s usage of the recruitment and retention flexibilities from fiscal years 2014 through 2018 across each of DOD’s 14 acquisition career fields.
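To illustrate the categorization step described above, the sketch below sorts hiring actions into the three categories used in this review based on their legal authority codes and excludes transfer actions. The codes and record layout are hypothetical placeholders for illustration only; the actual DCPDS legal authority codes are not reproduced in this report.

```python
from collections import Counter

# Hypothetical legal authority codes; the real DCPDS codes are not listed here.
DIRECT_HIRE_CODES = {"DHA1", "DHA2"}
TRADITIONAL_CODES = {"COMP"}
TRANSFER_CODES = {"XFER"}  # excluded, because internal vs. external transfers
                           # could not be distinguished in the data

def categorize(code):
    if code in DIRECT_HIRE_CODES:
        return "DOD direct hire authority"
    if code in TRADITIONAL_CODES:
        return "traditional hiring method"
    return "other hiring flexibility"

# Each tuple: (legal_authority_code, career_field) -- illustrative records.
actions = [
    ("DHA1", "contracting"), ("COMP", "auditing"),
    ("DHA2", "engineering"), ("MISC", "purchasing"),
    ("XFER", "contracting"),
]

kept = [a for a in actions if a[0] not in TRANSFER_CODES]
by_category = Counter(categorize(code) for code, _ in kept)

total = sum(by_category.values())
for category, count in by_category.items():
    print(f"{category}: {count} of {total} ({count / total:.0%})")
```

Grouping the same categorized records by career field or military department, rather than in aggregate, yields the usage-rate comparisons reported in appendixes IV and V.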
We assessed data reliability by electronically testing these data, reviewing relevant data standards and guidance, and interviewing DCPAS and Defense Manpower Data Center officials. We determined that the data were sufficiently reliable for the purposes of reporting the frequency with which DOD’s civilian acquisition workforce used hiring, recruitment, and retention flexibilities for fiscal years 2014 through 2018. We also identified factors that affected DOD’s use of hiring, recruitment, and retention flexibilities for its civilian acquisition workforce by reviewing DCPAS and military departments’ policies and guidance for using human capital flexibilities, including implementation of 14 DOD direct hire authorities provided in statute, and efforts by DCPAS to improve DOD’s use of the flexibilities.

To determine the extent to which DOD has monitored and assessed its use of hiring, recruitment, and retention flexibilities for its civilian acquisition workforce, we reviewed acquisition workforce human capital plans from the Office of Human Capital Initiatives (HCI) within USD for Acquisition and Sustainment (A&S); acquisition workforce plans from the Air Force, the Army, and the Navy; and data and metrics collected by HCI and DOD’s four Directors for Acquisition Career Management (DACM)—one for each of the military departments and a fourth for the defense agencies and field activities outside the military departments. We assessed DOD’s efforts against our key practices for effectively managing human resource flexibilities and federal internal control standards, including that management should use quality information to achieve the entity’s objectives. We also reviewed reports by the Advisory Panel on Streamlining and Codifying Acquisition Regulations—commonly referred to as the Section 809 Panel after the legislative provision that required the Secretary of Defense to establish an advisory panel on streamlining acquisition regulations—and interviewed Section 809 Panel commissioners to supplement our analysis.

For both objectives, we interviewed officials from HCI, the office responsible for DOD-wide acquisition workforce strategic planning; DCPAS, the office responsible for developing DOD’s civilian human resources policies and programs; the Defense Manpower Data Center, the office responsible for collecting and maintaining DOD’s civilian personnel data; the Directors for Acquisition Career Management (DACM) for each military department and the Fourth Estate, which is responsible for the 30 defense agencies and field activities outside the military departments; the Air Force Personnel Center; Army’s Civilian Human Resources Agency; Navy’s Office of Civilian Human Resources; and the command within each military department that had the largest number of civilian acquisition workforce personnel in fiscal year 2018: Air Force Materiel Command, Army Combat Capabilities Development Command, and Naval Sea Systems Command. We also interviewed officials from the Defense Contract Management Agency, which had the largest number of civilian acquisition personnel of the other defense agencies with acquisition workforce personnel. Collectively, these four organizations comprised about 38 percent of DOD’s civilian acquisition workforce in fiscal year 2018.
We also interviewed personnel from the Office of Personnel Management (OPM), which is responsible for developing and promulgating government-wide human capital policies; and personnel from the Society for Human Resource Management, the world’s largest human resources membership group, who were familiar with metrics used by the private sector to monitor hiring and retention efforts.

We conducted this performance audit from August 2018 to August 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix IV: The Department of Defense’s Usage of Hiring Flexibilities for the Acquisition Workforce by Career Field

Section 843 of the National Defense Authorization Act for Fiscal Year 2018 included, among other things, a provision for us to review the extent to which the Department of Defense (DOD) has used hiring flexibilities available to it to target critical or understaffed career fields. In its December 2015 memo on using the expedited hiring authority for certain defense acquisition workforce positions, DOD designated 12 of the 14 acquisition workforce career fields as shortage or critical needs categories. We identified 44,291 DOD hiring actions from fiscal year 2014 to 2018 for the civilian acquisition workforce, and categorized them as (1) traditional hiring method actions, (2) actions using DOD direct hire authorities, or (3) other hiring flexibilities (see appendix I for additional information on these hiring flexibilities, including DOD direct hire authorities). We further categorized the hiring actions by DOD’s 14 acquisition workforce career fields and ordered the career fields by total number of hiring actions in fiscal year 2018. Figures 6, 7, 8, and 9 provide data on the use of hiring flexibilities for each of the 14 acquisition workforce career fields.

Appendix V: The Department of Defense’s Usage of Hiring Flexibilities for the Acquisition Workforce by Military Department

We identified 44,291 Department of Defense (DOD) hiring actions from fiscal year 2014 to 2018 for the civilian acquisition workforce, and categorized them as (1) traditional hiring method actions, (2) actions using DOD direct hire authorities, or (3) other hiring flexibilities (see appendix I for additional information on these hiring flexibilities, including DOD direct hire authorities). We further categorized the hiring actions by military departments and defense agencies. Figure 10 provides data on the use of hiring flexibilities for each of the military departments and the Fourth Estate, which is responsible for the 30 defense agencies and field activities outside the military departments.

Appendix VI: The Department of Defense’s Usage of Recruitment and Retention Flexibilities for the Acquisition Workforce

Section 843 of the National Defense Authorization Act for Fiscal Year 2018 included, among other things, a provision for us to review the extent to which the Department of Defense (DOD) has used retention flexibilities available to it to target critical or understaffed career fields.
In its December 2015 memo on using the expedited hiring authority for certain defense acquisition workforce positions, DOD designated 12 of the 14 acquisition workforce career fields as shortage or critical needs categories. We identified $123.9 million authorized in recruitment and retention flexibilities for DOD’s civilian acquisition workforce from fiscal year 2014 to 2018, and categorized them as (1) recruitment bonuses, (2) relocation bonuses, (3) retention bonuses, and (4) student loan repayments (see appendix II for additional information on these recruitment and retention flexibilities). We further categorized the recruitment and retention flexibilities by DOD’s 14 acquisition workforce career fields and ordered the career fields by total dollars authorized by DOD. See figures 11 through 14 below.

Appendix VII: Comments from the Department of Defense

Appendix VIII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Nathan Tranquilli (Assistant Director), Claire Li (Analyst-in-Charge), TyAnn Lee, and Ashley Rawson made key contributions to this report. Lorraine Ettaro, Christopher Falcone, Lori Fields, Cynthia Grant, Laura Greifner, and Sylvia Schatz also contributed to this report.
Why GAO Did This Study

DOD spends over $300 billion annually on contracts for products and services such as major weapons systems and military support services. By awarding and overseeing these contracts, DOD's acquisition workforce plays a critical role in maximizing DOD's buying power. DOD has increased the size of its acquisition workforce in recent years, but has also faced a number of challenges hiring and retaining personnel. DOD has a number of human capital flexibilities that help DOD in hiring, recruiting, and retaining its civilian acquisition workforce.

The National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to review DOD's implementation of human capital flexibilities for its acquisition workforce. This report: (1) provides information on DOD's use of human capital flexibilities and (2) determines the extent to which DOD has monitored and assessed its use of these flexibilities. GAO reviewed relevant statutes and DOD policies, guidance, and acquisition workforce plans; analyzed DOD's fiscal year 2014-2018 civilian acquisition workforce personnel data; and interviewed DOD officials.

What GAO Found

The Department of Defense (DOD) has used human capital flexibilities extensively to hire, recruit, and retain its civilian acquisition workforce. Since 2014, usage rates for hiring flexibilities—alternatives to the traditional, competitive hiring process—have generally increased. DOD leadership has encouraged its hiring personnel to use these flexibilities, such as direct hire authorities, to reduce the length of the hiring process. From fiscal year 2014 to 2018, DOD used hiring flexibilities for 90 percent of its approximately 44,000 civilian acquisition workforce hiring actions (see figure). DOD also increased its use of recruitment and retention flexibilities for its civilian acquisition workforce, increasing the dollar amount authorized from $13.9 million in fiscal year 2014 to $33.7 million in fiscal year 2018. This increase came as DOD leadership emphasized the benefits of these flexibilities, and oversaw concerted efforts to increase their usage through the dissemination of information to human resource specialists.

While usage of human capital flexibilities has increased, DOD's Office of Human Capital Initiatives (HCI), which is responsible for DOD-wide acquisition workforce strategic planning, does not regularly monitor or assess how the department uses these flexibilities. HCI regularly monitors the overall health of the acquisition workforce, including by reviewing workforce metrics on a quarterly basis, but does not regularly monitor the military departments' use of human capital flexibilities. For example, GAO found the Air Force and Navy used direct hire authorities twice as often as the Army in fiscal year 2018. Without efforts to gain such insights through monitoring, HCI may be missing opportunities to identify challenges, inconsistencies, or needed improvements in using these tools. With regard to assessing the use of human capital flexibilities, HCI intends to study how long it takes to hire personnel when using the flexibilities. According to DOD officials, this analysis can begin following development of a plan to ensure that defense components consistently collect data on hiring timeframes. DOD officials said they expect to issue this plan in 2019.
What GAO Recommends

GAO recommends that HCI regularly monitor DOD's use of human capital flexibilities for its civilian acquisition workforce to help identify challenges, inconsistencies, or needed improvements in using these tools. DOD concurred with the recommendation.
DHS Has Made Important Progress in Strengthening Its Management, but Considerable Work Remains

DHS Has Met 3 of 5 Criteria for Removal from the High-Risk List

DHS’s efforts to strengthen and integrate its acquisition, IT, financial, and human capital management functions have resulted in the department meeting 3 out of 5 criteria for removal from the High-Risk List—leadership commitment, action planning, and monitoring progress. DHS has partially met the remaining two criteria—capacity and demonstrated, sustained progress, as shown in figure 1.

With regard to leadership commitment, DHS’s top leadership, including the Secretary and Deputy Secretary of Homeland Security, has continued to demonstrate commitment and support for addressing the department’s management challenges. They have also taken actions to institutionalize this commitment to help ensure the long-term success of the department’s efforts. One such effort is the Under Secretary for Management’s Integrated Priorities initiative to strengthen the integration of DHS’s business operations across the department. During monthly leadership meetings with the Under Secretary for Management, the department’s Chief Executive Officers have been providing status updates on their respective actions to address this high-risk designation. Furthermore, top DHS leaders, such as the Under Secretary for Management and the department’s Chief Executive Officers, routinely meet with GAO management to discuss progress on high-risk areas.

With regard to having an action plan and monitoring effectiveness, in January 2011, DHS produced its first Integrated Strategy for High-Risk Management and has issued 14 updated versions, most recently in September 2018. The September 2018 strategy describes DHS’s progress to date, planned corrective actions to further strengthen its management functions, and includes performance measures to monitor key management initiatives. DHS’s Management Directorate leads this ongoing effort, and DHS’s strategy and approach, if effectively implemented and sustained, provides a path for DHS to be removed from our High-Risk List.

DHS has partially met the criteria for capacity but needs to make additional progress identifying and allocating resources in certain areas—namely acquisition, IT, and financial management—to fully demonstrate its capacity. DHS has analyzed components’ acquisition program staffing assessments but has yet to conduct an in-depth analysis across components or develop a plan to address any gaps. With regard to IT staffing, DHS has not fully identified or reported to Congress or the Office of Personnel Management (OPM) on its department-wide cybersecurity specialty areas of critical needs, such as cybersecurity management or incident response, as required by law. Additionally, DHS’s financial statement auditor has identified several capacity-related issues, including resource limitations and inadequate management and staff training, as causes for the material weaknesses reported.

The final criterion is demonstrated progress, which remains partially met. In 2010, we identified, and DHS agreed, that achieving 30 specific outcomes in the areas of acquisition management, IT management, financial management, human capital management, and management integration would be critical to addressing the department’s management challenges. As such, these 30 outcomes became the key criteria by which we gauge DHS’s demonstrated progress.
We reported in March 2019 that DHS has fully addressed 17 of the 30 needed outcomes, mostly addressed four, partially addressed six, and initiated actions to address the remaining three, as shown in table 1. In the last 2 years, DHS has made particular progress in the areas of human capital and IT management. Specifically, since 2017 DHS has taken steps to fully address 4 outcomes. The department fully addressed two key human capital outcomes by (1) demonstrating that components are basing hiring decisions and promotions on human capital competencies and (2) strengthening employee engagement efforts. In addition, in the last 2 years DHS has fully addressed two IT outcomes by (1) providing ongoing oversight and support to troubled IT investments to help improve their cost, schedule, and performance; and (2) demonstrating significant progress in implementing its IT strategic workforce planning initiative.

Important progress and remaining work in all of the five key areas include:

Acquisition management. DHS continues to face challenges in funding its acquisition portfolio. In May 2018, we found that recent enhancements to DHS’s acquisition management, resource allocation, and requirements policies largely reflect key portfolio management practices. However, we also found that of the 24 major acquisition programs we assessed with approved schedule and cost goals, 10 were on track to meet those goals during 2017—a decrease from 2016. In addition, we found that DHS’s portfolio of major acquisition programs was not affordable from fiscal years 2018 to 2022 because the planned costs exceeded the planned budget. DHS has taken steps to strengthen acquisition requirements development across the department, such as reestablishing the Joint Requirements Council in June 2014 to review and validate DHS acquisition requirements. However, opportunities remain to further strengthen DHS’s acquisition process by, for example, using the Joint Requirements Council to (1) identify overlapping or common requirements and (2) make recommendations to senior leadership to help ensure that DHS uses its finite investment resources wisely and maintains a balanced portfolio of investments that combine near-term operational improvements with long-term strategic planning.

IT management. DHS has updated its approach for managing its portfolios of IT investments across all components. As part of the revised approach, the department is using its capital planning and investment control process and the Joint Requirements Council to assess IT investments across the department on an ongoing basis. For example, as part of its capital planning process for the fiscal year 2020 budget, the Office of the Chief Information Officer worked with the components to assess each major IT investment to ensure alignment with DHS’s functional portfolios, and to identify opportunities to share capabilities across components. This updated approach should enable DHS to identify potentially duplicative investments and opportunities for consolidating investments, as well as reduce component-specific investments. Additionally, DHS has continued to take steps to enhance its information security program. In November 2018, the department’s financial statement auditor reported that DHS had made progress in correcting its prior year IT security weaknesses. However, for the 15th consecutive year, the auditor designated deficiencies in IT systems controls as a material weakness for financial reporting purposes.
Work also remains in implementing our six open recommendations concerning DHS’s cybersecurity workforce assessment requirements. DHS also faces challenges in fulfilling its pivotal role in government-wide cybersecurity efforts, as identified in our Ensuring the Cybersecurity of the Nation high-risk area. DHS has established the National Cybersecurity and Communications Integration Center, which functions as the 24/7 cyber monitoring, incident response, and management center for the federal civilian government. However, DHS has continued to be challenged in measuring how the center is performing its functions in accordance with mandated implementing principles.

Financial management. DHS received a clean audit opinion on its financial statements for 6 consecutive years—fiscal years 2013 to 2018. However, in fiscal year 2018, its auditor reported two material weaknesses in the areas of financial reporting and information technology controls and financial systems, as well as instances of non-compliance with laws and regulations. These deficiencies hamper DHS’s ability to provide reasonable assurance that its financial reporting is reliable and the department is in compliance with applicable laws and regulations. Further, DHS components’ financial management systems and business processes need to be modernized; the current systems affect the department’s ability to have ready access to reliable information for informed decision-making. As we reported in 2017, DHS officials have faced various challenges in their efforts to address this—lack of sufficient resources, aggressive schedule, complex requirements, and increased costs. Effectively modernizing financial management systems for the Coast Guard, Federal Emergency Management Agency, and Immigration and Customs Enforcement would help address DHS’s risk in this area.

Human capital management. DHS has continued to strengthen its employee engagement efforts by implementing our 2012 recommendation to establish metrics of success within components’ action plans for addressing its employee satisfaction problems. Further, DHS has conducted audits to better ensure components are basing hiring decisions and promotions on human capital competencies. OPM’s 2018 Federal Employee Viewpoint Survey data showed that in the past 2 years, DHS’s score on the Employee Engagement Index increased by 4 points—from 56 in 2016 to 60 in 2018—which was 1 point more than the government-wide increase over the same period. While this improvement is notable, DHS’s 2018 score ranked 20th among 20 large and very large federal agencies. Increasing employee engagement and morale is critical to strengthening DHS’s mission and management functions.

Management integration. Since 2015, DHS has focused its efforts to address crosscutting management challenges through the establishment and monitoring of its Integrated Priorities initiative. The department updated these priorities in September 2017. Each priority includes goals, objectives, and measurable action plans that are discussed at monthly leadership meetings led by senior DHS officials, including the Under Secretary for Management. DHS needs to continue to demonstrate sustainable progress integrating its management functions within and across the department.

What Remains to be Done

In closing, it is clear that significant effort is required to build and integrate a department as large and complex as DHS, which has grown to more than 240,000 employees and approximately $74 billion in budget authority.
Continued progress for this high-risk area depends primarily on addressing the remaining outcomes. In the coming years, DHS needs to continue implementing its Integrated Strategy for High-Risk Management to show measurable, sustainable progress in implementing corrective actions and achieving outcomes. In doing so, it remains important for DHS to maintain its current level of top leadership support and sustained commitment to ensure continued progress in executing its corrective actions through completion; continue to identify the people and resources necessary to make progress towards achieving outcomes, work to mitigate shortfalls and prioritize initiatives as needed, and communicate to senior leadership critical resource gaps; continue to implement its plan for addressing this high-risk area and periodically provide assessments of its progress to us and Congress; closely track and independently validate the effectiveness and sustainability of its corrective actions, and make midcourse adjustments as needed; and make continued progress in achieving the 13 outcomes it has not fully addressed and demonstrate that systems, personnel, and policies are in place to ensure that progress can be sustained over time.

We will continue to monitor DHS’s efforts in this high-risk area to determine if the outcomes are achieved and sustained over the long term. Madam Chairwoman Torres Small, Ranking Member Crenshaw, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this work include Claudia Becker, Assistant Director; Imoni Hampton, Analyst-in-Charge; Michele Fejfar, Melissa Greenaway, James Lawson, and Tom Lombardi. Key contributors for the previous work that this is based on are listed in each product.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

GAO has regularly reported on government operations identified as high-risk because of their increased vulnerability to fraud, waste, abuse, and mismanagement, or the need for transformation to address economic, efficiency, or effectiveness challenges. In 2003, shortly after the department was formed, we designated Implementing and Transforming DHS as a high-risk area to the federal government because DHS had to transform 22 agencies into one department, and failure to address associated risks could have serious consequences for U.S. national security. In 2013, we reported that although challenges remained for DHS, the department had made considerable progress. As a result, we narrowed the scope of the high-risk area to focus on strengthening DHS management functions (human capital, acquisition, financial management, and information technology).

This statement discusses DHS's progress and remaining actions needed to strengthen and integrate its management functions. This statement is based on our 2019 high-risk update and other reports issued from February 2017 through March 2019. Among other things, GAO analyzed DHS strategies and other documents related to the department's efforts to address its high-risk areas.

What GAO Found

As GAO reported in its 2019 high-risk update, the Department of Homeland Security (DHS) has continued its efforts to strengthen and integrate its acquisition, information technology, financial, and human capital management functions. As a result, the department has continued to meet three out of five criteria for removal from GAO's High-Risk List (leadership commitment, action plan, and monitoring) and partially meet the remaining two criteria (capacity and demonstrated progress).

With regard to leadership commitment, DHS's top leadership has continued to demonstrate support for addressing the department's management challenges through, for example, its Integrated Priorities initiative to strengthen the integration of DHS's business operations across the department. Additionally, DHS has established an action plan for addressing the high-risk area and has issued 14 updated versions since 2011. This action plan also demonstrates DHS's ongoing monitoring of these efforts, as it describes DHS's progress to date and planned corrective actions.

The two key areas where additional work is needed are DHS's capacity and demonstrated progress. With regard to capacity, DHS needs to make additional progress identifying and allocating resources in the areas of acquisition, information technology, and financial management. With regard to demonstrated progress, DHS should show the ability to achieve sustained improvement across 30 outcomes that GAO identified and DHS agreed were needed to address the high-risk area. GAO found in its 2019 high-risk update that DHS fully addressed 17 of these outcomes, while work remains to fully address the remaining 13. DHS has made some progress in recent years regarding human capital and information technology outcomes, but needs to continue implementing its action plan to show measurable, sustainable progress in achieving the 13 outcomes not yet fully addressed.

What GAO Recommends

Since 2003, GAO has made about 2,800 recommendations to DHS to strengthen its management efforts, among other things.
DHS has implemented more than 75 percent of these recommendations, which have strengthened program management and performance measurement.
Background

The Commercial Space Transportation Industry

Space transportation is the movement of objects, such as satellites and vehicles carrying cargo, scientific payloads, or passengers, to or from space. In the United States, commercial space transportation is carried out using orbital and suborbital launch vehicles owned and operated by private companies. Key parties involved in commercial space transportation activities include:

The commercial launch provider—the entity that conducts the launch of a vehicle and the payload it carries.

The launch customer—the entity that pays the launch provider to carry a payload into space. Customers include the U.S. government—which has not operated its own launch vehicles since the retirement of the Space Shuttle in 2011 and primarily relies on commercial launch providers to, among other things, resupply the International Space Station, launch satellites, and carry out national security and defense missions. Customers also include private companies, such as satellite owners, and researchers.

The launch site operator—the entity that hosts the launch (or reentry, or both) of the launch vehicle from its launch site. Almost all launch site operators are either commercial launch providers or state or municipal government entities.

The U.S. share of the global commercial space transportation market has grown in recent years. For example, according to FAA, 64 percent of the 33 worldwide commercial orbital launches in 2017 occurred at U.S. launch sites, up from about 48 percent in 2014. Commercial launch providers currently use, and are developing for future use, a variety of vehicles to launch payloads. Historically, launch providers have carried payloads into orbit using vertically launched expendable launch vehicles—those vehicles that launch only once. In more recent years, a launch provider, SpaceX, has introduced launch vehicles that can be reused for multiple launches, such as Falcon 9 and Falcon Heavy, where one part or all of the launch vehicle returns to a landing pad, either on land or on a converted barge offshore, after the payload is launched into orbit. Commercial launch providers are also moving toward reusable suborbital launch vehicles, some intended for human space tourism. These vehicles include horizontal hybrid suborbital launch vehicles, such as Virgin Galactic’s SpaceShipTwo, and vertical reusable suborbital launch vehicles, such as Blue Origin’s New Shepard. Figure 1 depicts examples of expendable and reusable vertical launch vehicles.

Launch site infrastructure, and those who own and operate it, also varies across individual launch sites. The type of infrastructure and its design depends on the type of operations that the launch site supports. For example, some launch sites may have a launch pad for vertical launches but not a runway for horizontal launches; others may have infrastructure specifically to support launch vehicle reentry operations. While many different types and designs exist, figure 2 below shows a few examples of major pieces of launch site infrastructure.

AST’s Roles and Organizational Structure

Within FAA, AST is responsible for regulatory oversight of the commercial space transportation industry. AST’s primary means of oversight is licensing or permitting commercial launch and reentry vehicle operations and non-federal launch sites, as well as conducting safety inspections of licensed launch providers and site operators.
AST is organized into three management and support offices, including the Office of the Associate Administrator, and five operational divisions—responsible for the majority of AST’s primary mission areas, such as licensing and overseeing launches. In addition, the FAA Reauthorization Act of 2018, signed into law in October 2018, requires that AST develop an Office of Spaceports. According to FAA officials, as of May 2019, the size and design of this office have not yet been finalized. AST’s workforce size is expected to increase to help accommodate anticipated growth in the industry and AST’s workload (see table 1). As of February 2019, AST had 104 full-time equivalent positions and an operations budget of about $25 million—an increase of 25 full-time equivalent positions and about $8 million since fiscal year 2015.

Launch Licensing Regulations

FAA requires launch providers conducting a launch or reentry within U.S. borders to obtain a license or permit, as well as those conducting a launch or reentry abroad, if the launch provider is a U.S. entity. FAA considers a commercial launch to be one in which the contract for the main payload’s launch was open to international competition or the launch was privately financed without government support. FAA also requires, with some exceptions, a site operator’s license, which authorizes an entity, such as a state or local government, to host commercial space launch operations from a specific launch site. FAA is to conduct safety inspections of licensed commercial space transportation launch operations, which involves monitoring of pre-operational, operational, and post-operational activities.

In February 2018, the National Space Council recommended that DOT update the regulations on launch and reentry licensing to better accommodate changes that have occurred in the industry. The White House subsequently directed DOT to publish a proposed rule by February 1, 2019, with a revised framework that allows more flexibility in how companies can meet the regulatory requirements. DOT published a notice of proposed rulemaking for the revisions to its licensing regulations in April 2019.

Funding for Infrastructure at Active U.S. Commercial Launch Sites Has Shifted from Federal to State, Local, and Private Sources

Federally Funded Construction

Around the mid-20th century, the federal government began constructing the infrastructure that supports the majority of commercial orbital space launches today. The Department of Defense (DOD) constructed launch sites to support ballistic missile testing and satellite launches, including sites that are now home to Cape Canaveral Air Force Station in Florida and Vandenberg Air Force Base in California. Those sites conducted their first test launches in 1950 and 1958, respectively. The National Aeronautics and Space Administration (NASA) was created in 1958, and began acquiring land adjacent to Cape Canaveral Air Force Station in 1962 to support its human spaceflight lunar program; this land is now home to the Kennedy Space Center.

In recent years, nearly all FAA-licensed launches in the United States occurred at three federal ranges, which were originally built by the federal government (see fig. 3). All 61 of the FAA-licensed commercial orbital launches from 2015 through 2018 occurred at launch sites that are on or co-located with federal ranges, with 44 of the 61 launches taking place at Cape Canaveral Air Force Station and Kennedy Space Center (collectively referred to as “Cape Canaveral”).
In addition, one of the 11 licensed commercial suborbital launches occurred at a launch site co-located with a federal range.

While the federal government made the initial infrastructure investment at federal ranges, the launch complexes used for commercial launch operations at these sites are now operated under use agreements by non-federal entities, such as state governments or commercial launch providers. For example, four of the launch complexes at Cape Canaveral are operated by commercial launch providers, while two others are operated by the State of Florida. Two other federal ranges have launch pads that are also operated by non-federal entities—Vandenberg Air Force Base in California and the Mid-Atlantic Regional Spaceport, which is co-located with NASA’s Wallops Flight Facility in Virginia. The Air Force and NASA generally still have responsibility for maintaining common-use infrastructure—that is, infrastructure that may be shared by multiple users, such as access roads and fuel pipelines. As part of the operators’ use agreements (the details of which vary depending on the launch site and launch site operator), however, funding for improvements to infrastructure used solely by that site operator is generally left to the site operator. This arrangement is in part because the infrastructure improvements are necessary to support the unique needs of specific commercial launch vehicles using those sites.

At another launch site, the federal government followed a different infrastructure investment model. In the 1990s, the Air Force partnered with the state of Alaska to help fund the construction of a state-owned site to support federal government launches and missile testing rather than constructing a new federal range. This site, known as the Pacific Spaceport Complex – Alaska, conducted its first government launch in 1998. Major infrastructure includes two launch pads with shared vehicle integration and transfer facilities. According to spaceport officials at this site, in addition to government launches, Alaska Aerospace, a state entity that operates the site, has contracts with three commercial launch providers, which anticipate conducting commercial orbital launches there in the future. Appendix II provides additional information on launch sites co-located with federal ranges, as well as funding sources and characteristics for other U.S. commercial launch sites.

State and Local Government Funding

While the federal government has not directly funded the construction of infrastructure at launch sites in recent years, state and local governments have done so. According to interviews we conducted and our review of publicly available documents of state-government entities that were formed to promote space-related development, state and local governments are investing in infrastructure to obtain the economic benefits of attracting space-related businesses to their areas. In two cases, state governments became operators of launch sites co-located with federal ranges and invested in infrastructure improvements at those sites to support commercial orbital launch vehicles.

The Commonwealth of Virginia—through Virginia Commercial Space Flight Authority, an independent state entity created in part to develop and promote Virginia’s commercial space transportation industry—invested $90 million in improvements to a launch pad at the Mid-Atlantic Regional Spaceport.
This represented a share of the total costs, which were shared by Northrop Grumman Innovation Systems, a commercial launch provider that has an agreement to use the pad for commercial launches, including cargo resupply missions to the International Space Station.

The State of Florida—through Space Florida, an independent special district that serves the state’s space-related needs—has provided over $140 million in infrastructure investments. Those investments upgraded launch pads and the supporting infrastructure at Cape Canaveral, as well as provided grants matched by commercial launch providers for improvements to infrastructure used by those providers.

In other cases, state and local governments have invested in wholly new commercial launch sites or are adapting existing airport infrastructure to use as launch sites. According to these launch site operators, these sites are currently used for suborbital launches but could support orbital launches in the future.

The state of New Mexico funded the construction of the commercial launch site known as Spaceport America through $225 million in state appropriations and local taxes in two counties. The state also has a 20-year lease agreement with Virgin Galactic, which plans to conduct commercial suborbital space tourism launches from the site. This launch site, with its 12,000-foot-by-200-foot runway, hosted one FAA-licensed suborbital test launch in 2018.

In California, the Mojave Air and Space Port (Mojave) is a general aviation airport that obtained an FAA license to conduct commercial suborbital launches in 2004. In addition to continuing its general aviation operations, Mojave currently provides a runway and mission preparation area to commercial launch providers testing vehicles designed for orbital and suborbital launches. This site hosted three FAA-licensed suborbital test flights in 2018. According to a representative from Mojave, the site generally funds infrastructure maintenance with rents and user fees, while launch providers build their own facilities. In July 2018, Mojave also received a $1.4 million grant through FAA’s Airport Improvement Program for the purpose of extending an airport taxiway. According to a Mojave representative, the location of the taxiway extension will be available for hangar development by both aviation and commercial space users on a first-come, first-served basis. The project was completed in April 2019.

Private Funds

Commercial launch providers fund infrastructure improvements at existing launch sites—both co-located with federal ranges and elsewhere—to ensure the sites are tailored to their unique launch vehicles. For example, under its agreements to use launch pads at the federal ranges at Cape Canaveral and Vandenberg Air Force Base, SpaceX representatives told us they invested “hundreds of millions” of dollars in new infrastructure and infrastructure improvements, such as constructing new liquid fuel lines and improving launch pad cooling systems. According to SpaceX representatives, the company made these investments to support the specific needs of its launch vehicles and the rapid pace at which it is currently launching. Virgin Galactic and Stratolaunch—two other commercial launch providers developing suborbital and orbital launch vehicles, respectively—funded the construction of hangars and testing facilities for their launch vehicles at Mojave Air and Space Port.
Three of the seven commercial launch providers that we spoke with constructed or are currently constructing new launch sites for their exclusive use. Representatives from two of them said doing so allows them to schedule launches without having to compete with other launch providers at existing launch sites. Two of these commercial launch providers also told us they had not received any government funding for these sites, while the third told us it had received some support from the state government where the site is located.

As the commercial space transportation industry continues to evolve, it may lead to more investments in launch sites that are not currently supporting commercial orbital launches. For example, some commercial launch providers are developing launch vehicles consisting of a rocket launched from an airplane in flight, enabling launches from runways rather than launch pads. This could change how and which entities fund launch site infrastructure.

Launch Customers in Our Review Consider the Launch Provider’s Capabilities and Price, among Other Factors, When Deciding Where to Launch

Commercial space transportation is a global industry. We identified seven countries, including the United States, that have launch providers with the capability to support an orbital launch of a commercial payload (see fig. 4). In 2017, 7 of the 22 FAA-licensed launches conducted in the United States contained a payload from a non-U.S. launch customer, including several communications satellite operators and one civilian space agency, according to FAA. Similarly, some U.S. launch customers we interviewed said they have used non-U.S. launch providers.

According to representatives of the seven domestic and non-U.S. companies we interviewed that use launch services for placing their products into Earth orbit or other trajectories, several factors influence their selection of a launch provider. Many of these representatives acknowledged that as part of their business decision, a prerequisite is that the launch provider’s vehicle and launch site must have the capabilities to meet the customer’s mission requirements, such as having the capability to bring the payload to the desired orbit at the desired time. That capability, in turn, depends on factors such as the lift capacity of a provider’s launch vehicle—which dictates the maximum weight the vehicle can carry—and the geographic locations of its launch sites. For example, launch vehicles operating from sites closer to the equator can place payloads into certain orbits using less fuel due to Earth’s rotational velocity (at the equator, Earth’s surface moves eastward at roughly 465 meters per second, and a site’s share of that free velocity falls off with the cosine of its latitude). The direction a launch vehicle can travel from a launch site also affects the orbits into which the vehicle can most efficiently place a payload. For example, Vandenberg Air Force Base in California—which allows launch vehicles to travel west over the Pacific Ocean—is more efficient for certain orbits, while Cape Canaveral—which allows vehicles to travel east over the Atlantic Ocean—is more efficient for others.

Beyond selecting a launch provider that has capabilities to meet a launch customer’s mission requirements, six of the seven launch customers we spoke with said the price of a launch is a key deciding factor. For example, a representative from a domestic small satellite operations company told us that the company achieved significant savings by procuring a series of launches from its selected provider.
According to the representative, using a different launch provider would have cost almost twice as much—a price that would have forced the company to delay its launch plans. According to data published in FAA's Annual Compendium of Commercial Space Transportation: 2018, there is wide variation in the commercial price of launches worldwide, ranging from an estimated $62 million to $178 million per launch. The exact price paid for many launches is considered proprietary by both launch customers and commercial launch companies, and is therefore not reported publicly. Moreover, price can be affected by the size and weight of the payload, the orbit to be reached, and other mission-related factors. As a result, direct comparison of launch prices is difficult.

In addition to price, a launch provider's availability and reliability are also key factors, according to launch customers we spoke with. Six of the launch customers we spoke with mentioned availability—the customer's ability to reserve a place on the launch provider's launch schedule—as a key factor. For example, a representative from a domestic small satellite operations company said it can be difficult to find available launches in the United States because the company relies on sharing launches with larger payloads, and few U.S. launches travel to the company's desired orbit. As a result, the company has procured launch services from Indian and Russian launch providers. Five launch customers mentioned reliability—generally a launch vehicle's history of successful launches—as a key factor, in part due to the financial impact of a failed launch. For example, a representative from a non-U.S.-based satellite operations company said that in the event of a failed launch, insurance would generally cover the cost of the lost payload, but not lost revenue that would have been generated by the payload in orbit.

Some launch customers noted that choosing a launch provider is a complex decision, and that the key factors they consider can be interdependent. For example, the representative from the non-U.S.-based satellite operations company said that while a launch provider may offer a lower price on a less reliable vehicle, the lack of reliability could increase the customer's payload insurance costs, effectively increasing the launch price. A representative from a company seeking to launch into deep space told us they would only consider a provider that is not only reliable but also has years of successful operations and a proven business plan.

DOT Published a Proposed Rule in April 2019 but Related Rulemaking Activities Affect When Regulatory Changes Will Be in Full Effect

FAA Accelerated Plans to Streamline Regulations to Respond to a Presidential Directive

According to FAA officials, FAA has been considering changes in its licensing regulations since 2015 and recently has accelerated these efforts. Initially, FAA had been taking an iterative approach by first making "quick wins"—that is, making administrative changes or straightforward regulatory revisions—with a long-term goal of fully consolidating and streamlining the regulations over a period of several years. FAA's approach changed, however, when in May 2018 a Presidential Directive was issued that addressed both the timing and content of FAA's regulatory updates. The directive contained a deadline to publish a proposed regulation for public comment by February 1, 2019.
It also directed the Secretary of Transportation to replace the current prescriptive regulations for commercial space launch licensing—in which a certain technology or action is required—with a regulatory framework that is performance-based—in which applicants have flexibility in how they achieve required outcomes, such as a specific level of safety. In response to this directive, DOT published a notice of proposed rulemaking (NPRM) in April 2019 to solicit comments on a proposed rule that would incorporate performance-based requirements. According to FAA officials, they had planned for the NPRM to be published by February 1, 2019, consistent with the deadline in the directive, but the publication was delayed due to the lapse in DOT's appropriations that took place in early 2019. A timeline of key actions related to launch licensing regulation is shown in figure 5 below.

The preamble of the NPRM states that the proposed rule intends to satisfy the requirements of the Presidential Directive, including consolidating and revising multiple regulatory parts to apply a single set of licensing and safety regulations across several types of operations and vehicles, and replacing prescriptive regulations with performance-based rules. The preamble further states that these changes will give industry greater flexibility to develop means of compliance that maximize their business objectives while maintaining public safety. The proposed rule also seeks to address recommendations made by an Aviation Rulemaking Committee (ARC) that was created in March 2018 as a forum for industry to discuss procedures and requirements for launch and reentry licensing. For example:

The ARC recommended that FAA propose rules to eliminate potentially duplicative requirements for launches at federal ranges. Currently, launch providers at federal ranges are subject to FAA's requirements in addition to those of the range operator (NASA or the Air Force), which may be duplicative of each other. The preamble to the NPRM states that, while FAA has not included language to eliminate duplicative approvals, FAA would continue to work with the appropriate agencies to streamline launch and reentry requirements at ranges and federal facilities.

The ARC also recommended more flexibility in licensing such that a single license structure could accommodate a variety of vehicle types and launch or reentry sites. The preamble states that the proposed rule would, among other actions, eliminate the current limitation specifying that a launch license covers only one launch site.

Completing Other Related Rulemaking Activities and Finalizing Guidance Will Affect When Applicants Operate under the Revised Regulation

As part of the rulemaking process, FAA must comply with a number of requirements before the final rule can be issued. FAA is statutorily required to provide a period of time to solicit public comments on the proposed regulation. FAA must then reasonably respond to public comments submitted on the NPRM and determine whether any changes to the proposed rule may be required as a result of the comments. Some changes made in response to comments would allow AST to proceed with publication of the final regulation. However, major changes not contemplated in the NPRM could necessitate a supplemental NPRM, which could affect the timing of the final regulation's publication. FAA provided 60 days after publication in the Federal Register for the public comment period.
While officials told us that they plan to work toward publishing the final rule by the end of 2019, the schedule was affected by DOT's lapse in appropriations. They also noted that the quantity and content of the public comments and the time and resources required to respond to them will influence that date. Officials estimate that the public comments could number in the thousands. Further, there is a lack of industry consensus in some areas. For example, according to the cover letter accompanying the final ARC report, the report did not include specific recommendations that were agreed upon by all participants. Almost half of the industry stakeholders that participated in the ARC and provided comments on the ARC final report (8 of 19) did not fully concur with the report. Industry stakeholders disagreed on issues such as the requirements for testing flight safety systems, which would be considered as part of the licensing process. The lack of consensus among ARC participants suggests that the NPRM may also generate significantly different perspectives. Furthermore, FAA officials emphasized that the NPRM addresses a highly complex and technical issue, using a wholly revised performance-based regulatory framework, an approach that could affect implementation timelines. We have found in the past that the complexity of the issues addressed by rulemakings is a major factor influencing the time needed to issue a regulation.

FAA officials told us they intend to complete other related activities that support the rule, such as finalizing guidance documents to provide transparency and help ensure that licensing applicants understand the new requirements. Such guidance may, for example, provide examples of how to comply with the new performance-based requirements. FAA also intends to implement new administrative tools to help AST review licensing applications more quickly. Specifically:

Guidance: FAA released a number of draft guidance documents in the form of Advisory Circulars with the NPRM. These Advisory Circulars cover a range of topics, such as providing ways for applicants to comply with requirements for flight safety analysis and lightning hazard mitigation, and provide at least one way an applicant could demonstrate compliance with each performance-based requirement in the proposed rule. FAA officials told us that they plan to publish these Advisory Circulars in final form simultaneously with the final regulation. Through the ARC process, FAA sought input from industry on the standards that should be used to demonstrate compliance with the performance-based regulations. In the long term, however, FAA officials told us that FAA is encouraging the industry to develop voluntary consensus standards that FAA could then recognize as an acceptable way of demonstrating compliance.

Administrative Tools: FAA officials said they are in the early stages of looking at ways to reduce the administrative burden on FAA and licensing applicants during the licensing process. For example, FAA officials told us that in 2019 they will be examining ways to automate and streamline the licensing process. FAA officials told us that they would like to implement a system whereby applicants, for the first time, would submit applications electronically to an FAA-sponsored system rather than by hard copy or attachments to an email.
According to the preamble of the NPRM, FAA's proposal would allow an applicant to submit its application by email as a link to a secure server, and would remove the requirement that an application be in a format that cannot be altered. In addition to easing the burden of developing paper applications, FAA officials told us they envision that an electronic system would enable both FAA and industry to view the application during the application process and more easily communicate about its progress.

AST Has Taken Steps to Better Understand Current Workforce Needs, but Understanding of Future Needs Is Limited by a Lack of Information

In recent years, AST has improved some aspects of how it determines its workforce needs. Our work on strategic workforce planning underscores the importance of determining both current and future workforce needs and identifying potential gaps in employee skills. The improvements made to date provide AST with greater insight into the optimal number of people currently needed in certain positions. However, these improvements have not strengthened AST's ability to systematically assess the workforce needs of its management and support offices, nor does AST project its future workforce needs. Moreover, AST has yet to collect information on staff skills and competencies that would enable it to identify potential gaps in those skills, gaps that further limit AST's ability to effectively and efficiently align its available staff resources with current and future workloads.

AST Has Improved Measurement and Analysis of Workforce Needs, but Only for Part of Its Office and within Its 2-Year Budget Cycle

To assist FAA decision makers in understanding and meeting AST's staffing needs, AST developed and annually updates a 5-year workforce plan for its office. The current plan—covering the period from 2018 through 2022—reflects a 5-year planning horizon but discusses immediate workforce and resource needs only in general terms. One of the key principles we identified in our prior work on effective strategic workforce planning is the importance of determining the workforce needs that are critical to achieving an organization's current and future programmatic goals. Such a determination of workforce needs should include both the optimal number of staff needed in specific positions and the required skillsets and levels of expertise for staff.

Since 2016, AST has taken several steps to better understand how it uses its staff resources in carrying out its mission to license and oversee space launch operations. The majority of AST's operations budget—about 75 percent in fiscal year 2018—was used to fund salaries and related expenses. AST now comprehensively monitors and measures staff time spent on specific activities and tracks the volume of its work—information it can use to better understand workforce needs. AST officials told us that these steps facilitate more informed decision-making about the number of staff needed in specific positions for the next budget cycle. However, these steps do not provide the information AST needs to determine the optimal size and composition of its entire workforce or enable it to project workforce needs sufficiently into the future.

Revised Timecard System

AST launched a revised timecard system in June 2016 to more comprehensively account for staff time spent on specific activities.
According to AST officials and our review of relevant documentation, including a list of revised time codes, the revised system allows staff to record hours worked on individual tasks, such as launch observations or consultations with launch companies prior to application submission (i.e., pre-application consultation), as well as training and leave. Time codes were revised for all AST staff—that is, staff in its five operational divisions, management office, and two support offices (see fig. 6)—to account for all major tasks they perform. AST officials told us that the new timecard data, in combination with workload metrics, can help inform its current workforce needs.

For its five operational divisions, AST officials have developed and continue to refine a set of workload metrics, which, along with other data, enable AST to identify the resources that are used to carry out key AST activities, such as licensing and overseeing launches. These metrics track the number of work activities (e.g., regulatory waivers issued or safety inspections conducted) that are ongoing or were completed over a certain time period. For example, in fiscal year 2018, AST was engaged in pre-application consultations with about 23 commercial launch providers and was evaluating more than 16 license applications on average per month. Officials analyze these metrics in combination with timecard data to determine the number of staff hours and average number of days spent completing specific activities. For example, between March and August 2017, FAA officials reported that for each ongoing project, staff spent an average of about 60 hours per month on pre-application consultations. Officials plan to use the results of this analysis in the fiscal years 2021–2022 budget cycle to help estimate the number of staff currently needed in specific positions within its five operational divisions.

However, with regard to its management and two support offices—which represent about one-third of AST's total staff—AST has not yet developed workload metrics. Staff in AST's management and support offices are responsible for overseeing research and development; advising and assisting other offices on technical matters; coordinating and liaising with international entities and other federal agencies; as well as performing other support operations, such as budget and financial planning. Officials told us that although they would like to develop these metrics, they put the effort on hold because of competing priorities within AST, such as updating its licensing regulations. Officials said that they had first focused on better understanding the workforce needs of the operational divisions, which have responsibility for the majority of AST's primary mission areas, such as licensing and overseeing launches. In discussing this approach, AST officials stated that recent budget constraints have limited their ability to address all of their current identified workforce needs, which, according to their most recent workforce plan, are in nearly all areas of their office. As a result, officials said that they use their limited number of authorized positions to fill their most immediate workforce needs, typically in the operational divisions.
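To illustrate the kind of analysis the timecard data and workload metrics enable for the operational divisions, the following is a minimal Python sketch. The time codes, staffing records, and project counts are invented for illustration and are not AST's actual data; the sketch simply shows how recorded hours could be combined with counts of ongoing projects to produce figures such as the roughly 60 staff-hours per project per month reported for pre-application consultations.

```python
from collections import defaultdict

# Hypothetical timecard records: (staff_id, time_code, month, hours).
# Staff, codes, and hours are invented for illustration only.
timecards = [
    ("eng01", "PRE_APP",      "2017-03", 62),
    ("eng01", "LICENSE_EVAL", "2017-03", 85),
    ("eng02", "PRE_APP",      "2017-03", 58),
    ("eng02", "INSPECTION",   "2017-04", 40),
    ("eng03", "PRE_APP",      "2017-04", 66),
    ("eng03", "LICENSE_EVAL", "2017-04", 90),
]

# Workload metric: number of ongoing projects per activity (also invented).
ongoing_projects = {"PRE_APP": 2, "LICENSE_EVAL": 2, "INSPECTION": 1}

total_hours = defaultdict(float)
months_seen = defaultdict(set)
for _staff, code, month, hours in timecards:
    total_hours[code] += hours
    months_seen[code].add(month)

# Average staff-hours per ongoing project per month, by activity.
for code in sorted(total_hours):
    avg = total_hours[code] / (len(months_seen[code]) * ongoing_projects[code])
    print(f"{code:13s} about {avg:5.1f} staff-hours per project per month")
```

Dividing projected activity volumes by averages like these, and then by the productive hours available per full-time employee, would yield one plausible staffing estimate per position. This is a sketch of the general technique, not a description of AST's actual models.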
However, without workload metrics that cover its entire workforce, regardless of office or division, it is difficult for AST to determine the appropriate number and composition of staff to most effectively carry out its statutory priorities and to help ensure that it uses its limited resources in the most efficient way. In addition, AST officials told us that they recognize that past hiring decisions and the balance of workload among staff may not have been fully aligned with AST's statutory priorities and that the composition and ratio of staff may no longer be appropriate given the evolution of the industry and the revised regulatory structure under way. As a result, officials stated that in the coming months they intend to take a fresh look at the organization of the Office of Commercial Space Transportation as a whole to better balance the needs of the industry with the organizational requirements. In addition to developing an Office of Spaceports, as required by the FAA Reauthorization Act of 2018, officials told us that they will consider reorganizing the offices and divisions, as well as the workload and staff currently within them.

Workload Projections

AST also has taken steps to improve its ability to estimate its workload for a 2-year budget cycle, which, according to AST officials, will help them determine and justify near-term workforce needs. Specifically, from the new workload metrics discussed above, AST officials told us they had identified five key activities that best reflect historical workload trends and that officials then plan to combine with their assumptions about how the industry will evolve over the next 2 years. Officials told us that they plan to use this approach for the first time in the fiscal years 2021–2022 budget cycle. In past budget cycles, AST relied primarily on the projected number of launches to estimate its workload; this number, officials noted, is the most important factor but provided an incomplete reflection of the five operational divisions' workload. For example, officials told us that the workload of its operational divisions encompasses a range of activities leading up to a launch that would not be captured in its workload estimates if AST only looked at the number of launches. Now, under their planned approach, AST officials said that they will better account for the full range of regulatory activities and the timeline of its licensing process.

While planned improvements to AST's workload estimates better account for the full range of AST's regulatory activities, limiting these estimates to the 2-year budget cycle reduces AST's ability to anticipate and respond to emerging workforce needs. AST recognizes the importance of longer-term workforce planning by developing and annually updating a 5-year workforce plan. Also, as noted above, key principles for effective strategic workforce planning emphasize the importance of forward-thinking planning to help organizations align their workforce to meet future programmatic goals. According to AST officials, they estimate the workload for 2 years in part because it is intended to help them identify and justify workforce needs during the 2-year budget process, as well as prioritize addressing immediate workforce needs. Officials also said that substantial uncertainty surrounds longer-term industry forecasts, and consequently, any assessment of longer-term workforce needs.
For example, they pointed to a number of factors that lead to the unpredictability of how the industry will evolve, including the variable pace at which new launch companies progress and the future of the commercial suborbital launch sector, particularly the nascent space launch tourism industry. They also noted that a launch vehicle accident or other risks could affect the industry's rate of growth. In our prior work, we have discussed some approaches used by other agencies to help assess future workforce needs when faced with uncertainties. One approach involves scenario planning, in which a federal agency operating in a changing environment used a range of scenarios, each of which represented different future environments that the agency may face, to help predict how the scope and volume of its activities might change in each scenario. For AST, such an approach could entail developing a range of workload projections based on different industry and regulatory environments that it thinks it may face, along with associated workforce management strategies to address those environments.

AST officials said that they were considering projecting their workload estimates further into the future and intend to work with FAA's Office of Aviation Policy and Plans—the office that helps develop FAA's 20-year aerospace industry forecasts—to leverage that office's forecasting expertise. However, AST has not established a timeline with milestones or formally committed to conducting longer-term workload projections. Longer-term workload projections may be particularly beneficial to AST to help make well-timed decisions about hiring and training staff and to help ensure AST has qualified staff available when they are needed. For example, according to officials, it can take a few years for systems safety engineers to be trained and gain sufficient experience to lead projects. Further, AST officials told us that hiring technically qualified personnel, including for positions that require considerable training and experience before an employee is fully functioning, is challenging. Without an understanding of its projected workload beyond a budget cycle, AST will be limited in its ability to effectively and strategically plan for its longer-term workforce needs and take action when the opportunity arises. As such, AST remains at risk of not having the right number of staff in the right positions to keep pace with and respond to changes in the commercial space transportation industry.

AST Lacks Information to Identify Gaps in Staff Skills and Competencies

Our prior work on strategic workforce planning underscores the importance for organizations to determine the skills and competencies that are critical to successfully achieving their current and future missions and goals. Once the necessary skills and competencies have been identified, key principles for effective strategic workforce planning call for an organization to identify—and subsequently develop strategies to address—gaps between the skills and competencies needed and those that its workforce has. Those gaps should include both current skills gaps (i.e., skills that its workforce currently needs but does not possess) and emerging skills gaps (i.e., skills that its workforce may need in the future but does not possess). Further, according to federal Standards for Internal Control, an organization's management should ensure that the workforce skills necessary to achieve programmatic goals are continually assessed.
This step is especially important as changes in national security, technology, budget constraints, long-term fiscal challenges, and other factors alter the environment within which federal agencies operate. AST, however, does not currently collect the information needed for it to conduct a skills gap analysis. Rather, AST has a basic understanding of the skills and competencies of its workforce. For example, its current workforce plan includes the following information on AST's workforce:

Level of education—the percentage and number of employees having attained bachelor's, master's, and doctorate degrees.

Occupation—the percentage and number of employees in mission-critical occupations (e.g., aerospace engineers).

Age—the percentage and number of employees by age range.

Tenure—the average number of years employees have been in their current position and employed by FAA.

Retirement eligibility—the number of employees who will be eligible to retire each year during the 5-year period of the staffing plan.

AST officials acknowledged that the workforce information it currently collects is insufficient to allow them to systematically identify gaps in specific staff skills or competencies—such as expertise in flight safety analysis or launch vehicle propulsion—needed for evaluating certain launch license applications. Officials told us that they do prioritize filling positions, through hiring or contracting, that address the organization's most immediate needs. However, this strategy focuses on positions, as opposed to identifying specific skills or competencies within those positions.

AST officials told us that they are planning to develop and annually administer to staff and managers a skills assessment survey that would collect information about the specific skills and competencies that individual staff currently possess. Officials told us that the results of the survey would allow them to assess the current skills of AST's workforce and, in combination with other information, such as expected attrition and retirement rates, help identify current and emerging skills gaps. In July 2018, officials told us that they planned to complete the survey and administer it in time for inclusion in their workforce plan for fiscal years 2019–2023, estimated to be issued in April or May 2019. However, officials subsequently stated that their survey plans have been delayed for multiple reasons, including DOT's lapse in appropriations. Accordingly, as of May 2019, AST had neither developed a draft of the skills assessment survey nor established a formal timeline for finalizing it or a plan for periodically administering the survey. Furthermore, officials told us that they are currently negotiating with the union's bargaining unit for approval to administer a non-anonymous survey to non-management staff. They said that if they cannot obtain the bargaining unit's approval, they will need to develop an alternative plan because they do not believe that collecting anonymous data on staff skills would allow them to identify skills gaps for these staff. Officials told us that they also intend to include in the survey skills and competencies that may be needed in the future.
They stated that they did not know for certain if or how they would identify what those new skills might be, but that they are considering soliciting feedback from industry stakeholders, such as through FAA's Commercial Space Transportation Advisory Committee, to help identify any future competencies that may be needed as a result of the evolution in the industry. Without systematic information on the specific skills and competencies of its entire workforce, AST lacks reasonable assurance that its current workforce possesses the requisite skills and competencies and may not be able to efficiently identify opportunities to move staff within AST to help address identified skills gaps. Ultimately, AST may not be prepared to make strategic decisions on how to address emerging skills gaps and align its staff to achieve future programmatic goals, such as identifying and acquiring potential new skills and competencies needed under a revised regulatory structure.

FAA Is Exploring Technological and Procedural Solutions to More Efficiently Accommodate Commercial Space Operations

FAA's Current Approach to Accommodating Launch and Reentry Operations Results in Inefficiencies for Airspace Users and FAA

FAA officials and representatives from the commercial space and aviation industries we met with agree that FAA's current approach to accommodating commercial space launch and reentry operations into the National Airspace System (NAS) is inefficient. FAA has the responsibility for ensuring the safe and efficient use of the NAS, a limited national resource, for and by all users, including commercial and business airlines and commercial launch providers, among others. To this end, according to FAA officials and documents describing operational procedures and risk evaluation, FAA takes measures during a commercial space operation aimed at preventing fatalities, injuries, and property damage, and at ensuring that nothing interferes with the launch vehicle's operations.

FAA's current approach, as described in documents that explain how FAA mitigates risk to people and property during a space launch, is to close the airspace around a commercial launch operation—in some cases hundreds of square miles for several hours—to other airspace users, such as commercial airlines. Prior to launch, FAA establishes the size and duration of the airspace closure, also known as an aircraft hazard area, and, days ahead, notifies potentially affected airspace users about the upcoming closure. FAA calculates the size and boundaries of the aircraft hazard area generally based on the risk to life and property posed by a launch vehicle's expected trajectory, as well as potential trajectories in the case of a vehicle's failure and the subsequent paths of falling debris. The duration of the closure is generally dependent on the period of time in which the launch or reentry is expected to occur—known as a launch window—which varies by the type of launch or reentry vehicle, among other things. The aircraft hazard area extends from sea level up to unlimited height, and generally does not change in size or shape during the entirety of the launch window (see fig. 7). According to FAA officials, the designated aircraft hazard areas are larger and remain in effect longer than may actually be needed to ensure public safety. For example, according to FAA officials and launch documentation, to protect public safety, the duration of an airspace closure is always longer than the launch window.
In fact, in some cases, the airspace closure may be scheduled for more than 3 hours, which is substantially longer than the time typically required for space launch and reentry operations from Cape Canaveral (about 30 minutes). FAA officials explained that they are not able to monitor or respond to dynamic circumstances associated with space launch vehicles in the NAS in real time. As a result, FAA closes the airspace for when and where it is potentially—rather than actually—hazardous.

FAA officials told us that the agency's approach to date for accommodating space launch operations into the NAS has helped ensure public safety during launches. For instance, during fiscal years 1989 through 2018, FAA reported that it licensed 357 launches or reentries, and in this time there were no fatalities, serious injuries, or significant property damage to the uninvolved public. However, according to FAA officials and research, FAA's approach creates inefficiencies in how the airspace around launch operations is used—such as causing flight delays for commercial airlines. FAA officials and commercial space industry representatives said it also makes scheduling these operations more challenging for launch providers, and affects FAA's operational efficiency. The effects on each of these groups are described below.

Commercial airlines. FAA has estimated that, in fiscal year 2017, about 1,200 commercial airline flights were directly affected—that is, rerouted or delayed—around 22 space launch operations, resulting in an estimated 39,000 additional miles flown. The majority of these miles were flown in proximity to Cape Canaveral in Florida, which hosted the majority of domestic launches that year. FAA further estimated that, of the 15 space launches from January to October 2018 around Florida, where airspace tends to be busy due to the high volume of commercial airline traffic along the East Coast, an average of 60 aircraft per launch were directly affected. For all commercial launch sites, FAA estimates that the number of directly affected aircraft ranged up to 153 for an individual launch, with an average of fewer than 10 aircraft per launch outside of the Florida area. According to FAA officials, these estimates are based on historical data on the number of aircraft that typically fly through that area at the time of the airspace closure. Because launches can be delayed by hours or days for reasons such as unforeseen weather conditions or technological issues, airlines and other affected airspace users may face challenges when attempting to plan around a launch to avoid flight reroutings and delays. Representatives of a major airline trade association told us that the spread of launch activity beyond Cape Canaveral, as well as the development of new launch vehicles, has heightened their concerns about inefficiencies in how airspace around launch operations is used.

Launch providers. The size and duration of aircraft hazard areas can make it difficult for FAA to find time slots to accommodate commercial space launches because of its responsibility to ensure the efficient use of the national airspace, a limited resource. All the launch providers we spoke with that had conducted launches at U.S. commercial launch sites said they have been able to find suitable launch windows that met with FAA approval. However, one launch provider told us of an occasion when FAA had denied the originally requested launch date and time because it fell within a time of unusually congested airspace.
In addition, more than half of the launch providers told us that they anticipate challenges obtaining approval for a requested launch date or time in the future.

FAA. In addition to effects on NAS users, FAA officials told us that FAA itself also experiences operational inefficiencies in managing air traffic during launches. This inefficiency is, in part, because FAA's current policies and procedures were developed for aircraft operations and either have not yet been fully adapted for commercial space operations, or a relevant policy or process is missing altogether. For example, FAA's current procedures for launch providers and FAA to follow when they request, schedule, and conduct launches require different FAA facilities to negotiate unique agreements for each separately licensed operation or activity. This process can be time-consuming. For example, one launch provider told us that it took 1½ years to finalize minor changes to a letter of agreement. As we discuss later, FAA is taking steps to standardize these letters.

FAA Aims to Increase Efficiency of Launch Integration through New Technologies, Procedures, and Industry Coordination

According to FAA documentation and officials we spoke to, FAA aims in the long term to increase utilization of the NAS by integrating launch vehicle operations into the NAS with other users, rather than its current approach of segregating launch and reentry operations through airspace closures. Specifically, in 2011, FAA began identifying actions it could take and developing plans to address challenges associated with closing portions of the airspace during launch operations. It did so in light of the increasing frequency of commercial space launch and reentry operations and the spread of operations to new locations. According to FAA officials, the actions and plans continue to evolve as FAA learns more and reacts to anticipated changes in the commercial space transportation industry. Further, officials told us that FAA's vision for full integration of commercial space launch operations cannot be defined by a single solution or an end goal because the demands of these operations on the NAS are constantly changing. Consequently, FAA officials said that full integration of commercial space operations into the NAS will reflect a collection of visions or approaches that improve predictability and efficiency while maintaining safety. For example, according to FAA documents and officials we spoke to, FAA's approach for experimental launches will always be to close the airspace around the launch to other users. In contrast, FAA may develop standards for some launch vehicles, such as hybrid launch vehicles with repeated successful operations, which specify a safe distance and duration of separation in the airspace.

FAA has two key internal documents to help guide the development and implementation of its actions as it seeks to better integrate commercial space launch and reentry operations into the NAS and reduce FAA's operational inefficiencies.

A concept of operations: FAA officials expect to finalize a concept of operations in 2019, which will provide a long-term, high-level vision for FAA's efforts to efficiently integrate commercial space operations. According to FAA officials, it will describe, among other things, FAA's existing approach to and associated shortfalls in accommodating commercial space operations, as well as proposed tools, policies, and procedures to address those shortfalls.
According to FAA officials, it also will inform FAA's current and future efforts to identify needs for new or modified technologies, tools, procedures, and policies.

Roadmap for the Integration of Space Operations in the National Airspace System (Roadmap): This document serves as a planning and tracking tool for FAA's operational arm—the Air Traffic Organization—to use as it seeks to more efficiently manage the airspace during commercial space launch and reentry operations while maintaining safety. It identifies, prioritizes, and tracks the specific changes needed to begin addressing the related shortfalls that FAA officials told us will be discussed in the concept of operations. According to the Roadmap, some of the activities are exploratory, and FAA expects that new activities will be identified and added to the development schedule as FAA continues to work with stakeholders to determine how best to manage the airspace, and conceptualizes and develops key technologies. The first Roadmap was released in November 2016, and, according to FAA officials, FAA plans to update it annually. FAA officials told us they expect to release the third and most recent version in 2019. The activities it identifies are divided into short-range (to have been completed in calendar year 2018), mid-range (through 2022), and long-range (2023 and beyond) time frames, during which FAA plans to develop and incorporate new technologies, policies, processes, and regulations.

In completing the actions needed to implement the approaches outlined in the Roadmap, FAA officials told us that they are actively working with FAA's Performance Analysis Directorate to develop a set of metrics to measure the progress and effectiveness of its actions. Officials also highlighted that because the demands of commercial space operations on the NAS are constantly changing, as noted above, there is no defined end goal. To this end, the purpose of any metrics officials develop will be to help determine if their actions are helping increase efficiency while maintaining safety, not to measure their progress toward a goal of full airspace integration. FAA officials told us they plan to have a set of metrics completed by early 2019. Some of these metrics will likely use currently available data, such as the number of aircraft rerouted and how many additional miles rerouted aircraft fly, while others are still being identified. Further, FAA officials told us that FAA coordinates actions related to commercial space integration through an interagency working group established in 2015. The group meets monthly, and members include officials from across FAA lines of business as well as from other federal agencies, including the Department of Defense.

The Roadmap shows that FAA's actions to better integrate commercial space launch and reentry operations into the NAS include, but are not limited to: developing new technologies; updating and assessing needed changes to policies and procedures; and coordinating with aviation- and space-industry stakeholders.

Technology

FAA's technology efforts are related primarily to collecting real-time data on a launch vehicle's position and path, automatically generating the required aircraft hazard area, and integrating those data into the existing structure of the air traffic control systems.
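To make the concept concrete, the following is a minimal Python sketch of the kind of computation such a capability implies. The telemetry fields, the drag-free flat-Earth impact projection, and the debris-dispersion factor are simplified assumptions invented for illustration; they are not FAA's or the Space Data Integrator's actual data formats or hazard models.

```python
from dataclasses import dataclass
import math

# Hypothetical telemetry record; the fields are illustrative, not FAA's format.
@dataclass
class VehicleState:
    t: float      # seconds since liftoff
    lat: float    # latitude, degrees
    lon: float    # longitude, degrees
    alt_m: float  # altitude, meters
    vx: float     # eastward velocity, m/s
    vy: float     # northward velocity, m/s

G = 9.81             # gravitational acceleration, m/s^2
M_PER_DEG = 111_000  # rough meters per degree of latitude

def instantaneous_impact_point(s: VehicleState) -> tuple[float, float]:
    """Project where the vehicle would fall if thrust ended now, using a
    drag-free, flat-Earth approximation (far simpler than operational models)."""
    t_fall = math.sqrt(2 * s.alt_m / G)  # free-fall time from current altitude
    return (s.lat + s.vy * t_fall / M_PER_DEG,
            s.lon + s.vx * t_fall / M_PER_DEG)

def hazard_radius_km(s: VehicleState, dispersion: float = 0.3) -> float:
    """Radius around the impact point; 'dispersion' is an invented debris factor."""
    t_fall = math.sqrt(2 * s.alt_m / G)
    ground_speed = math.hypot(s.vx, s.vy)
    return max(5.0, dispersion * ground_speed * t_fall / 1000)  # 5 km floor

# Simulated telemetry updates during ascent: each update would let controllers
# hold closed only the airspace that is hazardous now, not a large static area.
for s in (VehicleState(30, 28.5, -80.6, 8_000, 150, 40),
          VehicleState(90, 28.7, -80.1, 45_000, 900, 250),
          VehicleState(150, 29.1, -78.9, 110_000, 2_600, 700)):
    lat, lon = instantaneous_impact_point(s)
    print(f"T+{s.t:4.0f}s  impact point ({lat:.2f}, {lon:.2f}), "
          f"hazard radius {hazard_radius_km(s):.0f} km")
```

Because each telemetry update narrows the region in which the vehicle could actually pose a hazard at that moment, airspace outside the currently computed area could, in principle, be released back to other users rather than held closed for the entire launch window.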
As a result, FAA officials said that FAA may ultimately be able to dynamically change the size and duration of the aircraft hazard area for some types of launches, thereby reducing the amount and duration of airspace closed to other users.

In the short term, FAA is assessing how existing air traffic control technologies and procedures could be used to help reduce the effects of launches on other NAS users. According to an FAA official, for example, four initiatives currently used to manage air traffic during other airspace constraints could potentially be used during space launch operations. One initiative would enable air traffic controllers to strategically control the number of flights approaching the aircraft hazard area so that if these flights were in the hazard area at the time of a launch vehicle failure, controllers could still clear the area quickly enough to protect public safety. This FAA official told us that if they decide to pursue these initiatives, they hope to complete some of the necessary steps to do so by summer 2019.

For potential use in the longer term, FAA is piloting prototypes of two key technologies by running them alongside existing air traffic control systems during selected launches, thereby testing their capabilities without their being fully operational.

The Space Data Integrator (SDI) is designed to receive real-time data on launch vehicle position and movement and display real-time aircraft hazard areas to enable improved situational awareness. FAA officials told us that, as FAA is assessing approaches to shift from static to more dynamic hazard area calculation capability, initial SDI capabilities will likely be deployed in advance of more integrated and improved real-time hazard area generation capabilities. In addition, FAA officials told us that they are exploring alternative acquisition strategies that could enable partial system implementation for the technology by 2022. Because FAA has not made a final investment decision, the date of system-wide implementation of SDI is unknown.

According to FAA officials, the Hazard Risk Assessment and Management (HRAM) tool, if pursued, is intended to help automatically communicate SDI data to air traffic control systems and, in the future, to present air traffic controllers with information that would allow them to decide how to best manage the airspace. Officials also said that HRAM involves modifying an existing air traffic management tool, currently has very limited capabilities, and is still only under consideration as a possible approach. Over the next year, these officials plan to work on some of the tool's components, assess what types of data are valuable to air traffic controllers, and determine whether to continue developing this technology or consider alternative technologies.

Policies, Procedures, and Regulations

According to the Roadmap, FAA has identified policies and standard operating procedures that need to be created or updated to enable it to better manage the operating environment during space launches. Actions taken to date include, for example: developing training materials to inform air traffic personnel about commercial space operations in the NAS; developing a high-level strategy for integrated space vehicle operations going forward; and standardizing the terms of reference for commercial space operations for use by FAA, NASA, and DOD.
In addition, according to the Roadmap, FAA plans to standardize some letters of agreement—the documents specifying procedures that a launch provider and FAA use to request, schedule, and conduct launches. Officials said they hope to issue documentation of these changes by September 2019. FAA officials told us that these changes will result in letter of agreement templates for use by FAA. FAA officials said FAA also plans to continue reviewing its regulations, policies, and procedures to identify other areas that need updating or entirely new language.

Industry Coordination

FAA is taking steps to foster coordination between the commercial space and aviation industries to help develop and increase buy-in for new and revised approaches to improve the efficiency of the national airspace for all users. Most notably, in November 2017, FAA chartered an aviation rulemaking committee to examine the issue of equitable airspace access among various users. Committee members include a mix of commercial space transportation and aviation industry representatives. Topics being addressed include identifying potential criteria that FAA may use when considering competing user priorities for airspace, as well as potential tools that could help mitigate the effects on other airspace users during launch operations. FAA officials told us that the committee anticipates issuing a report and recommendations to FAA in April 2019, and some members of the committee highlighted that the meetings benefited their understanding of other users' unique needs, economic benefits, and experiences with regard to integrating space operations.

Also, an FAA official said the agency has sponsored four "Industry Days" events since 2014 for the commercial space industry. At each event, multiple FAA offices discussed their roles and responsibilities associated with space launches and answered questions from industry. For the first time, at its 2018 event, FAA invited aviation industry representatives to encourage continued dialogue between the commercial space and aviation industries. FAA officials also noted that they solicited ideas on priority actions from participants and are currently reviewing those ideas to help inform their next steps. Separately, FAA expanded the membership of its Commercial Space Transportation Advisory Committee to include representatives of the aviation industry in addition to the commercial space transportation industry to foster further dialogue between these groups.

Conclusions

The commercial space transportation industry provides a service that has become essential to many aspects of government, business, and society. The capability to launch payloads into space enables national security missions, mobile communications, and scientific research, among many other applications. AST's role as a regulator of commercial space launch providers is fundamental to the continued safe growth of the industry. With the anticipated growth and potential organizational restructuring of AST, as well as the evolution of the commercial space transportation industry, it is vital that AST ensure that the size, composition, and skills of its workforce are aligned with its projected workload, based on anticipated future mission and programmatic goals. AST's workforce plan states that AST needs additional staff in nearly all areas.
However, current budget and long-term fiscal pressures heighten the need for agencies to strategically manage their workforce, a process that includes making strategic decisions about how and where to prioritize limited resources. AST does not have a complete understanding of its current and projected workload, nor does it know the number of staff and types of staff skills and competencies necessary to meet those workload needs. Without this information, AST risks managing its workforce reactively to a rapidly changing environment instead of strategically planning for the future.

Recommendations for Executive Action

We are making the following four recommendations to FAA:

1. The Associate Administrator of AST should develop workload metrics that encompass the whole office and that would allow AST to determine an appropriate workforce size and composition. (Recommendation 1)

2. The Associate Administrator of AST should establish a timeline for finalizing workload projections that extend beyond the 2-year budget cycle and that include an approach for addressing uncertainty. (Recommendation 2)

3. The Associate Administrator of AST should ensure that its skills assessment survey collects information from staff on skills and competencies in those areas that are both currently needed and may be needed in the future. (Recommendation 3)

4. The Associate Administrator of AST should develop and document a plan for periodically assessing whether staff possess the necessary skills and competencies to achieve programmatic goals, such as annually administering a skills assessment survey. (Recommendation 4)

Agency Comments

We provided a draft of this product to DOT and NASA for review and comment. In its written comments reproduced in appendix III, DOT concurred with our recommendations. DOT and NASA also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, DOT, NASA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at 202-512-2834 or KrauseH@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our objectives for this report were to: (1) describe how the construction of infrastructure at selected U.S. commercial launch sites has been funded; (2) describe key factors that influence where orbital launches occur; (3) summarize actions the Federal Aviation Administration (FAA) has taken to streamline its commercial space launch regulations; (4) examine how well-positioned FAA's Office of Commercial Space Transportation (AST) is to determine its current and future workforce needs; and (5) identify actions FAA is taking to better integrate commercial space launch operations into the National Airspace System (NAS). The scope of this report focuses on topics related to FAA's oversight of the U.S. commercial space transportation industry. Therefore, the report does not discuss launch indemnification and the safety of human spaceflight, or examine international outer space treaty obligations. For all objectives, we reviewed relevant statutes, regulations, and directives governing FAA's oversight of the U.S. commercial space transportation industry.
In addition, we interviewed AST officials and conducted semi-structured interviews with all seven commercial space launch providers that had conducted an FAA-licensed launch operation as of January 2018.

To describe how infrastructure at selected commercial launch sites has been funded, we first identified, through review of FAA information on launch site operator licenses and launch licenses, all U.S. commercial launch sites—those that have an FAA site operator license to conduct commercial launch operations and those that may not have a site operator license but have hosted FAA-licensed launch operations. From these 15 identified U.S. commercial launch sites, we selected 9 for review because they had hosted FAA-licensed launch operations between January 1, 2015, and December 31, 2018. We reviewed relevant publicly available documents, such as launch sites' business plans, user guides, and other planning documents related to U.S. commercial launch sites. We interviewed the eight launch site operators of the nine selected launch sites. The perspectives of the selected launch site operators are not generalizable to those of all launch site operators; however, the information obtained provides a balanced and informed perspective on the topics discussed. In addition, we interviewed members of the Commercial Spaceflight Federation's working group on commercial launch sites. See table 2 for a full list of entities interviewed.

To describe key factors influencing where orbital launches occur, we reviewed data from FAA's 2018 Annual Compendium of Commercial Space Transportation as well as FAA data on recent launches within the United States. We interviewed representatives from seven launch customers, selected based on the following criteria:

The company is not a government entity.

The company's payload was commercial, as documented in FAA's commercial space launch compendiums.

The customer had multiple launches in 2016 and 2017, with at least one of those launches occurring in 2017.

The customer has had at least one launch in the United States that was licensed by FAA.

Among the companies that met these criteria, we chose our final selections to have a mix of the following characteristics: domestic and non-U.S. companies; those that had launched exclusively at one launch site versus multiple launch sites; and those that are involved in traditional space activities, such as satellite communications and remote sensing, and those that are pursuing non-traditional space activities, such as asteroid mining and satellite servicing. The perspectives of the selected launch customers are not generalizable to those of all launch customers; however, the information obtained provides a balanced and informed perspective on the topics discussed.

To summarize actions FAA is taking to streamline its commercial space launch regulations, we reviewed relevant statutes, regulations, and FAA guidance. We also reviewed FAA's documents related to the rulemaking, including its schedule of rulemaking activities and the Streamlined Launch and Reentry Licensing Requirements notice of proposed rulemaking issued in April 2019, and reviewed and analyzed the Streamlined Launch and Reentry Licensing Requirements Aviation Rulemaking Committee final report. We interviewed FAA officials and representatives of the Commercial Spaceflight Federation about FAA's ongoing and planned actions related to the rulemaking.
Finally, we reviewed the minutes from the June 2018 meeting and attended the October 2018 meeting of the Commercial Space Transportation Advisory Committee, in which FAA officials and industry representatives discussed FAA's actions on the rulemaking.

To examine how well-positioned AST is to make strategic decisions about its current and future workforce needs, we reviewed FAA documents, including its budget justification and workforce plans from the past 3 years. We also reviewed FAA's year-end reports on its workload metrics from fiscal years 2017 and 2018, and portions of FAA's preliminary labor analyses using its revised timecard data and workload metrics. We identified key principles on effective strategic workforce planning from our previous work to use as criteria to assess FAA's actions. We interviewed AST officials about their plans and actions to improve AST's workforce planning and assessed those actions against the identified key principles for effective strategic workforce planning. We focused our analysis on those principles that are related to determining current and future workforce needs.

To identify actions FAA is taking to better integrate commercial space launch operations into the National Airspace System, we reviewed and analyzed relevant FAA documents, including a document that discusses FAA's vision for integrating commercial space transportation operations into the NAS and the Roadmap for the Integration of Space Operations in the National Airspace System. In addition, we interviewed FAA officials within AST, the Air Traffic Organization, and the Office of NextGen regarding their ongoing and planned actions for improving the integration of commercial space transportation operations into the NAS. We also interviewed industry stakeholders to obtain perspectives on this topic. These stakeholders included representatives from Airlines for America, a trade association for the U.S. airline industry, and from launch providers. Finally, we attended an FAA-sponsored industry conference in October 2018 on FAA's airspace integration efforts.

We conducted this performance audit from July 2017 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Selected Characteristics and Capabilities of U.S. Commercial Launch Sites

Table 3 shows selected characteristics and capabilities of U.S. commercial launch sites included in our review of infrastructure funding. Table 4 includes other U.S. commercial launch sites that did not have FAA-licensed activity from 2015 to 2018 and were not included in our review of infrastructure funding.

Appendix III: Comments from the Department of Transportation

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Heather Halliwell (Assistant Director); Gretchen Snoey (Analyst-in-Charge); Namita Bhatia Sabharwal; Giny Cheong; Gerald L. Dillingham; Camilo Flores; Joshua Garties; Richard Hung; Delwen Jones; Elke Kolodinski; Maureen Luna Long; Malika Rice; Travis Schwartz; and Andrew Stavisky made key contributions to this report.
Why GAO Did This Study
The commercial space transportation industry provides launch services that enable national-security and commercial satellites, among other things, to be sent into orbit for government and private customers. Continued growth and evolution in the industry are expected as reliance on space-based applications increases. AST is charged with overseeing the industry, including licensing and monitoring launch vehicle operations. GAO was asked to review developments in this industry. This report (1) describes FAA's actions to integrate commercial space launches into the national airspace and (2) examines how well-positioned AST is to determine its current and future workforce needs, among other objectives. GAO reviewed relevant statutes, regulations, and FAA guidance; compared FAA's workforce management efforts to key principles for effective workforce planning; and interviewed FAA officials and U.S. commercial launch providers that had conducted an FAA-licensed launch as of January 2018, among other industry stakeholders.

What GAO Found
The Office of Commercial Space Transportation (AST) within the Federal Aviation Administration (FAA), in collaboration with other FAA offices, is taking a range of actions, such as testing new technologies, to improve how efficiently FAA integrates space vehicle launch operations into the national airspace. According to FAA officials, the amount of airspace that FAA closes to other airspace users is larger and remains closed longer than may be needed to ensure public safety. To help remedy this situation, FAA is piloting prototype technologies that would collect launch vehicles' location data in real time and transmit them to air traffic controllers. Officials said the earliest these technologies could be implemented would be 2022. In March 2019, FAA published an announcement seeking interest from industry on partnering with FAA to further develop the technologies. Meanwhile, FAA is assessing how existing air traffic control technologies could be used to help reduce the effects of launches on other airspace users.

Since 2016, AST has taken steps to improve how it determines the current workforce it needs to carry out its mission, including licensing commercial launch vehicle operations. These steps include more comprehensively monitoring staff time spent on specific activities and measuring the volume of the staff's work. While AST officials told GAO that AST is planning to continue to improve its workforce-planning efforts, GAO found that some aspects of AST's efforts fall short of key principles of strategic workforce planning. Such principles underscore the importance of determining both current and future workforce needs and identifying potential gaps in employee skills. For example: AST does not project its workload beyond a 2-year budget cycle, limiting its ability to effectively and strategically plan for its longer-term workforce needs. According to officials, it can take a few years for engineers with certain skills to be trained and have sufficient experience to lead projects. Further, AST officials told GAO that hiring technically qualified personnel, including for positions that require considerable training and experience before an employee is fully functioning, is challenging. AST officials said that they are considering projecting their workload estimates further into the future, but they have neither formally committed to doing so nor established a timeline with milestones.
AST officials acknowledged that the information AST currently collects on the skills of its staff is not sufficient to allow them to identify gaps between the skills and competencies needed now or in the future, such as expertise in flight safety analysis, and those that its workforce currently possesses. AST officials told GAO that they plan to develop a tool that could collect information annually from staff and managers about the specific skills and competencies that individual staff currently possess. As of May 2019, however, AST had neither developed a draft of the tool nor established a timeline for finalizing it. Without this information, AST lacks reasonable assurance that its current workforce possesses the requisite skills and competencies, and AST may not be best positioned to proactively determine how to align its staff to carry out its mission.

What GAO Recommends
GAO is making four recommendations on workforce planning to AST, including that AST establish a timeline for finalizing longer-term workload projections and ensure that it collects information from staff on skills and competencies in areas that are currently needed or may be needed in the future. AST concurred with the recommendations.
Background

U.S. Export Control System
The U.S. government implements an export control system to manage risks associated with exporting sensitive items and ensure that legitimate trade can still occur. The export control system is governed by a complex set of laws, regulations, and processes that multiple federal agencies administer to ensure compliance. State and Commerce each play a role in the U.S. export control system. Historically, State has controlled the export of military items, known as defense articles and services, while Commerce has controlled the export of less sensitive items with both military and commercial applications, known as dual-use items. In addition to firearms, artillery, and ammunition, State controls the export of items such as tanks, fighter aircraft, missiles, and military training, which it lists on the U.S. Munitions List (USML). Commerce controls the export of dual-use items such as computers, radars, and telecommunications equipment, which it lists on the Commerce Control List (CCL). State and Commerce both control the export of items within their jurisdictions by requiring a license or other authorization to export a controlled item; vetting the parties associated with export transactions; monitoring the end-use of exports and other compliance activities; and supporting law enforcement agencies' investigations of possible violations of export control laws and regulations. Generally, unless a license exemption applies, exporters submit a license application to State if their items are controlled on the USML or to Commerce if they are controlled on the CCL to receive export approval. As part of the application review process, State and Commerce consult with other agencies, including DOD. Additionally, offices within Commerce, DHS, and the Department of Justice (DOJ) investigate potential violations of export control laws and regulations, and conduct enforcement activities.

State and Commerce Export Control Lists
Items identified on the State and Commerce export control lists are subject to different laws and regulations. The Arms Export Control Act of 1976, as amended (AECA), provides the statutory authority to control the export of defense articles and services, which the President delegated to the Secretary of State. State's International Traffic in Arms Regulations (ITAR) implement this authority and identify the specific types of items subject to control in the USML. The USML comprises 21 categories of items, each with multiple sub-categories, encompassing defense items such as firearms, missiles, and aircraft. Firearms, artillery, and ammunition represent the first three categories of the USML (see table 1). Additional information on the 21 categories of the USML is presented in appendix II. Within State, the Directorate of Defense Trade Controls (DDTC) is responsible for implementing controls on the commercial export of these items. The Export Control Reform Act of 2018 (ECRA) provides the statutory authority for Commerce to control the export of less sensitive military items, dual-use items, and basic commercial items. Commerce's Export Administration Regulations (EAR), which contain the CCL, implement this authority. The CCL classifies less sensitive military items, dual-use items, and basic commercial items in 10 categories, such as Nuclear & Miscellaneous, Electronics, and Telecommunications, and in five product groups. Appendix II shows the 10 categories and five product groups of the CCL.
Commerce's Bureau of Industry and Security (BIS) is responsible for implementing these export controls (see table 2 for a summary of the legal and regulatory frameworks for State's and Commerce's export controls).

Proposed Transfer of Certain Firearms from State to Commerce Jurisdiction
In May 2018, State and Commerce published proposed rules in the Federal Register to request public comments on the proposed transfer of certain items in USML Categories I, II, and III (firearms, artillery, and ammunition) to the CCL. According to State and Commerce's proposed rules, the purpose of the transfer is to limit the items that State controls to those that provide the United States with a critical military or intelligence advantage or, in the case of weapons, are inherently for military end use. According to the proposed rules, items that do not meet these criteria would be removed from State's export control jurisdiction and moved to Commerce's jurisdiction. The proposed rules state that some, but not all, of the firearms, artillery, and ammunition currently controlled for export by State would transfer to Commerce control. The items proposed for transfer to the CCL include non-automatic and semi-automatic firearms up to .50 caliber, and non-automatic shotguns with a barrel length less than 18 inches; as well as parts, components, accessories, attachments, and ammunition for these firearms and shotguns, among other items. According to the proposed rules, if finalized, State would continue to control fully-automatic firearms, shotguns, and modern artillery; silencers, components, parts, and accessories specially designed for automatic firearms and shotguns; and specific types of ammunition, including ammunition for automatic firearms. The proposed rules would also make a variety of conforming changes to the USML and CCL to accommodate the transferred items.

The proposed transfer of firearms, artillery, and ammunition is part of an ongoing effort to reform the export control lists by reviewing the USML categories and transferring certain items considered less sensitive to the CCL. Since the export control reform initiative was first announced in 2010 with the objective of modernizing the export control system, State and Commerce have finalized various rulemakings that transferred certain items from USML Categories IV through XXI to Commerce's control. Firearms, artillery, and ammunition are the last three USML categories proposed to undergo regulatory changes under export control reform. In accordance with the AECA, the President must notify Congress of items proposed for removal from the USML and describe the nature of any controls to be imposed on the items, and may not remove the items until 30 days after providing such notice. State and Commerce published the proposed rules in the Federal Register on May 24, 2018, opening a 45-day public comment period that ended on July 9, 2018. After reviewing public comments, State and Commerce submitted final rules to the Office of Management and Budget for regulatory review on November 7, 2018. The required 30-day congressional notification period pursuant to the AECA began on February 4, 2019, according to a State official.

State Reviewed about 69,000 Export License Applications Valued at up to $45.4 Billion for Firearms, Artillery, and Ammunition in Fiscal Years 2013-2017
State reviewed 68,690 export license applications for firearms, artillery, and ammunition with a potential value of up to $45.4 billion during fiscal years 2013 to 2017.
The number of export license applications for firearms, artillery, and ammunition remained relatively constant from fiscal years 2013 to 2017, averaging 13,738 annually, even as the total number of licenses reviewed by State declined as the export control reform process transferred items from State to Commerce control (see fig. 1). Firearms, artillery, and ammunition increased from about 16 percent of all license applications reviewed by State in fiscal year 2013 to about 36 percent in fiscal year 2017. State processes export license applications for permanent exports, temporary exports and imports, and certain types of agreements. During fiscal years 2013 to 2017, about 91 percent of export license applications for firearms, artillery, and ammunition were for permanent exports, about 8 percent for temporary exports and imports, and about 2 percent for agreements. State can take various actions on the export license applications it receives, including approving the license, approving with conditions, returning without action, and denying the license. For fiscal years 2013-2017, State approved 87 percent of export license applications for firearms, artillery, and ammunition, returned 12 percent without action, and denied 1 percent. State can approve an application but place conditions on the export license, such as limiting the validity period or prohibiting certain types of intermediaries in the export transaction. State can also return without action export license applications that are missing information or that it is otherwise unable to review, and can deny, revoke, suspend, or amend a license for foreign policy or national security reasons.

About Two-Thirds of Category I-III Export License Applications Were for Firearms in Fiscal Years 2013-2017
About two-thirds of the export license applications for firearms, artillery, and ammunition that State reviewed during fiscal years 2013-2017 were for firearms and related items controlled under Category I of the USML (see fig. 2). Of the applications for these items, about 57 percent involved non-automatic or semi-automatic firearms—most of which are proposed to transfer to the CCL under Commerce control—and about 4 percent involved fully-automatic firearms—which would remain on the USML under State control. The remainder of export license applications for Category I items included other types of firearms such as combat shotguns, firearm attachments such as silencers and riflescopes, firearm parts and components, and technical data and defense services related to these items. The proposed rules state that some of these items would transfer to Commerce control while others would remain under State control.

As shown in figure 2, export license applications for Category II artillery were about 5 percent of all Category I-III license applications from fiscal years 2013 through 2017. According to State, under the proposed rules, modern artillery, their ammunition, and certain related parts and components would remain under State's control. Category III ammunition represented about 21 percent of the Category I-III export license applications. As stated in the State and Commerce proposed rules, USML Category III would be revised to specifically list the ammunition that it controls, which would include ammunition that has only or primarily military applications. Generally, ammunition used in the non-automatic and semi-automatic firearms that are proposed to transfer to Commerce control would also transfer.
About 8 percent of the export license applications involved items controlled in more than one category of USML Categories I, II, and III, which are shown as "Multiple" in figure 2.

Volume of Category I-III Export License Applications Varied by Geographic Region of End-User in Fiscal Years 2013-2017
In fiscal years 2013 to 2017, 32 percent of license applications for the export of firearms, artillery, and ammunition were intended for end-users in Europe and Eurasia, 29 percent in the Western Hemisphere, 24 percent in East Asia and the Pacific, 7 percent in the Near East, 3 percent in Africa, 3 percent in South and Central Asia, and 2 percent in multiple countries (see fig. 3). Export license applications for firearms, artillery, and ammunition during fiscal years 2013 to 2017 included applications for end-users spanning 189 countries and territories, yet the top 20 countries represented about 70 percent of the total number of applications (see fig. 4).

State and Commerce Export Controls Have Several Different Requirements, Including for Registration, Licensing, End-Use Monitoring, and Congressional Notification
State's and Commerce's export controls are guided by different laws, regulations, or policies that have several different requirements for registration, licensing, end-use monitoring, congressional notification, public reporting, and enforcement. The AECA requires manufacturers, exporters, and brokers of items on the USML to register with State, whereas the law imposes no registration requirement on manufacturers, exporters, and brokers of items on the CCL under Commerce's jurisdiction. Differences also exist in how State and Commerce screen export license applications and in their license requirements. For example, State and Commerce rely on different internal watch lists to screen applicants. In addition, according to Commerce, certain exports that currently require a State license would not require a Commerce license once transferred to Commerce's jurisdiction. State and Commerce also conduct end-use monitoring of selected controlled exports differently. For example, State relies primarily on embassy staff to conduct end-use checks, while Commerce relies primarily on several export control officers based overseas for this responsibility. In addition, congressional notification and public reporting requirements that under current law apply to firearms on the USML would no longer apply if the items are transferred to the CCL. Finally, there are some differences in enforcement of export control laws, such as different maximum fines for civil violations, depending on whether the item is controlled by the ITAR under State's jurisdiction or by the EAR under Commerce's jurisdiction.

The Law Requires Registration for Items on the USML but Not for Items on the CCL
The AECA requires manufacturers, exporters, and brokers of defense articles or services listed on the USML to register annually with State's DDTC, whereas there is no registration requirement in the law for manufacturers, exporters, and brokers of items on the CCL. State reported having 13,083 registrants across all 21 USML categories in fiscal year 2017. Registration, which requires a fee payment of at least $2,250 per year, is generally a precondition for obtaining a State export license, unless State grants an exception to a manufacturer or exporter, or a broker is eligible for an exemption.
According to a State document, registration provides important information on the identity and location of defense companies and conveys management responsibility for compliance with export control laws. Those registering must disclose any foreign ownership or affiliations and certify that they have not been indicted, otherwise charged with, or convicted of export control violations and other crimes. Manufacturers and exporters whose entire product line transfers to the CCL would no longer have to register, according to Commerce's proposed rule, while those that manufacture or export any items that remain on the USML would continue to register with DDTC.

Differences Exist in State and Commerce Applicant Screening Processes and License Requirements

Both Agencies Review Export License Applications Using an Interagency Process
State's and Commerce's processes for reviewing export license applications involve opportunities for other departments to review applications. While DDTC has primary responsibility for reviewing State's commercial export license applications, other bureaus within State, as well as DOD, also review certain applications, depending on the defense article, defense service, or the destination country. Commerce export license applications also involve an interagency review that includes State, DOD, and the Department of Energy, depending on the item to be exported. Both departments have a process for resolving disagreements among the reviewing bureaus or agencies on the disposition of the application. According to State officials, as part of the interagency review process for Commerce licenses, State has generally reviewed applications for items that have previously moved from the USML to the CCL and would continue to do so for items that would transfer to the CCL under the proposed rules. Moreover, DOD officials told us that DOD intends to review Commerce export license applications for these items during the interagency review process, if the proposed transfer is implemented. This would represent a change from DOD's current practice of generally not reviewing State's firearms license applications. DOD officials told us that if the proposed rules are finalized, they believe it would be prudent to begin reviewing Commerce license applications for items that would transfer under the proposed rules, at least initially.

State and Commerce Use Different Watch Lists to Screen Parties to the Export Transaction
State and Commerce each maintain their own internal watch lists to screen all parties identified on license applications. A watch list match would trigger further review of the license and, in some cases, can ultimately result in denial of the license. State and Commerce also use watch lists as a means of targeting transactions for possible end-use checks to verify the legitimacy of end-users of controlled exports. Both departments' watch lists include any derogatory information they collect internally from their past screening and end-use monitoring of licenses. For example, if information is identified raising questions about the legitimacy of a party to a license during the application review, that information would be used to update the watch list to inform future license application reviews. State's and Commerce's watch lists also include information from automated databases maintained by other U.S. agencies as well as information from law enforcement agencies and the intelligence community.
State's watch list contains over 200,000 entries, including sensitive details related to ongoing and previous law enforcement activities, according to State officials. According to Commerce officials, because State has been responsible for export controls of firearms, artillery, and ammunition, its internal watch list is also more likely than Commerce's to include derogatory information collected from past screening and end-use monitoring related to exports of these items. However, Commerce does not have access to State's watch list, according to State and Commerce officials. These officials noted that a Commerce licensing officer can ask State to screen an applicant against State's watch list on a case-by-case basis, although such checks are not done routinely. State and Commerce officials told us that, in anticipation of the transfer of firearms, artillery, and ammunition to Commerce's responsibility, the two departments are engaged in ongoing discussions to potentially share State's watch list with Commerce. According to State officials, these discussions involve determining which specific watch list information Commerce would need and State is able to share, depending on the source of the information. State and Commerce must also work out how to share and update information across their different information technology infrastructures, according to department officials. As of February 2019, the departments had not reached agreement or established a documented process to achieve the goal of sharing watch list information before implementation of the proposed transfer would occur, according to State and Commerce officials.

Information sharing is supported by a policy statement included in the ECRA. The statement says that, among other factors, the "export control system must ensure that it is transparent, predictable, and timely, has the flexibility to be adapted to address new threats in the future, and allows seamless access to and sharing of export control information among all relevant United States national security and foreign policy agencies." Without access to State's watch list, if the proposed rules are finalized, Commerce may lack critical information needed to effectively screen license applicants for firearms and related exports and to target possible cases for end-use monitoring to ensure that these exports are used as intended and by legitimate end-users.

Both Agencies Screen License Applications for Human Rights Concerns but Statutory Prohibition Applies Differently
Both State and Commerce screen license applications for human rights concerns, but the federal law that prohibits exports to the governments of certain foreign countries on human rights grounds applies differently to items under State's jurisdiction than to items under Commerce's. Under Section 502B of the Foreign Assistance Act of 1961, as amended, in general, "no security assistance may be provided to any country the government of which engages in a consistent pattern of gross violations of internationally recognized human rights." For this provision, "security assistance" is defined in part as any license in effect with respect to the export to or for the armed forces, police, intelligence, or other internal security forces of a foreign country of (1) any defense articles or defense services licensed for export under section 38 of the AECA, or (2) items listed under the 600 series of the CCL.
Licenses under Commerce's jurisdiction generally may not be issued for items defined as "crime control and detection instruments and equipment" to a country whose government engages in a consistent pattern of gross violations of internationally recognized human rights. For items under Commerce's jurisdiction, the Commerce proposed rule specifies that concern for human rights is a regulatory reason for denying a license for firearms and ammunition under the EAR. Within State, the Bureau of Democracy, Human Rights and Labor (DRL) is primarily responsible for screening export license applications to ensure that exports do not involve parties with human rights concerns. According to DRL officials, the bureau reviews applications for exports to specific countries where human rights concerns exist and prioritizes applications for firearms exports because they are often associated with human rights abuses committed by government police and military units. The officials noted, however, that State rarely denies an export license based solely on human rights concerns. If firearms are transferred to Commerce's responsibility, DRL will continue to have the primary role in screening license applications for human rights as part of the Commerce-led interagency review process, according to DRL officials. For Commerce license applications, however, State's position would be weighed together with the positions of Commerce, DOD, and Energy, according to Commerce officials. By contrast, for State export license applications, State alone makes the final determination, according to State officials.

State Has Different Requirements than Commerce for End-Users to Certify They Will Not Re-Export Certain Licensed Exports
State and Commerce have different end-user certification requirements. State's export control regulations require that, for certain items, applicants provide a written certification from end-users that they will not re-export, resell, or otherwise dispose of the commodity outside of the country listed on the license. This requirement generally applies to all items on the USML that are designated as Significant Military Equipment, including firearms and ammunition. In contrast, Commerce generally does not require end-user certification for items on the CCL but does require it when it has not verified the legitimacy of end-users, and it may also impose this requirement on a case-by-case basis. Written end-user certification provides additional assurance and accountability that end-users will comply with the terms and conditions of the license, according to State officials. It also is a deterrent and provides documentary evidence that can later be used in court, if necessary, according to an official from Immigration and Customs Enforcement (ICE).

The Law Requires Disclosure of Political Contributions, Fees, and Commissions for Items on the USML but Not for Items on the CCL
The AECA states that the Secretary of State shall require reporting on political contributions, gifts, commissions, and fees paid or offered, or agreed to be paid, by any person in connection with a commercial sale of an item listed on the USML to or for the armed forces of a foreign country or an international organization. State's export control regulations also require license applicants to disclose certain payments of political contributions, fees, and commissions for certain sales of defense articles and defense services. This requirement applies to exports of $500,000 or more.
Applicants must report political contributions in an aggregate amount of $5,000 or more and paid fees or commissions in an aggregate amount of $100,000 or more. Applicants must provide a letter to DDTC containing specific information about the sale, including the amounts of political contributions, fees, or commissions paid, and the name and nationality of each recipient. The disclosures are intended to ensure that purchases made by foreign governments of U.S. defense articles are based on merit without improper influence. Failure of applicants to comply with these disclosure requirements can result in additional oversight measures and civil penalties. According to an ICE official, this disclosure information may provide valuable information in criminal or civil matters. There is no requirement in the law for these disclosures for items listed on the CCL, and Commerce licenses do not require these disclosures. Therefore, this information would no longer be collected as part of the licensing process for firearms, artillery, and ammunition that are proposed for transfer to the CCL, according to Commerce officials.

According to Commerce, Certain Exports That Require a State License Would Not Require a Commerce License if Transferred to the CCL
Consistent with export control regulations, there are several circumstances in which exports proposed for transfer that currently require State licenses would require fewer Commerce licenses, or none at all, if the proposed rules are finalized, according to Commerce.

Multiple end-users on one license. State requires licenses to be limited to only one end-user, while Commerce allows multiple end-users on a single license. The applicant for a State export license must provide a purchase order documenting the proposed export to a single end-user, and an additional license would be required for each additional end-user. According to Commerce officials, a Commerce license can have multiple end-users associated with a particular consignee, reducing the total number of licenses for which the applicant must apply.

Technical data and defense services. State requires licenses for defense services and technical data, whereas Commerce's export controls do not generally apply to defense services and apply to technical data more narrowly than State's. State's regulations define defense services as "the furnishing of assistance (including training) to foreign persons … in the design, development, engineering, manufacture, production, assembly, testing, repair, maintenance, modification, operation, demilitarization, destruction, processing or use of defense articles." State's definition of defense services also includes military training of foreign units and forces, including publications, training exercises, and military advice. State's definition of technical data includes information such as blueprints, drawings, or instructions. Commerce's export control regulations generally do not apply to services. For example, training in the basic operation of a firearm controlled by Commerce would not be subject to export controls, according to State officials. In addition, Commerce's regulations do not control technology or software if it is "available to the public without restrictions." For example, Commerce officials told us that Commerce would not require an export license for posting instructions for 3D printing of firearms on the internet, if the instructions were publicly available without restrictions.

Minimum level of U.S.-origin content.
Items subject to State's controls require a license when they are incorporated into a foreign-made product, regardless of the percentage of controlled U.S. content in that product. Commerce does not require a license for items when they are incorporated into foreign-made items unless the controlled U.S.-origin content of a foreign-made product exceeds the applicable minimum percentage, which, according to Commerce officials, may be 10 or 25 percent, depending on the destination. This minimum level of U.S.-origin content is referred to as "de minimis treatment." Commerce's proposed rule states that de minimis treatment in Commerce's regulations would apply to all foreign-made items proposed for transfer to the CCL, unless they are being exported to a country that is subject to a United States arms embargo, in which case there would be no minimum threshold for U.S.-origin content.

License exceptions. State regulations contain some country-based license exceptions, including for exports to Canada and, more narrowly, to Australia and the United Kingdom, whereas Commerce has several different license exceptions under its regulations. For example, Commerce regulations have the "Strategic Trade Authorization" (STA) exception, which permits exports of certain items to countries determined to be low risk, including NATO partners and other close allies, of which 37 are eligible for a broader STA authorization and seven are eligible for a much narrower STA authorization. Commerce's proposed rule specifies that it would revise Commerce's regulations to make firearms and most parts, components, accessories, and attachments ineligible for the STA license exception. However, Commerce estimates that 450 to 650 license applications per year involving certain eligible items would still be authorized under STA exceptions if the proposed rules are finalized. Commerce also has a "Limited Value Shipment" exception, which is available for proposed exports of certain less sensitive firearms parts and components with a value of $500 or less per shipment based on the actual selling price or fair market value. Commerce's proposed rule specifies that this exception would only be available for certain parts, components, and accessories and attachments for firearms; complete firearms would be ineligible for this exception. State offers a similar exemption but only for licenses with a value of $100 or less, based on the wholesale price.
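The de minimis treatment described above reduces, in effect, to a threshold comparison on the share of controlled U.S.-origin content in a foreign-made item. The following is a minimal sketch of that logic, assuming simplified dollar inputs; the function name and structure are illustrative only, and the threshold that applies in practice depends on the destination and the item.

```python
def needs_commerce_license(us_controlled_value, total_item_value,
                           de_minimis_pct, arms_embargo=False):
    """Simplified de minimis screen for a foreign-made item that
    incorporates controlled U.S.-origin content (illustrative only)."""
    if arms_embargo:
        # Under the proposed rule, no minimum threshold of U.S.-origin
        # content applies to destinations under a U.S. arms embargo.
        return us_controlled_value > 0
    us_share = 100 * us_controlled_value / total_item_value
    # De minimis: a license is needed only if the controlled U.S.-origin
    # share exceeds the applicable threshold (10 or 25 percent,
    # depending on the destination, per Commerce officials).
    return us_share > de_minimis_pct

# A foreign-made item worth $1,000 containing $80 of controlled
# U.S.-origin content: an 8 percent share is below a 10 percent threshold.
print(needs_commerce_license(80, 1_000, de_minimis_pct=10))                      # False
print(needs_commerce_license(80, 1_000, de_minimis_pct=10, arms_embargo=True))  # True
```

By contrast, as noted above, State's controls apply to incorporated items regardless of the percentage of controlled U.S. content.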
State and Commerce Both Conduct End-Use Monitoring of Selected Controlled Exports but Differences Exist

State and Commerce Both Implement End-Use Monitoring Programs
State and Commerce both conduct end-use monitoring to verify the reliability of foreign end-users and the legitimacy of proposed transactions, and to provide reasonable assurance of compliance with the terms of the license and proper use of the licensed items. State recommends that end-use checks involve a site visit whenever possible, while Commerce policy requires that the end-use check include a physical verification on-site with a party to the transaction, according to Commerce officials. State and Commerce also apply their own means of risk-based targeting to select the licenses or exports that will undergo end-use monitoring; however, similarities exist in their selection criteria. For example, State and Commerce may target transactions that involve unfamiliar foreign parties, unusual shipping routes, or derogatory information from watch lists, according to the departments.

The number of end-use checks conducted by State averaged about 1.3 percent of its license applications, and those conducted by Commerce averaged about 3.3 percent of its applications, from fiscal years 2013-2017. State and Commerce end-use checks may result in either "favorable" or "unfavorable" findings. Commerce may also categorize an end-use check as "unverified." An "unfavorable" or "unverified" result occurs if the end-use check cannot verify information in the license or reveals facts that are inconsistent with the license. For either State or Commerce, an unfavorable end-use check can lead to denying applications, revoking licenses, removing parties from licenses, updating the watch list, or making referrals to U.S. law enforcement agencies for investigation, according to a State report and Commerce officials. State closed 166 of 766, or about 22 percent, of its end-use monitoring cases for firearms, artillery, and ammunition licenses as "unfavorable" in fiscal years 2013-2017. State's three most common reasons for an unfavorable finding for end-use checks for firearms, artillery, and ammunition were derogatory information on a foreign party, inability to confirm order or receipt of goods, and involvement of an unlicensed party.

State relies on U.S. embassy or consulate staff in the country or countries involved in the transaction to conduct its end-use checks. Commerce relies primarily on Export Control Officers (ECOs) positioned overseas to conduct end-use checks. ECOs conducted an average of about 60 percent of Commerce's end-use checks per year from fiscal years 2013 to 2017. According to Commerce officials, Commerce had a total of nine ECO positions in Beijing, Dubai, Frankfurt, Hong Kong, Istanbul, New Delhi, and Singapore, as of October 2018 (see fig. 5). Six of these nine positions were filled as of this date. The ECOs have areas of responsibility covering multiple countries within their geographic region. For the remaining 40 percent of end-use checks, Commerce relied primarily on its "Sentinel Program," in which BIS special agents based in domestic field offices, along with other responsibilities, travel to destination countries not covered by ECOs to conduct end-use checks. In addition, a small percentage of Commerce's end-use checks are conducted by Foreign Commercial Service officers or other personnel stationed at U.S. embassies, according to Commerce officials.

State conducted 766 end-use checks for firearms, artillery, and ammunition in fiscal years 2013-2017, with the largest share, over 40 percent, in the Western Hemisphere (see fig. 6). None of Commerce's overseas ECO positions are located in this region, nor do any cover it within their areas of responsibility. According to Commerce officials, the number and locations of end-use checks for firearms, artillery, and ammunition, if these items are transferred to the CCL, will depend on how exports of these items factor into the department's existing targeting criteria. To the extent that Commerce needs to conduct end-use checks for these items in the Western Hemisphere, Commerce officials told us that they plan to cover these checks via the Sentinel Program and, where necessary, through checks by Foreign Commercial Service officers. The officials noted that they plan to reassess their end-use monitoring efforts after items are transferred to the CCL if the proposed rules are finalized.
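As a quick arithmetic check on the end-use monitoring figures above (a minimal sketch using only the rounded values reported in this section):

```python
# State's unfavorable closure rate for firearms, artillery, and
# ammunition end-use checks, fiscal years 2013-2017.
print(f"{166 / 766:.1%}")  # 21.7%, reported above as about 22 percent
```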
End-use checks include pre-license checks in support of the license application review or post-shipment verifications after the license has been approved and items have shipped. As shown in figure 7, more than 50 percent of State's end-use checks specifically for firearms, artillery, and ammunition licenses from fiscal years 2013 to 2017 were pre-license checks. Conversely, about 90 percent of Commerce's end-use checks for all items subject to the EAR for this period were post-shipment verifications. Commerce noted that it conducts mostly post-shipment verifications because it controls a higher share than State of items that are exported without a license.

State Is Required by Law to Notify Congress of Certain Export License Applications for Firearms, Artillery, and Ammunition While Commerce Is Not
The AECA requires State to notify Congress before State can approve certain export licenses for firearms, artillery, and ammunition. These notification requirements depend on the proposed export value and type of export, among other factors. For example, the AECA requires State to notify Congress of proposed licenses for the export of USML Category I firearms in the amount of $1 million or more. Additionally, State must notify Congress of proposed licenses for commercial agreements that involve the overseas manufacture of certain USML items, including many firearms, artillery, and ammunition items, regardless of the proposed value. During fiscal years 2013 to 2017, State identified 240 export license applications involving firearms, artillery, and ammunition that required congressional notification, totaling approximately $2.5 billion. Additionally, State identified 41 license applications for commercial technical assistance or manufacturing license agreements involving the overseas manufacture of firearms, artillery, and ammunition that required congressional notification, totaling approximately $5.7 billion. According to State and Commerce officials, these congressional notification requirements would no longer apply to firearms, artillery, and ammunition that move from State's to Commerce's export control responsibility because the requirements apply specifically to USML-controlled items. The proposed rule transferring firearms to Commerce's responsibility does not revise Commerce's export control regulations to add a congressional notification requirement for firearms, according to Commerce officials.

State Is Required by Law to Publicly Report More Details on Controlled Exports than Commerce
The Foreign Assistance Act, as amended, requires State to report to Congress annually on military assistance and military exports to the governments of each foreign country and international organization, and specifies that the report include "a statement of the aggregate dollar value and quantity of semiautomatic assault weapons, or spare parts for such weapons." The Act also requires that State post all unclassified information from this report on the internet. To comply with this requirement, State posts an annual report that includes the aggregate dollar value and quantity of defense articles and services, by USML category, licensed to each foreign country and international organization, as well as data on the actual shipments occurring during the fiscal year.
The report also includes an appendix that breaks out exports specifically for USML sub-category I(a), which includes non-automatic and semi-automatic firearms, and sub-category I(h), which includes firearms components, parts, accessories, and attachments. This reporting requirement only applies to exports of items on the USML, which are licensed by State under the AECA, but does not apply to exports controlled by Commerce. This information on exports, by country, would no longer be available for firearms and other items from Categories I-III of the USML after they are transferred to the CCL if the proposed rules are finalized, according to Commerce officials.

Some Differences Exist between Export Control Enforcement of Items Controlled by State and Commerce
The statutory penalties available for criminal violations of export control laws are the same regardless of whether the items are on the USML and controlled by State or on the CCL and controlled by Commerce. Criminal violations may result in fines of up to $1 million, prison terms of up to 20 years, or both. Under the AECA, civil violations of State's export controls may result in a fine of up to $500,000 but, according to State officials, the fine can be much higher after inflation adjustments under the Federal Civil Penalties Inflation Adjustment Act of 1990, as amended. State told us that actual penalties for civil violations in 2018 ranged from $824,959 to $1,134,602. By contrast, the ECRA set the penalty for civil violations of Commerce's export controls at up to $300,000 or twice the value of the transaction that is the basis of the violation, whichever is of greater value. According to Commerce officials, this can substantially increase the monetary penalty for civil violations.

Criminal violations of either State's or Commerce's export control laws may result in prohibiting the violator from involvement in future exports of controlled items. The AECA also precludes the issuance of State licenses to persons convicted of violating certain federal laws, such as the Foreign Corrupt Practices Act. Similarly, Commerce can deny the export privileges, including the ability to obtain a license, of companies and individuals for a period of 10 years from the date of conviction for violating certain federal laws. This prohibition can be expanded to include other related parties, such as those connected with the denied person by virtue of affiliation, ownership, or control.

Agencies with responsibility for export control enforcement can vary depending on whether items are controlled by State or Commerce. According to DHS officials, ICE has jurisdiction to investigate potential export control violations, and U.S. Customs and Border Protection has primary enforcement responsibility for export control violations at the border, seaports, and airports. The Federal Bureau of Investigation (FBI) can also investigate these cases involving items controlled by either State or Commerce. According to Commerce, the Office of Export Enforcement in BIS has over 100 special agents in U.S.-based field offices authorized to investigate potential violations of Commerce's export control laws. These investigative resources would be available, in addition to DHS and FBI, to address illegal firearms trafficking if the proposed transfer is implemented, according to Commerce officials.
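The ECRA civil-penalty ceiling described above is a greater-of calculation; a minimal sketch (dollar amounts only; illustrative, not legal guidance):

```python
def ecra_civil_penalty_cap(transaction_value):
    """Maximum civil penalty under the ECRA: $300,000 or twice the
    value of the transaction underlying the violation, whichever is greater."""
    return max(300_000, 2 * transaction_value)

print(ecra_civil_penalty_cap(100_000))  # 300000 -- the floor applies
print(ecra_civil_penalty_cap(900_000))  # 1800000 -- twice the transaction value
```

For high-value transactions, this cap can far exceed State's inflation-adjusted AECA ceiling, which is consistent with Commerce officials' observation that the formula can substantially increase the monetary penalty for civil violations.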
Proposed Rules, If Finalized, Would Reduce State's and Increase Commerce's Licensing Volume, but Extent of the Resource Impact on These Agencies Is Unknown

According to State, the Proposed Transfer Would Impact Resources from State Fee Collections to an Uncertain Extent
State expects to lose revenue from registration fees if the proposed transfer of firearms, artillery, and ammunition to Commerce is implemented. State estimates in its proposed rule that the transfer would result in about 10,000 fewer license applications per year for Category I-III items—a reduction of about 26 percent from the 38,862 applications that State processed in fiscal year 2017. State estimates a recurring annual registration fee revenue loss of about $2.5 million, according to its proposed rule. State officials told us that, if the proposed rules become final, there would be additional revenue declines from a drop in the number of registrants that State cannot estimate. They explained that because many manufacturers and exporters would likely be involved in items controlled by State as well as Commerce, they would still need to register with State. Others involved only in items moving to Commerce would no longer have to register with State. For example, according to State officials, a manufacturer of both semi-automatic weapons that the proposed rules identify for transfer to the CCL and fully automatic weapons that would stay on the USML would still be required to register with State, if the proposed rules are finalized. State officials noted that the decline in the number of license applications resulting from previous transfers of items from the USML to the CCL has not produced a proportional decline in registration revenue. According to data provided by State, registration revenue has dropped less than 25 percent, from about $47 million in fiscal year 2013 to about $36 million in fiscal year 2017, while the number of export license applications has dropped more than 50 percent, from about 83,000 to almost 39,000.

With the decline in license workload that State expects would result if the proposed rules are finalized, State officials told us that four contractors currently responsible for reviewing licenses for firearms and ammunition in DDTC could be moved to other teams with vacancies in order to review licenses for other controlled items. On the other hand, State's Bureau of International Security and Nonproliferation (ISN), which has lead responsibility at State for reviewing Commerce licenses for items transferring from the USML to the CCL, expects to see an increase in its workload. An ISN official told us his bureau could potentially need an additional 2.5 full-time equivalent staff to review items transferred to the CCL as part of Commerce's interagency review process.

Commerce Officials Believe They Have Sufficient Resources to Handle an Increase in Workload Resulting from the Proposed Transfer
Commerce estimates in its proposed rule that it would gain 6,000 additional license applications from the proposed transfer—an increase of about 18 percent above the 34,142 license applications it reviewed in fiscal year 2017. Commerce officials told us that the increased license review workload will also create more work for some related activities. For example, Commerce expects investigative leads and export enforcement investigations to include more firearms-related actions. However, Commerce officials told us they have not estimated the magnitude of these changes.
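The percentage changes cited in this section follow directly from the reported totals; a quick check (a minimal sketch using the rounded figures above):

```python
def pct_change(old, new):
    return 100 * (new - old) / old

# State registration revenue and license volume, fiscal years 2013 to 2017.
print(round(pct_change(47_000_000, 36_000_000)))  # -23, i.e., less than 25 percent
print(round(pct_change(83_000, 39_000)))          # -53, i.e., more than 50 percent

# Estimated workload shifts under the proposed rules.
print(round(100 * 10_000 / 38_862))  # 26 -- State's estimated reduction
print(round(100 * 6_000 / 34_142))   # 18 -- Commerce's estimated increase
```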
Commerce officials told us they believe they have enough resources to absorb the increase in workload. They noted that they have flexibility to shift license review staff to meet demand created by the additional licenses, if necessary. In addition, BIS received an 18 percent increase in full-time equivalent staff positions, from 367 to 432, in fiscal year 2018. This increase was in response to workload demands created by previous transfers of items from the USML to the CCL, according to Commerce officials. Commerce officials told us that they will continue to assess workload data after the proposed transfer is implemented to determine whether they have adequate staff levels to meet increased workload demands.

Conclusions
If finalized, the proposed rules to transfer certain firearms, artillery, and ammunition from Categories I-III of the USML to the CCL would apply Commerce's export control system to these items instead of State's. However, critical information needed to effectively screen applicants and target licenses for end-use monitoring may be unavailable to Commerce unless State shares its watch list data. Further, because State has been responsible for export controls of firearms, artillery, and ammunition, its watch list is more likely than Commerce's to include derogatory information collected from past screening and end-use monitoring related to exports of these items, according to Commerce officials. While State and Commerce officials said that they have held discussions regarding how to share relevant information from their internal watch lists, as of February 2019, they had not reached any agreement on how to share watch lists if the proposed rules are finalized. Without such an agreement or process to share State's watch list, Commerce may lack critical information needed to ensure that items proposed for transfer are used as intended and by legitimate end-users.

Recommendations for Executive Action
We are making two recommendations: one to State and one to Commerce.

If responsibility for controlling the exports of certain firearms, artillery, and ammunition is transferred from State to Commerce, the Secretary of State should ensure that the Under Secretary of State for Arms Control and International Security Affairs develops a process for sharing State's internal watch list with Commerce to enhance oversight of these items. (Recommendation 1)

If responsibility for controlling the exports of certain firearms, artillery, and ammunition is transferred from State to Commerce, the Secretary of Commerce should ensure that the Under Secretary of Commerce for Industry and Security develops a process for receiving State's internal watch list and integrating it into Commerce's licensing review process to enhance oversight of these items. (Recommendation 2)

Agency Comments
We provided a draft of this report to State, Commerce, DOD, DHS, and DOJ for review and comment. In their written comments, reproduced in appendixes III and IV, State and Commerce agreed with our recommendations. Commerce provided some minor revisions to the recommendation, which we incorporated. DOD, DHS, and DOJ did not provide written comments. In addition, State, Commerce, and DOJ provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of State, Commerce, Defense, and Homeland Security; and the Attorney General of the United States.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology
Our objectives were to assess (1) the volume and value of commercial export license applications the Department of State (State) reviewed for firearms, artillery, and ammunition—Categories I-III of the U.S. Munitions List (USML)—in fiscal years 2013-2017, (2) how certain export controls differ between State and Commerce, and (3) what is known about the resource implications for State and Commerce due to the proposed transfer.

To assess the volume and value of export license applications for USML Category I-III firearms, artillery, and ammunition that State reviewed during fiscal years 2013 to 2017, we obtained data from the interagency export licensing database, USXPORTS. USXPORTS is the system of record for all munitions and dual-use export license applications and adjudications, and is maintained by the Defense Technology Security Administration, within the Department of Defense (DOD). The data in USXPORTS originate from private companies applying for export licenses which, in the case of munitions, State is responsible for adjudicating. The agencies use this database to review and adjudicate applications, and also to report back to the applicants. We interviewed officials from State's Directorate of Defense Trade Controls (DDTC) in State's Bureau of Political and Military Affairs to understand the data and identify any limitations on how we use them. We analyzed the data to describe the number and reported value of export license applications, the USML items in the applications, and the reported destination country, among other characteristics. We assessed these data and found them to be sufficiently reliable for the purpose of conducting these analyses, but recognized that approved applications may not necessarily result in actual exports. We also noted some minor data limitations in our report, such as the fact that amendments to export license applications are not associated with destination countries. We did not independently audit the underlying data submitted to DDTC by private companies.

To analyze how certain export controls differ between State and Commerce, we reviewed the departments' proposed rules, relevant laws and regulations, agency guidance, and annual reports related to State's and Commerce's export controls. We also interviewed officials from Commerce's Bureau of Industry and Security; DDTC; State's Bureau of Democracy, Human Rights and Labor; State's Bureau of International Security and Nonproliferation; Immigration and Customs Enforcement and U.S. Customs and Border Protection in the Department of Homeland Security; and the Defense Technology Security Administration. We sought to present differences between State's and Commerce's export controls that are potentially relevant for items proposed for transfer from the USML to the CCL, rather than every possible distinction between the two departments' export control systems. To describe the number of export license applications for firearms, artillery, and ammunition that required congressional notification, we reviewed the licensing data from the USXPORTS database.
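The descriptive analysis described above amounts to filtering license records to USML Categories I-III and aggregating counts and reported values by year and category. A minimal sketch of that kind of aggregation, assuming a flat CSV extract with hypothetical column names (not USXPORTS' actual schema):

```python
import pandas as pd

# Hypothetical flat extract of license application records; column
# names are illustrative, not the USXPORTS schema.
apps = pd.read_csv("usxports_extract.csv")

# Limit to firearms, artillery, and ammunition (USML Categories I-III).
cat_i_iii = apps[apps["usml_category"].isin(["I", "II", "III"])]

summary = (cat_i_iii
           .groupby(["fiscal_year", "usml_category"])
           .agg(applications=("application_id", "count"),
                reported_value_usd=("reported_value_usd", "sum")))
print(summary)

# Share of applications accounted for by the top 20 destination countries.
by_country = cat_i_iii["destination_country"].value_counts()
print(by_country.head(20).sum() / by_country.sum())
```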
To describe the end-use monitoring conducted on exports of firearms, artillery, and ammunition, we extracted data from State's Defense Trade Application database and interviewed agency officials to understand the data. We analyzed the data by the number of checks per year, the proportion of pre-license checks to post-shipment checks, the countries where the checks were conducted, and the outcome of the checks. We assessed these data and found them to be sufficiently reliable for these purposes. To assess what is known about the resource implications for State and Commerce due to the proposed transfer, we held discussions with State and Commerce officials and reviewed annual budget documents and other agency reports. To better understand State's estimated reduction of 10,000 license applications per year and Commerce's estimated gain of 6,000 licenses that would result from the proposed transfer of items from the USML to the CCL, we reviewed State's fiscal year 2013-2017 export license data and the proposed rules. We also discussed the estimates with agency officials. Commerce officials told us that their estimate was a broad approximation, based on State's estimate and on their knowledge and experience of differences between the two agencies' license requirements, which account for the difference between the two estimates. We were not able to independently assess the accuracy of either estimate because the license data we collected from State were not disaggregated to identify which items on license applications would be transferring to the CCL under the proposed rules and which would be staying on the USML. Each State license application can involve multiple items across multiple USML sub-categories. We also reviewed the number of full-time equivalent staff responsible for export control activities and State's annual revenue from registration fees paid by manufacturers, exporters, and brokers involved in items on the USML. We discussed State's registration data with agency officials, and while we assessed these data as sufficiently reliable for descriptive purposes, we also determined that these data could not be used to generate reliable estimates about the resource implications for the Department of State because there was no clear pattern in the relationship between applications, registrants, and revenue in the data provided. We conducted this performance audit from February 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: The U.S. Munitions List and the Commerce Control List Defense articles and services subject to export controls under the Department of State's jurisdiction are listed in the 21 categories of the United States Munitions List (USML). Table 3 shows the 21 USML categories and the dates of rule changes under export control reform that transferred certain items within these categories to the Commerce Control List (CCL). The CCL is divided into ten broad categories, and each category is further subdivided into five product groups (see table 4).
Appendix III: Comments from the Department of State Appendix IV: Comments from the Department of Commerce Appendix V: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Drew Lindsey (Assistant Director), Howard Cott (Analyst in Charge), Ashley Alley, Martin de Alteriis, Neil Doherty, Adam Peterson, and Aldo Salerno made significant contributions to this report.
Why GAO Did This Study The U.S. government implements an export control system to manage risks associated with exporting sensitive items while facilitating legitimate trade. State currently controls the export of most firearms, artillery, and ammunition. Regulatory changes proposed by State and Commerce would transfer responsibility for many of these items to Commerce, which implements export controls under different legal and regulatory authorities. The proposed changes are part of a larger export control reform effort, begun in 2010, to transfer control of less sensitive items from State to Commerce. GAO was asked to review the proposed changes to export controls of firearms, artillery, and ammunition. This report assesses (1) the volume and value of commercial export license applications State reviewed for these items in fiscal years 2013-2017, (2) how certain export controls differ between State and Commerce, and (3) what is known about the resource implications for State and Commerce due to the proposed transfer. GAO reviewed the proposed rules and related laws and regulations; analyzed data and documents related to licensing, end-use monitoring, and staff resources; and interviewed agency officials. What GAO Found The Department of State (State) reviewed approximately 69,000 commercial export license applications for firearms, artillery, and ammunition valued at up to $45.4 billion during fiscal years 2013 to 2017. About two-thirds of these applications were for firearms, and the majority involved the export of non-automatic and semi-automatic firearms, which are among the items proposed for transfer from State to the Department of Commerce (Commerce). GAO identified several differences in Commerce's and State's export controls, including those related to registration, licensing, end-use monitoring, and congressional notification, that, according to the agencies, would apply to firearms, artillery, and ammunition proposed for transfer. Some of these differences are due to varying requirements in applicable laws and regulations. For example, the law requires manufacturers, exporters, and brokers to register with State for items controlled by State but not for items controlled by Commerce. Additionally, while Commerce and State both screen parties to licenses against relevant watch lists, Commerce officials said they do not have direct access to State's internal watch list, which contains derogatory information from past screening of licenses for firearms, artillery, and ammunition exports. State and Commerce officials stated that, while they have held some discussions, they have not established a process for sharing watch list information. Without access to State's watch list, Commerce may lack critical information to effectively screen parties to exports of firearms and related items. State and Commerce also both have end-use monitoring programs to confirm the legitimacy of end-users, but some differences exist. For example, State relies on embassy staff to conduct end-use monitoring, whereas Commerce relies primarily on several officers positioned overseas specifically for this purpose. In addition, a statutory requirement to notify Congress of proposed firearms exports over $1 million would no longer apply to firearms that transfer from State to Commerce, according to Commerce officials.
According to the proposed rules and agency officials, the proposed transfer, if finalized, would result in a decline in licenses and revenues for State and an increase in licenses for Commerce, but the precise extent of these changes is unknown. State estimates that the transfer would result in a decline in revenue from registration fees, but officials stated it is difficult to predict the extent of this decline. Commerce officials stated that they expected their licensing and enforcement workload to increase as a result of the transfer, if finalized, but that they believe they have sufficient staff resources available to absorb the increase. What GAO Recommends GAO recommends that, if the proposed regulatory changes become final, State and Commerce develop a process for sharing State's internal watch list with Commerce to enhance oversight of firearms, artillery, and ammunition exports. State and Commerce agreed with GAO's recommendations.
Background Public Health Agency Roles in Infectious Disease Outbreaks and Response In the United States, HHS is the lead federal agency responsible for public health. Its responsibilities include preparing for, mitigating, responding to, and recovering from public health emergencies. Within HHS, ASPR and CDC prepare for and respond to infectious disease outbreaks. ASPR leads and coordinates national preparedness and response to outbreaks in the United States. It also coordinates and supports advanced research and development, manufacturing, and procurement and deployment of medical countermeasures, such as vaccines, drugs, therapies, and diagnostic tools that can be used in the event of a potential public health emergency to protect the public from harm. CDC monitors and responds to outbreaks by, among other things, studying the link between infection and health; monitoring and reporting cases of infection; and providing guidance to the public, travelers, and health care providers. During public health emergencies, CDC may operate an Emergency Operations Center (EOC) for monitoring and coordinating its response to emergencies—including infectious disease outbreaks of Ebola, Zika, and pandemic influenza—in the United States and abroad. The EOC staff helps with directing specific incident operations; acquiring, coordinating, and delivering resources to incident sites; and sharing incident information with the public. Other agencies perform additional work related to infectious diseases. For example, FDA monitors and protects the blood supply, and NIH makes grant awards that support research related to diseases and modeling. ASPR, CDC, and FDA have different approaches to modeling. In the cases of Zika, Ebola, and pandemic influenza, CDC and ASPR are two key agencies that conduct federal infectious disease modeling efforts. As of February 2020, ASPR had a centralized modeling unit staffed by about nine people, who are a mix of federal and contract employees, according to ASPR officials. At CDC, however, modeling is decentralized and integrated into the individual centers that make up the agency. Some staff work full time on modeling, while others spend part of their time on other tasks. In addition, some of CDC’s modeling efforts are conducted externally. According to CDC, approximately 70 staff members participated in modeling studies, as of October 2018. Of those staff, CDC’s Health Economics and Modeling Unit employed about 10 modelers who have worked on Ebola and other diseases. For Zika, CDC officials responding to Zika said most modeling work was done by one modeler in CDC’s Division of Vector-Borne Diseases, a part of the National Center for Emerging and Zoonotic Infectious Diseases. CDC influenza officials said influenza modeling is conducted by six or seven members of CDC’s Influenza Division. Agency infectious disease modeling activities are not limited to Ebola, Zika, or pandemic influenza. Agency efforts to protect the nation from disasters and emergencies can be organized into two elements: preparedness and response. Infectious disease modeling is one tool used to inform a wide range of decisions related to outbreak preparedness and in response to an outbreak. In the context of infectious disease outbreaks, ASPR and CDC perform work on preparedness and response. 
For example, ASPR leads the Public Health Emergency Medical Countermeasures Enterprise (PHEMCE), an interagency group that helps develop medical countermeasures—FDA-regulated products, including drugs or devices, that may be used in the event of a potential public health emergency to protect the public from harm. CDC may activate its EOC to assist with the response during an outbreak. For example, during the 2014-2016 West Africa Ebola outbreak, CDC activated its EOC in July 2014 to help coordinate activities. CDC personnel were deployed to West Africa to assist with response efforts, including surveillance, data management, and laboratory testing. Infectious Disease Outbreaks Since the 1980s, emerging infectious diseases have resulted in more recurrent disease outbreaks, causing an increasing number of human infections. Emerging infectious diseases have at least one of the following characteristics: they are newly recognized, have emerged in new areas, are newly affecting many more individuals, or have developed new attributes. Some of these diseases—including Ebola and Zika—are zoonotic pathogens, meaning they spread from animals to humans. Zoonotic pathogens can be carried from an animal to a human by another animal, such as a mosquito, chicken, or bat, which is known as a vector. Such pathogens sicken approximately 1 billion people annually. Ebola According to the World Health Organization, Ebola causes an acute, serious illness, which is often fatal if untreated. Ebola is introduced into human populations through close contact with the blood and other bodily fluids of infected animals. Humans spread Ebola through direct contact with the bodily fluids of infected individuals or objects contaminated with these fluids. Ebola symptoms include fever, muscle pain, vomiting, diarrhea, impaired kidney and liver functioning, and, in some cases, internal and external bleeding. There have been five Ebola outbreaks since 2014, including the 2014-2016 West Africa outbreak, which caused more than 28,600 cases and 11,325 deaths. Since 2018, there has been an ongoing outbreak in the Democratic Republic of the Congo. Figure 1 provides a timeline of Ebola outbreaks since 2014. Zika Zika is a virus that is primarily transmitted through mosquito bites. It can cause symptoms such as fever, rash, conjunctivitis (red eyes), and joint and muscle pain. It can also be transmitted from mother to child during pregnancy, or around the time of birth, or from person to person through sexual contact or blood transfusion. Many infected people do not have symptoms or experience only mild symptoms. The Zika outbreak that began in 2015 affected individuals infected with the virus in ways that had not been seen with previous outbreaks of the disease. Specifically, during the 2015-2016 outbreak, Zika infection in pregnant women was linked to microcephaly and other severe brain defects, according to CDC. CDC officials said this was the first time in more than 50 years that an infectious pathogen had been identified as the cause of birth defects. Zika was also linked to other problems, such as miscarriage, stillbirth, and Guillain-Barré syndrome, an uncommon disorder affecting the nervous system. In the Western Hemisphere, the first cases of locally-transmitted Zika were confirmed in Brazil in May 2015. In December 2015, locally-transmitted Zika was reported in Puerto Rico.
On January 22, 2016, CDC activated its Emergency Operations Center to respond to outbreaks of Zika occurring in the Americas and to increased reports of birth defects and Guillain-Barré syndrome in areas affected by Zika. Within the continental United States, the first locally-transmitted cases were confirmed in Florida in June 2016. The World Health Organization declared Zika a Public Health Emergency of International Concern from February to November 2016. Pandemic Influenza In the spring of 2009, a novel influenza virus emerged, known as influenza A (H1N1)pdm09. According to CDC, it was detected first in the United States and quickly spread across the world, causing a pandemic, or global outbreak, of a new influenza A virus. This new virus contained a combination of influenza genes not previously identified in animals or people. The virus was very different from other H1N1 viruses circulating at the time, so seasonal influenza vaccines offered little cross-protection against infection with the new H1N1 virus, according to CDC. A vaccine against the new virus was produced, but it was not available in large quantities until late November—after the peak of illnesses during the second wave in the United States. CDC activated its EOC on April 22, 2009, to manage the H1N1 response. From April 12, 2009, to April 10, 2010, CDC estimated there were about 60.8 million cases, 274,304 hospitalizations, and 12,469 deaths in the United States due to the new H1N1 virus. According to CDC, few young people had any existing immunity—as detected by antibody response—to the virus, but nearly one-third of people over 60 years old had antibodies against it, likely from exposure to an older H1N1 virus. Multiple strains of influenza can infect humans, including strains that originate in animals. According to CDC, human infections with an Asian lineage avian influenza A (H7N9) virus were first reported in China in March 2013. During an epidemic that lasted from October 1, 2016, through September 30, 2017, the World Health Organization reported 766 human infections with H7N9 virus, making it the largest H7N9 epidemic. From 2013 through December 7, 2017, the World Health Organization reported a total of 1,565 human infections with Asian lineage H7N9. According to CDC, while the risk posed by H7N9 virus to the public's health was low, the agency was concerned about its pandemic potential. Infectious Disease Models Agencies use infectious disease models to answer a variety of public health questions, including those related to outbreak preparedness and response. A model is a physical, mathematical, or logical representation of a system, phenomenon, or process that allows a researcher to investigate that system, phenomenon, or process in a controlled way. For example, the classic Susceptible-Infected-Recovered or "SIR" model divides a population into three categories: 1) susceptible to the disease, S; 2) infected and infectious, I; and 3) recovered or removed from the infected or susceptible population, R. This model uses equations to determine how many people move between these three categories. The equations contain parameters—numerical descriptors of the disease based, for example, on experiment, expert opinion, or statistics of an ongoing or past outbreak. The equations allow the researcher to estimate how many people are or could be affected by the disease.
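As a minimal illustration of the SIR structure described above, the sketch below integrates the standard SIR equations in Python. The population size, transmission rate, and recovery rate are hypothetical values chosen for illustration; they are not drawn from any CDC or ASPR model.

```python
# Minimal SIR sketch. All parameter values are hypothetical, for illustration only.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    # dS/dt = -beta*S*I/N; dI/dt = beta*S*I/N - gamma*I; dR/dt = gamma*I
    S, I, R = y
    N = S + I + R
    new_infections = beta * S * I / N
    recoveries = gamma * I
    return [-new_infections, new_infections - recoveries, recoveries]

N = 10_000                    # population size (hypothetical)
beta, gamma = 0.3, 0.1        # transmission and recovery rates (hypothetical)
t = np.linspace(0, 160, 161)  # days
S, I, R = odeint(sir, [N - 1.0, 1.0, 0.0], t, args=(beta, gamma)).T
print(f"Peak infections: {I.max():.0f} on day {t[I.argmax()]:.0f}")
```

Varying beta and gamma changes how quickly people move among the three categories, which is how a model of this kind can be used to explore intervention scenarios.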
For example, for past Ebola outbreaks, models estimated that after 40 days, about 44 percent of the population in close contact with infected individuals was susceptible to infection, 31 percent was infected, and 22 percent was recovered. Based on these parameters, equations for transfer between categories, and underlying demographics of the community, an epidemiologist could use the model to estimate how many people within a given town could be susceptible, infected, or removed from the categories of susceptible or infected (due to death or recovery and immunity). Based on model estimates and if a vaccine was available, CDC officials said the decision maker could plan for a specific number of vaccine kits and additional medical staff and supplies to treat infected patients. Models can also help agency officials anticipate future outbreaks, forecast the spread or severity of a disease, and predict the effects and costs of different intervention options. After an outbreak, models can help sort out what happened, what drove the outbreak, and how it compared to past outbreaks. Other tools are available to accomplish some of these tasks, but models are particularly useful when existing data are not sufficient to answer a given question, or when agencies need to integrate data from disparate sources. Infectious disease models can be put into two broad categories: Statistical models. This type of model identifies relationships or patterns that can be used to describe what is occurring or predict what may occur in the future based on what has occurred in the past. Statistical models tend to use a large amount of data, such as past observed events, to forecast future events, such as disease occurrence, but do not require a fundamental understanding of biological processes or human behavior. They can predict outcomes when causes are not known or understood and when scientific understanding of a disease is limited. Statistical models do not provide full explanations about an infectious disease but may be used when epidemiologists have all or most of the data needed to test a hypothesis. Several benefits can be derived from statistical modeling, including the ability to control for multiple factors that might affect the outcome reviewed and the ability to isolate the potential effect of infectious disease factors on a particular outcome. Mechanistic models. Mechanistic models rely heavily on scientific evidence and theory related to infectious diseases and on prior knowledge of disease dynamics or human behavior—such as biological processes or interactions between people—to represent known processes. They use basic infectious disease science to inform public health guidance and provide insights into outbreak emergence, spread, and control. For example, population-based models can simulate the course of an epidemic by dividing the population into different categories, such as susceptible, infected, and recovered. Mechanistic models can project the likely course of disease transmission, calculate and predict the effect of proposed interventions, and take into account variable conditions, such as human behavior. Both statistical and mechanistic models can range from simpler to more complex. A simpler model may, for example, have fewer parameters (inputs) or equations than a more complex model.
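For contrast with the mechanistic SIR sketch above, a statistical model in its simplest form fits a pattern to past observations without encoding any disease process. The following sketch, using hypothetical weekly case counts, fits a log-linear trend and extrapolates it three weeks ahead:

```python
# Minimal statistical-model sketch. The case counts are hypothetical.
import numpy as np

weeks = np.arange(8)
cases = np.array([12, 15, 22, 30, 41, 55, 76, 101])     # hypothetical surveillance counts
slope, intercept = np.polyfit(weeks, np.log(cases), 1)  # fit an exponential growth trend
future_weeks = np.arange(8, 11)
forecast = np.exp(intercept + slope * future_weeks)
print("Forecast for weeks 8-10:", np.round(forecast))
```

Unlike the SIR sketch, nothing here represents transmission or recovery; the forecast rests entirely on the assumption that the past trend continues.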
According to CDC modelers and an expert, a simpler model may be run with a variety of software, ranging from spreadsheets to more sophisticated statistical packages, whereas more complex models are usually run using sophisticated statistical or mathematical programming languages. As a model becomes more complex, it can become harder to describe, recreate, and understand its internal functioning. Modeling is identified as a beneficial tool in various national plans for disease response and biodefense. These plans do not define the extent to which modeling should occur or how models should be developed for policy, resource allocation, or planning purposes. See table 1 for examples of relevant national plans. HHS Has Used Infectious Disease Models to Help Inform Policy and Planning Use of Models to Inform Planning and Policy Decisions CDC and ASPR use models primarily to answer questions from decision makers. CDC and ASPR officials told us, and documents show, that modeling is one source of information that may inform such decisions, along with sources such as expert opinion, surveillance, other prior work on the disease, and an official's own knowledge. CDC modelers and officials said there is no "rule" as to when to use models, and in some situations modeling may not be considered useful. For example, CDC did not use modeling when issuing a travel notice for an Ebola outbreak in specific provinces in the Democratic Republic of the Congo, officials said. Instead, CDC based the travel notice on an analysis that considered disease incidence and prevalence, public health infrastructure, and the availability of therapeutics, among other things. Similarly, CDC officials responding to Ebola said modeling may be undesirable when it would take too long to engage the necessary external subject matter experts or when modeling would detract from responding to a disease. CDC and ASPR modelers use models for a variety of purposes. CDC officials said modeling is done differently for each disease, and the amount and type of modeling varies across CDC centers, in part because some centers have less capacity to conduct modeling than others. According to a CDC internal report, the most frequent uses of infectious disease modeling at CDC are: guiding preparedness and response efforts; conducting economic analyses to evaluate the benefits of public health actions, thereby reducing illness and deaths from infectious diseases; understanding pathogen biology, disease transmission, and estimating disease burden; and assessing the effect of interventions and prevention strategies. ASPR modelers and officials said models have provided information about topics such as: resources, including protective equipment, needed to help respond to an Ebola outbreak; the number of therapeutics and vaccine doses needed to respond to Ebola, both in Africa and domestically; expected U.S. demand for Zika diagnostics; and the number of vaccine doses needed to mitigate the spread of pandemic influenza. ASPR modelers and officials said modelers tend to serve in a broad role that can include modeling, data analysis, or other tasks. For example, officials said a modeler could provide a team with day-to-day analytic support and not necessarily spend time developing models or using them. Additionally, ASPR maintains a Visualization Hub that can be used for outbreak planning and response, including outbreaks of pandemic influenza and other emerging infectious diseases (see fig. 2).
CDC and ASPR modelers and officials said they generally initiate modeling in response to questions from decision makers. The modelers then work closely with epidemiologists and other subject matter experts to answer the questions. Modeling, according to CDC officials, may be used by individuals or groups within centers, such as division directors, branches, or teams, to influence decisions. Who answers a particular question depends, according to ASPR modelers and officials, on the decision maker. Sometimes questions asked will not be within their mission—modelers may suggest such questions be sent to a more relevant agency or part of HHS. CDC and ASPR have modeled to answer a variety of public health questions relevant to Ebola, Zika, and pandemic influenza, and, at times, the results helped inform policy and planning decisions. Modelers and officials provided the following examples: Planning: ASPR modelers and officials said the bulk of the agency's modeling is related to the planning, development, and deployment of medical countermeasures. For example, these modelers and officials said many clinical trials for vaccines and therapeutics were planned during the 2014-2016 Ebola outbreak response. As a part of these planning activities, ASPR modelers said they developed forecasts of future trajectories of disease incidence under a variety of conditions. These forecasts indicated a significant likelihood that disease incidence in Sierra Leone could decrease to a level that would substantially reduce the success of the trials, according to modelers. Additionally, at the beginning of the 2014-2016 Ebola outbreak response, CDC modelers received modeling questions related to the resources needed to effectively limit the spread of the disease, according to CDC documentation. CDC used models to predict the number of Ebola cases that could be expected over time with and without disease interventions such as Ebola treatment units, community care centers, and safe burials. On the basis of this information and other factors, including a United Nations document on Ebola needs, CDC leadership and other U.S. government officials recommended a rapid increase in Ebola response aid, according to CDC documentation. According to CDC documentation, later analyses demonstrated that this increase helped to greatly reduce the actual number of cases, compared to the likely number if prompt action had not been taken. Additionally, in response to the H7N9 influenza outbreak in 2017, ASPR modeled to determine when doses of influenza vaccine should be delivered and how many doses should be administered in order to mitigate a domestic outbreak. This model found that having a vaccine stockpile could be helpful in preventing disease and that a slow effort to administer an H7N9 vaccine could reduce the vaccine's usefulness. Policy: During the Zika outbreak, CDC modelers and officials said they modeled to determine the potential effectiveness of using pesticides to remove insects from aircraft, trains, or ships. According to modelers and agency officials, the issue arose as concern about Zika virus grew, including from other countries and U.S. agencies, like the Department of Transportation and Department of Defense. The model indicated that humans are more likely than insects to transport Zika on airplanes, and officials therefore concluded that the use of pesticides on airplanes would not be an effective intervention.
According to CDC modelers and officials, this modeling resulted in an additional sentence being added to World Health Organization policy, which stated that pesticide use was not expected to be effective. The extent of modeling conducted for Ebola, Zika, and pandemic influenza varied according to the question being asked, along with other factors as follows: Type of question: CDC and ASPR have used models to answer such questions as who should be prioritized for vaccination or treatment, how transmissible a disease is, and how effective certain interventions are likely to be, according to modelers and agency officials. For example, ASPR modelers and officials said they modeled to help estimate the resources needed to respond to an Ebola outbreak; the number of therapeutics and vaccine doses needed to respond to Ebola, both in Africa and the United States; and the expected U.S. demand for Zika diagnostics. One ASPR official said that, during the 2009 pandemic influenza outbreak, modeling was used to provide decision makers with information on what might happen in a given situation. For example, models were used to provide information related to decisions on early vaccine distribution and how this intervention could affect the potential mortality rate. Time to model: How soon decision makers needed information also influenced the extent to which CDC and ASPR modeled. For example, if decision makers needed an answer in a week, modelers would inform the decision makers about how much of the answer they could provide within that time frame, ASPR modelers said. Similarly, CDC modelers and officials said that, in one instance, modelers had only 12 hours to provide decision makers with information. Even estimating the time needed to develop and conduct modeling could represent an additional challenge, according to CDC modelers responding to Zika. According to a CDC article on modeling to inform responses to novel influenza viruses, the amount of time required to develop and execute a model can vary from less than a week to more than a month. Agency officials concurred with these time frames. Personnel and data availability: The availability of qualified personnel was also a factor that affected how much modeling agencies conducted for the selected diseases. For example, CDC modelers and officials said the agency's Division of Vector-Borne Diseases has focused its resources in other areas, such as building the capacity of states to address vector-borne diseases, and therefore had not invested in individuals with the right skill sets to conduct modeling for the Zika outbreak response. As a result, the division had to call on the three or four CDC modelers from outside of the division who were available to assist with the Zika outbreak response, which limited the amount of modeling that could be performed. Data challenges can also limit the types of modeling conducted. For example, when modeling for Zika, ASPR modelers said they used available information, but data quality and availability limited their ability to model. More data typically become available as an outbreak progresses, but models may be most helpful at the beginning of an outbreak when critical decisions need to be made (see fig. 3). CDC and ASPR do not keep a list of all modeling conducted, and we therefore cannot quantify the extent of their efforts in terms of a number of models. ASPR modelers and officials said modeling is typically one small aspect of the way the agency carries out its mission.
One ASPR official said models are never the sole source of information for decision-making. According to NIH officials, NIH does not conduct or fund internal modeling for decision-making purposes. NIH's Fogarty International Center has conducted self-initiated, internal modeling to answer questions generated from research, and from ideas from Center-held workshops. Two NIH institutes—the National Institute of General Medical Sciences and the National Institute of Allergy and Infectious Diseases—along with NIH's Fogarty International Center have awarded grants for external modeling research for our selected diseases. However, NIH officials said these efforts were intended to advance science, not for policy or outbreak response. Use of Models to Inform Resource Allocation Decisions CDC and ASPR modelers and officials said they considered modeling results to a limited extent when making decisions about resource allocation. While modeling can help determine the amount of particular resources needed during an infectious disease outbreak, CDC modelers and officials said it is not central to their resource allocation planning. For example, CDC modelers and officials noted that while a model could inform a decision maker about how many diagnostic testing supplies would be needed based on the range of predicted cases, this would be one input among many into the decision. Decision makers would also consider whether there are other diagnostic test supplies for similar diseases that could be used, the extent of laboratory testing capacity, or the longevity of those supplies. Models can be used to help plan for the cost of interventions by determining the numbers or types of interventions that can be used during a response to an infectious disease outbreak, according to CDC modelers and officials. Modeling can also help decision makers recognize gaps in their ability to implement resource allocation decisions, according to CDC officials. For example, CDC leadership described how modeling input requirements spurred analysis of the factors limiting hospitals' use of ventilators during a pandemic influenza outbreak. This work, according to CDC officials, helped determine the number of ventilators that should be included in the national stockpile. While modeling results are important to consider during a public health event, ASPR officials and modelers said it is also important to consider concrete financial estimates based on prior experience and whether recommended medical interventions or countermeasures are available or effective. For example, ASPR modelers and officials have occasionally been asked to analyze costs for medical countermeasures, but modelers and officials said that few medical countermeasures typically meet the requirements of decision makers, and existing medical countermeasures are typically unavailable for use in a response. ASPR modelers and officials noted that the usefulness of modeling to the decision maker in these instances is limited. If they were asked to model for such questions, ASPR modelers and officials said, time would also be a limiting factor in their analysis. CDC has also developed models to inform decision-making at the state level, specifically to assist state and local public health agencies in developing outbreak response plans.
A professional organization of epidemiologists we contacted expressed some concerns with limitations of CDC models, specifically noting that state and local officials viewed CDC models as lacking the level of refinement required for their state- and local-level planning. To follow up, we interviewed officials from a non-generalizable selection of five states based on their reported use of CDC models, the level of selected disease activity in the state, and geographic variation. Two of the five state health departments we contacted reported using one of CDC's models for Ebola, Zika, or pandemic influenza. These two states confirmed that the usefulness of the CDC FluSurge pandemic influenza model was limited by unrealistic assumptions or a lack of predictive capability, but added that the model was useful to them when considering how to allocate resources or otherwise prepare for a severe pandemic. Officials from one state health department told us they had similar concerns with the CDC Ebola model regarding an unrealistic overestimate of the potential cases, but added that it was useful for informing staff allocation planning as part of their overall response. Officials from another state health department told us they used CDC's Zika modeling results that indicated how many emergency room visits they could expect and what symptoms would indicate a Zika infection. At the time, state officials said, commercial testing for Zika was not available, so this modeling was very helpful to health officials looking to recommend whom hospitals should test based on the presence of Zika symptoms. State health department officials added that many other factors are considered when deciding on resource allocation, such as local leadership and willingness to embrace the public health response. Agencies Coordinate Infectious Disease Modeling Efforts but Do Not Fully Monitor, Evaluate, and Report on Coordination The four HHS agencies that work on infectious disease modeling reported using multiple mechanisms to coordinate their efforts. However, they do not routinely monitor these efforts, evaluate their effectiveness, or report on them to identify areas for improvement. HHS Agencies Coordinate Infectious Disease Modeling Efforts in Multiple Ways The four HHS agencies that work on infectious disease modeling—ASPR, CDC, FDA, and NIH—reported using multiple mechanisms to varying extents to coordinate such efforts. For example: Emergency Operations Center (EOC). During the response to an outbreak, CDC activates its EOC—a temporary, formal organizational structure for coordinating expertise within CDC and among agencies. The four HHS agencies—ASPR, CDC, FDA, and NIH—used EOCs to coordinate modeling efforts during responses to Ebola, Zika, and pandemic influenza outbreaks. For example, during the 2015-2016 Zika outbreak, CDC's EOC served as the command center for monitoring and coordinating the response by bringing together CDC scientists with expertise in areas such as arboviruses (the category that includes Zika), reproductive health, birth defects, and developmental disabilities. CDC modelers and officials told us that they had weekly strategy meetings and briefings with response leadership within the EOC, where they discussed which modeling questions to prioritize.
In general, CDC modelers in the EOC were expected to coordinate with modelers from other agencies within and outside of HHS—such as ASPR, FDA, NIH, and the Department of Homeland Security—to produce timely estimates of cases, hospitalizations, and deaths. These estimates can inform response leadership and enable them to assess the speed and impact of the geographic spread of the pandemic. Modelers in the EOC also provide support to decision makers as they examine the potential effects of various response options. These options include when and how to deploy Strategic National Stockpile assets, such as influenza antiviral drugs and mechanical ventilators. We found the use of EOCs to be consistent with leading collaboration practices we have previously identified, such as defining and articulating a common outcome. Public Health Emergency Medical Countermeasures Enterprise (PHEMCE). The four HHS agencies also participated in PHEMCE, a federal interagency body formed by HHS in 2006 that coordinates the development, acquisition, stockpiling, and recommendations for use of medical products that are needed to effectively respond to a variety of high-consequence public health emergencies. PHEMCE is led by ASPR and also includes partners at the Departments of Defense, Veterans Affairs, Homeland Security, and Agriculture. PHEMCE's 2017-2018 strategy and implementation plan, its most recent, identified Ebola, pandemic influenza, and emerging infectious diseases more broadly as high-priority threats. PHEMCE leadership could ask modelers to address questions related to these infectious diseases, according to ASPR modelers and officials. According to ASPR officials, such questions tend to support larger response-related efforts, and modeling results are often incorporated into final reports and products. According to ASPR officials, as of February 2020, the PHEMCE structure has been updated, and it is unclear how modeling fits into the new structure. We found that coordination through PHEMCE is consistent with leading collaboration practices such as establishing mutually-reinforcing or joint strategies. Working groups. Modelers with the four HHS agencies have participated in working groups related to infectious disease modeling (see table 2). The use of working groups and similar bodies is consistent with leading collaboration practices that we have previously reported as useful for enhancing and sustaining interagency collaboration, such as identifying and addressing needs by leveraging resources. For example, CDC and ASPR modelers participated in the National Science and Technology Council's Pandemic Prediction Forecasting Science and Technology Working Group, which facilitates coordination among numerous federal agencies. In 2016, this group produced a report that identified challenges in outbreak prediction and modeling for federal agencies and offered recommendations for federal actions to advance the development and effective application of outbreak prediction capabilities. Table 2 describes this and two other groups. First, the National Science and Technology Council's interagency working group is responsible for analyzing the state of infectious disease modeling and prediction and for facilitating coordination among numerous federal agencies. According to CDC modelers and officials, as of October 2018, the charter for this group is no longer active, and it meets on a voluntary, ad hoc basis. Second, an internal CDC group connects modelers by holding seminars, managing an email list, and arranging for members to peer review one another's models, according to CDC officials. This group had over 160 participants from various centers across CDC, as of June 2019. Third, during the 2014-2016 Ebola and 2015-2016 Zika outbreaks, the Department of Health and Human Services' (HHS) Office of the Assistant Secretary for Preparedness and Response (ASPR) established temporary modeling coordination groups that brought together government agencies and academics to share early modeling results and discuss pressing questions that could be answered through modeling, according to ASPR modelers and officials. A wide range of entities participated in these groups, including the four HHS agencies, other federal agencies such as the Departments of Defense and Homeland Security, universities, and foreign entities, such as the World Health Organization and the United Kingdom. According to ASPR modelers and officials, there are no plans to convene modeling coordination groups unless there is an ongoing infectious disease outbreak. Joint model development. ASPR and CDC modelers jointly developed some modeling products during outbreak responses. For example, during the 2014-2016 Ebola response, ASPR and CDC developed a model to estimate future numbers of Ebola patients needing treatment at any one time in the United States. According to a publication describing the model, policymakers have used it to evaluate responses to the risk for arrival of Ebola-infected travelers, and it can be used in future infectious disease outbreaks of international origin to plan for persons requiring treatment within the United States. Building these positive working relationships can help bridge organizational cultures by building trust and fostering communication, which facilitates collaboration and is vital in responding to emergencies. For example, in our 2011 report, we found that, through interagency planning efforts, federal officials built relationships that helped facilitate the federal response to the H1N1 influenza pandemic. Similarly, HHS officials said that federal coordination during the H1N1 pandemic was much easier because of these formal networks and informal relationships built during pandemic planning activities and exercises. Memoranda of understanding. The four HHS agencies have entered into various agreements through memoranda of understanding in order to define their relationships for coordinating infectious disease modeling (see table 3). Generally these memoranda were between individual agencies rather than department-wide. We found that the use of memoranda of understanding was consistent with leading collaboration practices, such as agreeing on roles and responsibilities. Our prior work found that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively. Similarly, CDC modelers and officials said that written agreements can reduce the possibility of misunderstandings or disagreements and help ensure that participants have a mutual understanding of collaboration goals. For example, in the absence of such written agreements, the potential for duplication is increased because agencies could be working on similar types of models without one another's knowledge. Table 3 presents selected examples of memoranda of understanding for coordinating on infectious disease modeling, organized by the collaborating agencies.
ASPR and CDC: From 2013 to 2018, CDC and ASPR had a memorandum of understanding to promote collaboration, provide expertise, and facilitate data and information exchange related to infectious disease modeling. This agreement expired in 2018. ASPR modelers and officials told us that, as of August 2019, it had not been updated, and there were no plans to do so. Despite this, according to CDC modelers and officials, the substance of the agreement is still being followed. CDC modelers and officials told us they continue to collaborate with ASPR modelers on the development of models that address questions of mutual interest. For example, for the ongoing Ebola response, CDC modelers and officials said they have kept ASPR informed on modeling efforts, and ASPR shares data on vaccine production that is included in one of the models.

ASPR and the Food and Drug Administration (FDA): ASPR and FDA have a memorandum of understanding to promote collaboration and enhance knowledge and efficiency by providing for the sharing of information and expertise. This memorandum was in place from 2012 to 2017 and was then renewed in 2019. It remains valid unless modified by consent of both parties or terminated by either party immediately upon written notice in the event that a federal statute is enacted or a regulation is issued by a federal partner that materially affects the memorandum. According to FDA modelers and officials, the agreement facilitates collaboration related to FDA's Medical Countermeasure Initiative and FDA's role in supporting the HHS-led Public Health Emergency Medical Countermeasures Enterprise (PHEMCE). FDA modelers and officials told us that the agreement supports the frequent, ongoing collaborations between FDA and ASPR, including collaboration related to preparedness for emerging infectious diseases. However, FDA modelers and officials said, while no specific steps have been taken with regard to collaborating on infectious disease modeling under the agreement, modeling assistance could be provided in the future, if needed.

ASPR and NIH: From 2013 to 2018, ASPR had a memorandum of understanding with NIH's Models of Infectious Disease Agent Study program to (1) enable Models of Infectious Disease Agent Study program researchers to work with ASPR as part of public health preparedness and response activities, (2) share data and information, and (3) support model development and use in the HHS modeling hub. This agreement has expired. ASPR modelers and officials told us that, as of August 2019, it has not been updated, and there were no plans to do so.

CDC and NIH: Since 2015, CDC has had a memorandum of understanding with NIH's Models of Infectious Disease Agent Study program to promote collaboration and facilitate the exchange of data, tools (models), methods, and information. It was set to expire in February 2020.

ASPR and the Departments of Defense and Homeland Security: From 2013 to 2018, ASPR had separate memoranda of understanding with the Departments of Defense and Homeland Security to promote collaboration, provide expertise, and facilitate data and information exchange.
The goals of the collaboration in both agreements were to explore ways to, among other things: share analytical approaches and efforts, such as modeling and simulation tools, in support of public health preparedness and response activities; provide personnel as needed to facilitate analytical efforts; and share data and information. These goals were similar to those laid out in the agreement between CDC and ASPR. These agreements expired in 2018. ASPR modelers and officials told us that, as of October 2019, they have not been updated, and there were no plans to do so. Forecasting competitions. CDC and NIH have sponsored formal forecasting competitions to improve modeling for Ebola, Zika, and seasonal influenza. According to a report from the National Science and Technology Council, controlled, multi-center modeling contests and projects generate valuable insights. For example, they often show that simpler models perform as well as more complex models and that ensemble models, which combine the results of multiple models to predict an outcome, perform better than an individual model. Such competitions are consistent with a leading collaboration practice we previously reported: identifying and addressing needs by leveraging resources. In this case, such leveraging allowed CDC and NIH to obtain additional benefits and insights on models that may not otherwise be available. These modeling competitions can therefore help the HHS agencies better prepare for future outbreaks through coordination with participants. The following are examples of forecasting competitions sponsored by CDC or NIH: Ebola competition. NIH's Fogarty International Center held an Ebola forecasting competition from August to December 2015, related to the 2014-2016 West African Ebola outbreak, to compare the accuracy of predictions from different Ebola models, among other things. According to NIH modelers and officials, lessons learned from the challenge were that (1) with regard to short-term incidence predictions, ensemble estimates were more consistently accurate than predictions by any individual participating model; (2) as expected, more accurate and granular epidemiological data improved forecasting accuracy; (3) the availability of contextual information, including patient-level data and situational reports, is important for accurate predictions; (4) the accuracy of forecasting was not positively associated with more complex models; and (5) coordination of modeling teams and comparison of different models is important to ensure robustness of predictions. According to NIH officials, based on these lessons and in response to the most recent Ebola outbreak, NIH has established a coordination group to share information about modeling and data sharing for this particular outbreak, and a formal model comparison, led by the World Health Organization, is underway. Aedes (Zika) competition. In 2019, CDC hosted a forecasting competition related to using models to predict the presence of Aedes mosquitoes, a vector for the Zika virus. Evaluating these models can, according to CDC, help clarify model accuracy and utility, the seasonal and geographical dynamics of these mosquitoes, and key directions for future research. According to CDC documentation, these advances can contribute to improved preparedness for arboviral invasion in the United States and in other regions where Aedes suitability may be limited and changing.
CDC plans to evaluate forecasts for this competition in early 2020, as soon as final surveillance data for 2019 are available. FluSight (seasonal influenza) competition. CDC holds an annual seasonal influenza forecasting competition—known as FluSight—to engage external researchers in improving the science and usability of seasonal influenza forecasts. The results of the competition are evaluated by the CDC Influenza Division, which works with state and local partners to determine whether the results are useful to them and if there are other metrics, milestones, or targets that would be more helpful in making public health decisions. According to CDC officials in February 2020, the results from the FluSight competition are not directly incorporated into pandemic influenza forecasting because the most accurate seasonal influenza forecasts would not necessarily be the most accurate pandemic influenza forecasts. According to these officials, the overall lessons learned from the FluSight competition relate to how to quantify, visualize, and communicate model results and model accuracy, as well as the value of forecast ensembles to summarize multiple models. CDC officials said these lessons are incorporated into pandemic influenza forecasting plans. Coordination with academic and other modelers. CDC coordinated infectious disease modeling efforts with academic and other modelers through various means, including the following: Intergovernmental Personnel Act agreements. CDC has used agreements under the Intergovernmental Personnel Act of 1970 to collaborate with external experts on modeling efforts. For example, CDC's Division of Vector-Borne Diseases had an agreement from 2014 to 2017 to assign a CDC official to the Harvard T.H. Chan School of Public Health. The agreement was to help CDC integrate with a larger modeling community and provide the Harvard School of Public Health with expertise in arboviral diseases and applied public health. Vector-Borne Disease Centers of Excellence. CDC has funded the Vector-Borne Disease Centers of Excellence, which are engaged in modeling-specific projects. In 2017, CDC established five universities as regional centers of excellence to help prevent and rapidly respond to emerging vector-borne diseases across the United States. According to CDC, the goals of the centers are to build effective collaboration between academic communities and public health organizations at federal, state, and local levels for surveillance, prevention, and response, among other things. Support for other governmental entities. CDC has coordinated with other entities—such as state and local officials—to provide modeling tools, estimates of case counts, or effects of interventions during the Ebola, Zika, and pandemic influenza outbreaks. For example, CDC developed pandemic influenza models for state and local health departments to use in influenza pandemic planning activities. The tools are available on the CDC pandemic influenza website and from ASPR's emergency preparedness information portal. As previously discussed, officials from two of the states we spoke with said they generally were unaware of the availability of the models. According to CDC modelers and officials, these models were developed in the mid-2000s for pandemic influenza planning and remain useful but have not been a priority to update because CDC has not received a request to do so. Informal collaboration.
CDC has engaged in a range of informal collaborations related to infectious disease modeling. According to CDC modelers and officials, modelers often develop relationships through conferences or other contacts. For example, CDC modelers and officials said they informally collaborated on Ebola modeling needs with academic institutions, as well as with modelers and analysts in the World Health Organization and other U.S. government agencies, such as the Federal Emergency Management Agency. In one case, CDC modelers and officials told us that model estimates produced in collaboration with academics helped inform decisions about how many beds should be ordered and delivered on the ground in West Africa during the 2014-2016 Ebola outbreak. Similar to the forecasting competitions described above, such informal coordination mechanisms are consistent with the best practice of identifying and addressing needs by leveraging resources, thus obtaining additional benefits that may not be available if agencies were working separately. For example, we have previously reported that informal collaboration mechanisms—such as building relationships between key personnel and soliciting input for research projects—can provide the opportunity to leverage expertise. HHS Agencies Do Not Fully Monitor, Evaluate, and Report on Coordination Efforts CDC and ASPR modelers and officials did not routinely monitor, evaluate, and report on coordination efforts for infectious disease modeling. While CDC did conduct after-action reviews for Ebola and Zika, which included a review of modeling efforts, such reviews are not routine outside of a response and do not examine modeling coordination between agencies. ASPR modelers and officials told us they saw no reason to monitor coordination efforts under the memorandum of understanding with CDC because such memoranda outline expectations rather than requirements. However, we have found that agencies that create a means to monitor, evaluate, and report the results of collaborative efforts can better identify areas for improvement. We have previously reported that progress reviews or after-action reviews can be useful mechanisms for monitoring, evaluating, and reporting on collaborative efforts. For example, we previously reported that, to monitor, evaluate, and report on the status of achieving the Healthy People 2010 objectives, HHS held progress reviews in which the federal agencies with lead responsibilities for a focus area reported on the progress toward achieving the objectives. During these reviews, the participating agencies discussed the data trends, barriers to achieving the objectives, strategies undertaken to overcome barriers, and alternative approaches to attain further progress. By holding similar progress reviews in which CDC and ASPR evaluate and report on coordination efforts for infectious disease modeling, these agencies could be better positioned to identify and address challenges before infectious disease outbreaks occur, which could lead to improved responses. Further, there is the potential for overlap and duplication of modeling efforts across agencies, which may not be identified if coordination efforts are not effectively monitored, and which could lead to inefficiencies. The memorandum of understanding between CDC and ASPR expired in 2018. Agency officials told us they had no plans to review or update the agreement.
According to ASPR modelers and officials, the agreement has not been updated because it was not a priority and the substance of the expired agreement is still being followed. However, without an active agreement in place that clearly defines the goals of the collaborative effort and the roles and responsibilities of participants, misunderstanding and disagreement become more likely, particularly as agencies' priorities evolve over time. Our prior work on leading collaboration practices found that agencies that articulate their agreements in formal documents can strengthen their commitments to working collaboratively, and that such agreements are most effective when they are regularly reviewed and updated.

Further, we found that the memorandum of understanding between ASPR and CDC was not fully implemented when it was active. For example, according to this agreement, CDC was to appoint a designee to participate in a steering committee related to modeling within HHS. However, ASPR modelers and officials told us that this steering committee was never formed because of changing leadership and priorities. They told us that HHS does not have any intention of forming such a steering committee in the future. However, our past work shows that creating a steering committee or other similar coordination mechanism could help facilitate monitoring of coordination efforts.

We similarly found that other memoranda of understanding related to infectious disease modeling were not fully implemented. For example, although ASPR had a 2013-2018 memorandum of understanding with NIH's Models of Infectious Disease Agent Study program, ASPR modelers and officials said they rarely use models funded by NIH, including those funded through the program. In particular, ASPR modelers and officials recalled using only one such model in recent years. That model, known as "FluTE," is an influenza model that was used as part of a larger study on vaccine availability. However, ASPR modelers faced challenges in using this model. Specifically, these ASPR modelers and officials said the FluTE model initially was not compatible with ASPR's computer system, so software engineers had to modify the source code to resolve the compatibility issue. The model also did not have documentation describing its parameters, according to ASPR modelers and officials, so they had to read through the model's source code to understand them. Similarly, regarding a separate agreement between ASPR and FDA, FDA modelers and officials said that, while there is ongoing information sharing, no specific steps have been taken with regard to collaborating on infectious disease modeling under the agreement. However, these modelers and agency officials said that modeling assistance could be provided in the future, if needed.

CDC and ASPR Generally Followed Identified Practices for Infectious Disease Modeling, but CDC Has Not Fully Ensured Model Reproducibility

We identified four elements of practices for developing and assessing models: (1) communication between decision maker and modeler, (2) description of the model, (3) verification, and (4) validation. We determined that CDC and ASPR generally followed these GAO-identified practices for the 10 models we reviewed. However, for four of the 10 models, CDC modelers did not provide all of the details needed in the verification steps to reproduce their model results, which is inconsistent with HHS guidelines on transparency and reproducibility.
CDC and ASPR Generally Followed Identified Modeling Practices but Did Not Always Fully Assess Model Performance

According to our interviews with agency modelers and experts, along with our review of selected literature, there are no documented standards that prescribe the steps agencies must or should follow when developing and assessing models. However, based on our interviews and review, we identified four broad elements of the modeling process that modelers generally consider. They are: (1) communication between modelers and officials to refine the questions to be addressed by the model, such as the geographic spread of the disease and total cases of the disease; (2) description of the model, including detailed descriptions of the assumptions and data sources used; (3) verification; and (4) validation. Figure 4 outlines the model development and assessment process.

Based on our assessment of 10 selected models, we found that CDC and ASPR generally took steps that corresponded to our four elements, and agency modelers generally agreed with our assessment of each model. See table 4 for more information on the elements. See appendix III for a list of the models we reviewed and a complete list of the steps we identified that make up each element.

Communication between modeler and decision maker. In all 10 agency models we reviewed, we found that agencies took all the steps we identified for communication between decision maker and modeler. In some cases, these steps were formalized, while in others they were informal. For example, CDC modelers responding to Ebola ensured communication with decision makers by following a memo template they developed, which has a section requiring modelers to communicate key aspects of their model. These modelers noted, however, that they would not follow all the steps in their memo template for models developed during an outbreak because of time constraints. CDC modelers responding to pandemic influenza noted they do not have formal best practices for communicating key model aspects to decision makers, and a CDC modeler responding to Zika highlighted the role of CDC's Emergency Operations Center (EOC), which is activated only during a response, in communication between decision makers and modelers. ASPR modelers noted that, as a best practice, they hold a discussion for all new models in which decision makers describe what they are looking for and modelers describe what they can provide.

Description of the model. In nine of the 10 models we reviewed, modelers took all the steps we identified for describing their model type, inputs, outputs, assumptions, and limitations. In one case, ASPR's "flumodels" package, the agency did not carry out the step of describing the model's limitations. ASPR modelers told us they did not do so because they expected the model's intended users—primarily federal public health modeling experts—would understand the limitations of their model, an assumption we find reasonable.

Verification. In six of the 10 models reviewed, we found agency modelers followed most of the steps we identified for model verification. However, in four of the seven CDC models reviewed, CDC did not publish the model's code, a component of model reproducibility and a model verification step. We examine CDC's policy and efforts on reproducibility in more detail below.

Validation. For four of the 10 models we reviewed, agencies performed few validation steps. In all three CDC pandemic influenza models we reviewed, as well as the ASPR Zika model, sensitivity analysis was the only validation step performed.
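To make the sensitivity analysis step concrete, the following is a minimal sketch in Python. It illustrates the general technique only and is not any agency's actual model: the population size, baseline reproduction number, and the plus-or-minus 20 percent perturbation range are all hypothetical values chosen for this example.

```python
# Minimal sketch of one-way sensitivity analysis on a simple SIR model.
# All parameter values are hypothetical and chosen only for illustration.

def sir_total_cases(r0, population=1_000_000, infectious_days=5.0,
                    initial_infected=10, days=365, dt=0.1):
    """Integrate a deterministic SIR model and return the final outbreak size."""
    gamma = 1.0 / infectious_days          # recovery rate
    beta = r0 * gamma                      # transmission rate implied by R0
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / population * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return population - s                  # everyone who was ever infected

baseline_r0 = 1.8
baseline = sir_total_cases(baseline_r0)
# Perturb the uncertain input and observe how much the output moves.
for label, r0 in [("-20%", baseline_r0 * 0.8), ("baseline", baseline_r0),
                  ("+20%", baseline_r0 * 1.2)]:
    total = sir_total_cases(r0)
    print(f"R0 {label:>8} ({r0:.2f}): projected total cases = {total:,.0f} "
          f"({(total - baseline) / baseline:+.0%} vs. baseline)")
```

Running the sketch shows that the projected outbreak size moves substantially when a single uncertain input is varied, which is the kind of signal modelers use to decide which inputs must be estimated most carefully and which uncertainties should be communicated alongside the results.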
CDC influenza modelers said they did not perform other validation steps because of a lack of comparable external models or applicable data that could be used for other types of model validation. For example, they said they could not validate their models using real-world data because they made projections for scenarios that did not come to pass (e.g., an unmitigated pandemic influenza outbreak). They said they have continued to look for comparable models that could be used to cross-validate their model estimates. ASPR modelers responding to the Zika outbreak also did not have access to comparable external models or applicable data to confirm their model projections, but they have since attempted to validate their model. For the other six models we reviewed, agencies carried out most but not all validation steps. For example, CDC modelers responding to Zika also said they did not perform cross-validation (comparison of different model results to each other) for their Zika model because of a lack of comparable models. However, these ASPR and CDC Zika modelers said they have attempted to validate their models since publication as new data emerge, and we found that this occurred.

Assessing Model Validity

Assessing model validity means determining whether a model is sufficiently accurate for its purpose. Several methods are available, including the following: Modelers can compare the results of the model against real-world data the model was designed to predict. If there are no such data, another method is to determine how much the model projections change in response to changes in input data; this is known as model sensitivity analysis. Modelers can also withhold a part of the available data when building the model and then confirm that the model can reproduce the withheld data. Another method that does not require real-world data is to run the model alongside a separate, independent model using the same input data and compare the outputs.

CDC modelers and ASPR modelers responding to Zika followed identified practices and validated their model projections for the Zika outbreak, although their efforts yielded mixed results for model performance. CDC modelers responding to Zika attempted to estimate whether there was an enhanced risk of microcephaly in infants born to expectant mothers infected with Zika. Using data available during the initial stage of the outbreak, they calculated the enhanced risk to be between 0.88 and 13.2 percent if the mother was infected in the first trimester. In two subsequent studies using later data on the actual incidence of microcephaly as a result of the outbreak, other researchers found the enhanced risk was within the bounds of CDC modelers' earlier projections: a 10 percent enhanced risk in one study and an 8.3 percent enhanced risk in the other. In the second case, ASPR modelers attempted to estimate potential new cases of Guillain-Barré syndrome, a rare disorder in which the body's immune system attacks part of its own nervous system, in places burdened by Zika infection. Their initial projections were that there would be between 191 and 305 new cases in Puerto Rico, a three- to five-fold increase above the number normally expected. ASPR modelers later attempted to verify these results themselves and found that the incidence did increase, but only two-fold, to 123 new cases.
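The withheld-data method described in the sidebar above can be sketched in a few lines of code. The following is a minimal illustration, not any agency's model: it fits a simple exponential-growth model to the first six weeks of a hypothetical outbreak and then checks the model's projections against four later weeks that were withheld from the fit. All case counts are synthetic numbers invented for this example.

```python
import math

# Hypothetical weekly case counts; the last four weeks are withheld from fitting.
weekly_cases = [12, 18, 26, 41, 60, 92, 120, 145, 160, 170]
train, held_out = weekly_cases[:6], weekly_cases[6:]

# Fit log-linear (exponential) growth to the training weeks by least squares.
n = len(train)
xs = list(range(n))
ys = [math.log(c) for c in train]
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

# Project into the withheld weeks and compare projections with observations.
for week, observed in enumerate(held_out, start=n):
    predicted = math.exp(intercept + slope * week)
    print(f"week {week}: predicted {predicted:6.0f}, observed {observed:3d}, "
          f"error {(predicted - observed) / observed:+.0%}")
```

In this synthetic example, the projection is close in the first withheld week but overshoots increasingly thereafter, because the hypothetical outbreak slows while the fitted model keeps growing exponentially. That pattern of deteriorating longer-range accuracy mirrors the forecast-horizon limits modelers reported during the Ebola response, discussed below.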
Model validity can also be assessed through independent performance evaluations. For example, agencies sometimes host modeling competitions, in which independent modelers compare the predictive performance of multiple models under controlled conditions using standardized data. The National Institutes of Health hosted an Ebola forecasting competition in 2015, and the Centers for Disease Control and Prevention (CDC) launched its FluSight competition in 2013.

The Challenge of Modeling During an Outbreak

Early in the 2014-2016 Ebola outbreak, Centers for Disease Control and Prevention (CDC) officials faced the challenge of answering questions with limited data and time. In order to estimate the potential number of future cases and to aid in planning for additional disease-control efforts, CDC developed EbolaResponse, an Excel spreadsheet-based model that could forecast how interventions would affect the outbreak. Using EbolaResponse, CDC predicted in early September 2014 that 1.4 million cases of Ebola could occur in Liberia and Sierra Leone by January 2015 if the world health community did not increase interventions. These estimates included a correction factor intended to account for the underreporting of cases, which, according to officials, was also meant to represent model uncertainty. Partly because of these estimates of rapidly increasing cases, CDC and others increased interventions by sending more treatment units, personnel, and medical supplies in late 2014. EbolaResponse was created to model the effects of intervention, and it later turned out to be unreliable for the 4-month forecast that CDC used to support its request for increased intervention. Independent analysis found that the model could forecast cases well up to a month ahead but could not provide any measure of uncertainty. Furthermore, the model was unable to make accurate forecasts much beyond 3 months, a limitation that was common among the models used during the outbreak. CDC later reported that roughly 8,500 cases, or 34 percent of the corrected EbolaResponse prediction of 25,000 cases, occurred in Liberia by the end of January 2015.

We also found that CDC and ASPR modeling approaches varied somewhat, while generally remaining within the bounds of our identified practices. For example, all the agency modeling groups reviewed their model assumptions, but they varied in whether this review was formal or informal and internal or external. CDC modelers responding to Ebola use a formal internal peer review process during non-outbreak periods, as well as a detailed checklist to ensure communication with decision makers, full consideration of model inputs and outputs, quantification of model uncertainty, and validation of the model. By contrast, CDC modelers responding to Zika told us they do not have a formal system for evaluating their models and instead rely on their own review of model assumptions. ASPR and CDC pandemic influenza modelers told us their modeling approach also relied on peer review, but the review was done by external experts, informally for ASPR and formally for CDC pandemic influenza modelers.

There are several reasons agency modeling approaches can vary. According to agency modelers, modeling practices can be influenced by the availability of time, data, and comparable models. For example, CDC pandemic influenza modelers and officials said they follow a shortened process when facing time constraints by documenting model development in a journal publication after the model has already been put to use.
Similarly, CDC modelers responding to Ebola noted that, during a response, a lack of time may mean models are not reviewed through CDC's formal clearance process; instead, a more informal review of model results may occur.

CDC and ASPR modelers also described variation in the complexity of the models they use. They said they sometimes use both simple and complex models for the same disease and during the same outbreak. CDC modelers and officials responding to Ebola said that they preferred models run in spreadsheet programs for their transparency and communicability, whereas CDC influenza modelers mostly use dedicated statistical software programs to run models and use spreadsheets for communicating with state and local health departments. ASPR modelers develop more complex prediction models so that the models can be reused to answer more than one question, as opposed to models run in spreadsheet programs, which are designed to answer one question. Experts and agency modelers generally agreed that infectious disease models should not be more complex than is necessary to answer the questions they were developed to address. A simpler model may be run on a variety of software programs, ranging from spreadsheet programs to specialized programming languages that can do statistical analysis. One downside of models run in spreadsheet programs, according to CDC influenza modelers, is that it is harder to conduct quality control measures. Two experts we spoke to, along with CDC Zika modelers, also expressed concerns about the reliability and reproducibility of models run in spreadsheet programs.

CDC Has Not Fully Implemented a Policy to Ensure Model Reproducibility

Since 2002, HHS agencies responsible for disseminating influential scientific, financial, or statistical information have been required to ensure the methods used to develop this information are "reproducible." A 2019 report from the National Academies of Sciences, Engineering, and Medicine noted that the scientific enterprise depends on the ability of the scientific community to scrutinize scientific claims and to gain confidence over time in results and inferences that have stood up to repeated testing. As part of this process of scrutiny, a study's data and code should be made available so that the study is reproducible by others. The National Academies report defines reproducibility as obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis. Reproducibility is specifically addressed earlier in this section in our discussion of model verification, a step that requires making code available for independent review.

HHS requires its component agencies either to follow HHS department guidelines on reproducibility or to ensure their own guidelines include a high degree of transparency about the data and methods used to generate scientific information. HHS guidelines require that, in a scientific context, agencies identify the supporting data and models for their published scientific information and provide sufficient transparency about data and methods that an independent reanalysis could be undertaken by a qualified member of the public. When asked whether CDC has specific policies related to reproducibility that would have applied to the provision of model code in its published scientific research, CDC referred to its guidelines developed in response to the 2002 HHS guidelines. However, CDC guidelines do not contain any reference to reproducibility, models, or provision of model code.
CDC guidelines for review of scientific information provided to the public focus on completeness, accuracy and timeliness, data management and analysis, clarity and accuracy of presentation, and validity of interpretation of findings. CDC's policy on public health research and non-research data management and access does not make any reference to reproducibility or model code. This lack of reference to reproducibility in CDC's guidelines and policies is not in accordance with HHS guidelines.

Our review found four instances in which CDC modelers did not provide model code when they published their models. CDC modelers said that, in some instances, issues with publication formats made the code difficult to share; in others, they did not have time to produce a user-friendly version of the code or said they would share the code upon request. By contrast, ASPR modelers provided code for every model within our review when they published their models. While neither agency cited a specific HHS policy that required them to share model code, ASPR modelers noted that their internal peer review process typically includes sharing model source code with other modelers within PHEMCE. In our review of HHS guidelines and agency-specific guidance implementing those guidelines, we found that, of the three published agency guidance documents, two require reproducibility or transparency for the methods used in the reports the agencies issue to the public. Of these agencies, CDC was the only one that did not explicitly require transparency or reproducibility.

The National Academies report noted that researchers have to be able to understand others' research in order to build on it. This report also notes that the ability of qualified third parties to reproduce a model using published code is important because it can reveal mistakes in model code, which can lead to serious errors in interpretation and reported results. If researchers do not share an important aspect of their study, such as their model code, it is difficult to confirm the results of their research and ultimately produce new knowledge. One agency official acknowledged the importance of releasing model code, noting that HHS could benefit by ensuring that policies across the agency are consistent regarding reproducibility and transparency in modeling. By not specifically addressing reproducibility in its policy on dissemination of scientific information, CDC risks undermining the reliability of the scientific information it disseminates to the public.

Modelers Faced Several Challenges and Have Worked to Address Them

Based on our review of documents and reports from agencies, as well as expert and agency interviews, we identified three categories of challenges that CDC modelers and officials and ASPR modelers faced when modeling for Ebola, Zika, and pandemic influenza, along with steps they took to address the challenges. The categories are data, resources, and communicating results.

Data Challenges

According to a 2016 report from the National Science and Technology Council (NSTC), obtaining timely and accurate data and information has long been a major challenge to an effective response during an infectious disease outbreak. One expert described reliable data as a modeler's most limited resource. Until data of sufficient quality and quantity are available and usable, the predictive value of models will be limited. Agency modelers and officials provided examples of data-related challenges, which we categorize as follows:
Data access. Public health data, according to one expert, often have access restrictions. For example, ASPR modelers said their ability to access data during the 2014-2016 Ebola outbreak was reduced by a need to enter into agreements with data-owning countries in order to obtain patient data. Modelers said there were agreements between CDC and data owners, but further agreements would have been required for ASPR to obtain data because the existing agreements did not authorize CDC to share data with its partners. The lack of data-sharing agreements during the 2014-2016 Ebola outbreak response also led to modeling projects being delayed, according to a CDC publication. ASPR modelers said their inability to obtain data without a data-sharing agreement made it challenging for them to develop a current, reliable estimate of Ebola incidence before modelers could start creating future estimates of disease incidence. They said that, as a result, they instead developed a statistical model, which provided less reliable estimates of future numbers of disease cases than they would have preferred. Modelers said they worked to address this challenge by obtaining data and indirect information through personal relationships with other modelers. In addition to the examples provided above, CDC modelers and officials responding to Ebola described experiencing data access challenges.

Data availability. Without sufficient data, models may be unable to identify an epidemic's key drivers, which could result in misdirected intervention efforts. For example, ASPR modelers noted that during the 2015-2016 Zika outbreak response, there were substantial limits on available data, and the data that were available could be unreliable and delayed. They said it was very difficult, and in many cases effectively impossible, to determine the accuracy of forecasting models for the evolving Zika outbreak. In addition, CDC officials and modelers responding to Ebola, Zika, and influenza described encountering limits on available data as an ongoing challenge. Steps that modelers said they have taken to address data availability challenges include designing models to use a minimum amount of data, building trust and communication with stakeholders who might be able to provide additional data, and updating data systems to provide all available information. According to CDC modelers, data availability will likely continue to pose a challenge to public health responses.

Data collection. Manpower is limited during an infectious disease outbreak response, which can constrain the health care system's ability to collect data, according to CDC modelers and officials responding to Ebola and ASPR modelers. ASPR modelers said that if a provider has to fill out a time-consuming form, the provider will be delayed in treating the next patient. In order to address this challenge, CDC modelers and officials and ASPR modelers said data requesters should ask for the minimum amount of data needed. For example, CDC modelers and officials said they focus on understanding what data are essential, how they are collected, and the policy implications of reporting those data. A 2016 NSTC report recommended the federal government address this challenge by identifying questions likely to arise during an outbreak response, in order to help define and prioritize data collection and modeling goals.

Data quality. Experts said creating models with low-quality data can result in inaccurate models that may not provide clear answers to decision makers' questions.
For example, CDC modelers and officials responding to the 2015-2016 Zika outbreak said data quality varied based on many factors, such as surveillance systems that were doing different things and defining and reporting Zika cases differently, and the availability of diagnostic testing. Because of data quality concerns, there were questions about whether modeling could be conducted at all, but modelers and agency officials said they were able to address these challenges through discussions. To address such challenges, CDC modelers and officials responding to Zika said they worked to improve public data sharing, sent an official to the Pan American Health Organization to help interpret data and understand the outbreak from an international perspective, and used modeling methods appropriate for data with high levels of uncertainty. In addition to the example provided above, CDC modelers and officials responding to Ebola, ASPR modelers, and experts described experiencing data quality challenges.

Data integration. CDC modelers and officials responding to Ebola and Zika also faced the challenge of integrating multiple data sets, which may not be standardized or in a readily usable form. For example, CDC modelers and officials responding to Zika found it challenging to integrate data as the definition of the disease was refined over time. As the definition became more specific and monitoring systems became available, it was hard to establish data trends, these officials said. Further, according to these officials, there were variations in who would be tested (all people who exhibited symptoms in some areas, and only pregnant women in others) and in when data would be placed into a combined form and reported to state, national, or international officials. This integration issue may have complicated modeling efforts, such as determining the risk of microcephaly in infants over time. In order to address this challenge, Zika modelers said they set up an online data repository to, among other things, standardize shared data.

Resource-Related Challenges

CDC modelers and officials responding to Ebola and Zika, along with experts, said finding staff with sufficient training to support modeling during an infectious disease outbreak represented an ongoing challenge. For example, CDC modelers responding to Zika said it can be difficult to find modelers with both an epidemiological background and skills in coding and mathematics. Modelers and agency officials said those who had the right skills were in high demand, and it was difficult to fully engage them in the Zika outbreak response. They said they could have conducted more modeling, or completed modeling efforts more rapidly, if they had had access to more modelers with the right skills. To address this challenge, modelers participate in training on how to communicate what models can and cannot do, participate in working groups that support modeling efforts, employ the Intergovernmental Personnel Act Mobility Program, maintain collaborations with external partners, and host students and researchers. ASPR modelers said they faced personnel challenges in their modeling efforts but that these challenges were wide-ranging and not specific to Ebola, Zika, or pandemic influenza.

According to a 2016 NSTC report, time constraints make it challenging for researchers to keep up with scientific literature during an outbreak.
CDC influenza modelers said they faced this challenge and that they conduct weekly searches for new influenza publications, which normally identify about 150 publications each week. To address this challenge, modelers said they conduct literature searches, share the responsibility of reviewing publications and informing others of their content, talk to experts, and attend conferences. Modelers said this challenge was more easily addressed than others.

Communication Challenges

Communicating model results can be difficult and, as modelers and agency officials pointed out, decision makers will not give credence to results from a model they do not understand. Model results, according to CDC influenza modelers, are often nuanced and complicated, and officials have to think about which pieces of information are the most important to convey to a decision maker, the public, or health officials. Furthermore, as one expert noted, the complexities of modeling can get lost in translation, especially with the media, which may focus only on a worst-case scenario. When modeling for infectious diseases, appropriately communicating complex information has been described as a constant challenge, and CDC influenza modelers described it as their biggest challenge. CDC influenza modelers particularly noted the challenge of communicating uncertainty. CDC influenza and ASPR modelers said that if decision makers did not understand the models, they could misunderstand the results, which, according to ASPR modelers, could lead to errors in decision making. CDC modelers and officials responding to Ebola and Zika, CDC influenza modelers, ASPR modelers, and experts all described experiencing challenges communicating model results to decision makers.

Clear communication may help prevent misunderstandings. For example, one review article said officials may not understand what models can and cannot do before an epidemic, and modelers may not be fully aware of a decision maker's needs. An expert said there is a need to constrain the use of models intended to inform decisions so that the model does not over- or under-influence a decision maker. And, according to ASPR modelers, decision makers sometimes want a model to make a decision for them, although models can only inform the decision-making process. They said this is less of a problem during an outbreak response, when decision makers know they have to act on incomplete information.

Some steps taken to address communication challenges were similar across CDC and ASPR. For example, CDC modelers and officials and ASPR modelers said they worked to develop relationships outside of an outbreak and to improve how data are visualized. ASPR modelers and officials, for instance, said they provided decision makers with a website that displays an interactive influenza model known as ShinyFlu. The website lets users adjust the model to see how its results change based on the inputs used. However, modelers said this only works if the decision maker is willing to engage with the data. Other steps to address communication challenges were not discussed by all the modelers we spoke to. For example, ASPR modelers said that, when they use models with high uncertainty, they do additional research to assess and communicate how a model could be misrepresenting a real-world problem.
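One way to frame model uncertainty for decision makers is to translate a set of model runs into the probability that an event of concern occurs, much as a weather forecast reports a chance of rain. The following is a minimal sketch of that framing; the projected peak bed demand figures and the capacity threshold are entirely hypothetical numbers invented for this illustration, not output from any agency model.

```python
# Minimal sketch of expressing model uncertainty as the chance of an event.
# Each (hypothetical) number below stands in for the peak-demand projection
# from one model run under different plausible inputs.
peak_bed_demand_runs = [310, 344, 290, 405, 372, 268, 451, 330, 389, 298,
                        356, 420, 275, 338, 366, 412, 302, 348, 381, 295]

capacity = 350  # hypothetical planning threshold a decision maker cares about

exceeds = sum(1 for run in peak_bed_demand_runs if run > capacity)
probability = exceeds / len(peak_bed_demand_runs)
low, high = min(peak_bed_demand_runs), max(peak_bed_demand_runs)

# A single probabilistic sentence is often easier to act on than raw runs.
print(f"Across {len(peak_bed_demand_runs)} model runs, projected peak demand "
      f"ranged from {low} to {high} beds; there is roughly a "
      f"{probability:.0%} chance demand exceeds the {capacity}-bed capacity.")
```

A probabilistic statement of this kind can be easier for a decision maker to act on than a table of raw model runs, although, as the modelers quoted above noted, it still depends on the decision maker's willingness to engage with the underlying data.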
Additionally, CDC modelers responding to Zika and CDC influenza modelers said they sometimes use the language of weather forecasting—which provides information on the risk of an event occurring over a specified period of time—to help communicate model outcomes. For all 10 of the models we reviewed, modelers communicated all the information they had agreed to provide to decision makers, including information about model uncertainty. Agency modelers and officials said they provided this information through discussions with decision makers and by showing decision makers the results of multiple modeling scenarios to convey uncertainty.

Conclusions

Infectious disease modeling is one tool that can provide decision makers with valuable information to support outbreak preparedness and response. In particular, modeling can help answer questions that are difficult to address in other ways for practical, ethical, or financial reasons. Federal agencies have recognized the importance of modeling. CDC and ASPR reported using it to inform policy and planning questions and, to a more limited extent, to inform the use of resources. HHS agencies that work on infectious disease modeling—ASPR, CDC, FDA, and NIH—reported using multiple mechanisms to coordinate their modeling efforts, including working groups, memoranda of understanding, and coordination with academic and other external modelers. The use of these mechanisms was consistent with many leading collaboration practices, such as defining and articulating a common outcome and addressing needs by leveraging resources. However, HHS does not routinely monitor and evaluate its coordination efforts, as called for by another leading collaboration practice, which limits the department's ability to identify areas for improvement. Further, there is the potential for overlap and duplication of modeling efforts across agencies, which may not be identified if coordination efforts are not effectively monitored and which could lead to inefficiencies. By holding progress reviews in which CDC and ASPR evaluate and report on coordination efforts for infectious disease modeling, these agencies could be better positioned to identify and address challenges before infectious disease outbreaks occur, which could lead to improved response efforts.

CDC and ASPR modelers generally followed GAO-identified modeling practices, with the notable exception of model verification. Specifically, CDC did not make model code available to others for four of the seven CDC models we reviewed. HHS does not have a policy that requires its agencies to share model code, but it does require its component agencies either to follow its guidelines or to ensure that their own guidelines include a high degree of transparency to facilitate reproducibility by qualified third parties. Without sharing code and other important information, CDC cannot ensure that its models are reproducible, a key characteristic of reliable, high-quality scientific research.

Recommendations for Executive Action

In order to facilitate HHS infectious disease modeling efforts, we are making two recommendations.

The Secretary of Health and Human Services should develop a mechanism to routinely monitor, evaluate, and report on coordination efforts for infectious disease modeling across multiple agencies.
(Recommendation 1)

The Secretary of Health and Human Services should direct CDC to establish guidelines that ensure full reproducibility of CDC's research by sharing with the public all permissible and appropriate information needed to reproduce research results, including, but not limited to, model code. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Health and Human Services (HHS) for review and comment. In its comments, reproduced in appendix IV, HHS agreed with our recommendations and noted that it was developing a process to coordinate its infectious disease modeling efforts across its components.

With regard to our second recommendation—that HHS should direct CDC to establish guidelines that ensure the full reproducibility of CDC's research by sharing all permissible and appropriate information needed to reproduce research results, including, but not limited to, model code—HHS's comments indicated that CDC believes it has already completed actions to implement this recommendation. For example, the HHS comments state that CDC has established policies such as "Public Access to CDC Funded Publications" and "Policy on Public Health Research and Nonresearch Data Management and Access" that ensure that results are made available to the public, as appropriate. However, as we state in our report, these policies do not contain any reference to reproducibility, models, or provision of model code and therefore do not fully address our recommendation. CDC also said in the HHS comments that its methods—including its practice of providing a copy of model code upon request—are in line with standard practice in the scientific community and peer-reviewed journals. However, in the four instances we identified where CDC modelers did not share code, code being available upon request was only one of the reasons cited. Further, this practice is inconsistent with those of the other HHS agencies we reviewed and may limit the ability of external researchers to confirm the results of CDC's research and ultimately produce new knowledge. As noted in our report, by not specifically addressing reproducibility in its policies on access to data and publications, CDC risks undermining the reliability of scientific information disseminated to the public. Therefore, we did not change our recommendation in response to HHS's comments. We did, however, revise our report to include information on other HHS agency policies related to reproducibility. HHS also provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have questions about this report, please contact Timothy M. Persons, Chief Scientist, at (202) 512-6888 or personst@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology

In conducting our review of infectious disease modeling by Department of Health and Human Services (HHS) agencies, our objectives were to (1) examine the extent to which HHS has used various types of models to inform policy, planning, and resource allocation for public health decisions for selected infectious diseases; (2) examine the extent to which HHS coordinated its modeling efforts for selected infectious diseases; (3) examine the steps HHS generally took to develop and assess the performance of its models for the selected diseases and the steps it applied to a selection of infectious disease models; and (4) describe the extent to which HHS has addressed challenges related to modeling for selected infectious diseases.

For purposes of this review, we focused on HHS because of its focus on scientific and technical issues related to disease modeling, its role in infectious disease outbreak preparedness and response activities, and its use of modeling for policy and regulatory issues related to disease. Within HHS, we identified four agencies—HHS's Office of the Assistant Secretary for Preparedness and Response (ASPR), the Centers for Disease Control and Prevention (CDC), the National Institutes of Health (NIH), and the Food and Drug Administration (FDA)—that may develop or use infectious disease models.

To inform all four objectives, we selected three naturally occurring infectious diseases that have pandemic or epidemic potential—Ebola virus disease (Ebola), Zika virus disease (Zika), and pandemic influenza—to use as examples of broader infectious disease modeling efforts. We selected these diseases based on document review, their inclusion on NIH's pathogen priority list, modeling being conducted by HHS agencies, and interviews with experts whom we selected based on their experience with infectious disease. Based on these steps, we selected diseases that fit into one of the three categories on NIH's pathogen priority list: the disease (1) can be transmitted easily from person to person, resulted in a high mortality rate, had the potential for major public health impact, might cause social disruption, and may require special action for public health preparedness (Ebola); (2) was moderately easy to disseminate and required specific enhancements for diagnostic capacity and enhanced disease surveillance (Zika); or (3) was an emerging pathogen that could be engineered for mass dissemination in the future because of availability and ease of production and dissemination, and that has the potential for high morbidity and mortality rates and major health impacts (pandemic influenza).

HHS Use of Models to Inform Policy, Planning, and Resource Allocation Decisions

To examine the types of models developed by HHS agencies to inform policy, planning, and resource allocation decisions, we reviewed documents from 2009—the year of the last pandemic influenza outbreak in the United States—to April 2019 to identify examples of models developed by the agencies for the three selected diseases. For context on, and examples of, the types of modeling that CDC and ASPR have conducted, we reviewed published articles that CDC and ASPR officials and experts provided to us or cited during the course of our review, such as articles identified during interviews that we later obtained. We also obtained selected internal memoranda, when available, that described models used in the Ebola virus outbreak.
We did not include FDA and NIH in this review because FDA has a limited role in modeling and NIH generally funds, rather than conducts, modeling. This review yielded articles and memoranda describing about 60 CDC and ASPR models. See appendix II for a bibliography of the model publications reviewed. We then categorized the models using categories derived from a federal working group report to characterize the types of modeling conducted and the purpose of the modeling, when that purpose was identified. One analyst initially coded each study, and each classification was then independently reviewed to verify that it had been correctly classified and to resolve any categorization discrepancies. We used these categories to describe the types of modeling efforts undertaken by HHS agencies. Because we focused on studies published between 2009 and 2019, our findings are not generalizable to models developed outside of that time period. Additionally, because we relied on agency officials or reviews of relevant agency documents and publications to identify studies, we may not have captured all studies relevant to our scope. Further, because CDC and ASPR modelers and officials said that they do not publish every model they conduct, our review was not intended to develop an inventory of the modeling conducted during the time period. Therefore, we were unable to determine the extent to which the models we identified represented agency modeling efforts as a whole.

To describe the extent of model use for public health decision making, we interviewed officials from the HHS agencies identified as decision makers for conducting the response to the selected diseases—CDC, ASPR, and FDA—and officials who conducted the modeling. We also interviewed officials from two NIH institutes and one center about funding for research related to modeling for the selected diseases. Additionally, we conducted semi-structured interviews with officials from five states concerning their use of models prepared by HHS agencies for decision making, among other topics. We selected these states based on a review of a CDC draft report on states' use of CDC models, the level of influenza activity experienced by the states, and consideration of geographic variation by U.S. region. During our review, we sought to identify the common types of decisions that could be informed by models, as well as the considerations that could affect the extent to which a decision maker requests and uses models for specific types of decisions. Based on interviews with agency officials and our review of HHS models, we identified examples of models that were used to make specific decisions during response and non-response times. Because we relied on officials to describe the extent to which models inform decision making, we may not have captured all relevant instances in which models for the selected infectious diseases informed decision makers.

HHS Coordination of Modeling Efforts

To examine coordination and collaboration across HHS agencies, we reviewed documents describing HHS agencies' collaboration and coordination mechanisms, such as memoranda of understanding, descriptions of Emergency Operations Center procedures, and after-action reports following infectious disease outbreaks. We also conducted interviews with and requested information from HHS officials, asking them to provide information on their efforts to coordinate their infectious disease modeling activities.
In this report, and in our past work, we define coordination broadly as any joint activity that is intended to produce more public value than could be produced when organizations act alone. We compared these actions to relevant selected leading collaboration practices: define and articulate a common outcome; establish mutually reinforcing or joint strategies; identify and address needs by leveraging resources; agree on roles and responsibilities; establish compatible policies, procedures, and other means to operate across agency boundaries; and develop mechanisms to monitor, evaluate, and report on results. Because we judgmentally selected a group of experts and diseases, the results of our review cannot be generalized to HHS coordination efforts for other infectious diseases. However, our assessment of collaboration and coordination activities did cover modeling efforts for the three selected diseases.

Developing Infectious Disease Models and Assessing Their Performance

To identify steps that are generally considered when modelers develop infectious disease models and assess their performance, we conducted semi-structured interviews with relevant experts from academia and other organizations and with CDC and ASPR officials, and we reviewed literature identified by experts. We used a snowball sampling approach to identify relevant experts and groups. We initially identified five infectious disease modeling experts through informal conversations with individuals working in the field, through infectious disease modeling experts known from prior GAO work, and through a review of websites, publications, and grants funded by NIH. Using this snowball sampling approach, we reviewed key literature related to the steps generally taken to develop models and assess their performance, consulted with infectious disease modeling experts, and interviewed agency officials to identify relevant groups, as well as individual experts, who could convey to us the steps generally taken during infectious disease modeling. Through literature searches, we identified literature from public health journals and other major sources, applying our background and knowledge in public health, infectious disease modeling, and statistics to help identify key sources. For the selected literature, we reviewed references and used a snowball approach to identify further relevant studies. Finally, we reviewed CDC guidance on decision making for data access and long-term preservation as it related to documentation standards.

Based on our review of the identified literature, we developed a data collection instrument to assess the extent to which CDC and ASPR used the steps for infectious disease model development identified by experts and in the literature. Through this data collection instrument, we gathered information about the elements of developing and assessing model performance and the steps that could be taken within each element. To develop the data collection instrument, we mapped out steps to develop and assess model performance based on our review of the literature and developed broad categories of assessment elements. Within each assessment element, we included steps modelers could take as part of that element. For example, the data collection instrument included items that recorded model verification steps that might have been taken by modelers within the broader model verification element. The instrument was reviewed by internal stakeholders, who provided feedback on its content.
Prior to sending the data collection instrument to the agencies, we filled in information on the verification steps taken for each of the 10 selected models, based on the model documentation provided, to reflect the steps we determined modelers took as part of the model development and assessment process. To provide officials with this information, two analysts reviewed each model's documentation, with one analyst providing an initial coding of the model and the other reviewing and verifying the first analyst's findings. This method was first tested on one of the 10 selected models by two analysts independently coding information from the model's documentation into the data collection instrument and then reviewing coding choices to reconcile any differences found. We then sent the instruments with the filled-in information to CDC and ASPR modelers to receive their feedback concerning the steps taken to develop models and assess their performance, to provide any missing information, and to resolve any ambiguities. See appendix III for a list of the 10 selected models reviewed and the steps to develop and assess model performance included in the data collection instrument. The data collection instrument was intended to record whether a specific step had been taken; it did not assess the quality of the modeling steps.

To determine the steps CDC and ASPR took to develop and assess their models, we selected a non-generalizable sample of 10 models for review in our data collection instrument that demonstrated steps HHS agencies took to develop models and assess their performance. The model selection process described above informed our selection of infectious disease models. To be selected for inclusion in our non-generalizable sample, the model had to be (1) developed by CDC or ASPR officials or contractors; (2) developed to answer a question about Ebola, Zika, or pandemic influenza; and (3) used to inform public health decision makers during an outbreak or for preparedness activities. We selected 10 models that differed in form and answered different types of questions, including studies prepared during both outbreak preparedness and response times and covering topics such as the impact of vaccination programs on deaths and hospitalizations. For Ebola and Zika, we focused on reviewing selected papers or memos produced since 2014 in order to capture the time period following the 2014-2016 Ebola and 2015-2016 Zika outbreaks. For pandemic influenza, we focused on papers and memos produced since 2009, when the H1N1 pandemic occurred in the United States. Because we selected from a group of models identified by HHS modelers and officials for Ebola, Zika, and pandemic influenza, the results of our review cannot be generalized to other diseases outside the scope of this report. Furthermore, we requested models that informed public health decision making and did not consider models that were not used for this purpose. Because we reviewed a non-generalizable sample of 10 models, the results of our review cannot be generalized to a larger population of models prepared by HHS agencies.

Challenges to Effective Modeling

To identify challenges associated with modeling for the selected infectious diseases, we reviewed documents and reports to identify modeling challenges and steps to address those challenges, and we interviewed agency officials and modelers, as well as experts identified through the previously described snowball sampling methodology.
We used semi-structured interview protocols that included open-ended questions about challenges associated with infectious disease modeling and limitations associated with model development. Not all officials and experts we interviewed provided comments on every challenge or limitation. In addition, because we judgmentally selected a group of experts and diseases, the results of our review cannot be generalized to all infectious disease modeling efforts.

We conducted this performance audit from May 2018 to May 2020 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Bibliography of Selected Model Publications Reviewed

Ebola Models

Carias, Cristina, et al. "Preventive Malaria Treatment for Contacts of Patients with Ebola Virus Disease in the Context of the West Africa 2014-15 Ebola Virus Disease Response: An Economic Analysis." The Lancet Infectious Diseases, vol. 16, no. 4 (April 2016): pp. 449-458.

Christie, Athalia, et al. "Possible Sexual Transmission of Ebola Virus—Liberia, 2015." Morbidity and Mortality Weekly Report, vol. 64, no. 17 (May 8, 2015): pp. 479-481.

Meltzer, Martin I., et al. "Estimating the Future Number of Cases in the Ebola Epidemic—Liberia and Sierra Leone, 2014-2015." Morbidity and Mortality Weekly Report, vol. 63, no. 3 suppl. (September 26, 2014): pp. 1-14.

Meltzer, Martin I., et al. "Modeling in Real Time during the Ebola Response." Morbidity and Mortality Weekly Report, vol. 65, no. 3 suppl. (July 8, 2016): pp. 85-89.

Rainisch, Gabriel, et al. "Estimating Ebola Treatment Needs, United States." Emerging Infectious Diseases, vol. 21, no. 7 (July 2015): pp. 1273-1275.

Rainisch, Gabriel, et al. "Regional Spread of Ebola Virus, West Africa, 2014." Emerging Infectious Diseases, vol. 21, no. 3 (March 2015): pp. 444-447.

Undurraga, Eduardo A., Cristina Carias, Martin I. Meltzer, and Emily B. Kahn. "Potential for Broad-Scale Transmission of Ebola Virus Disease during the West Africa Crisis: Lessons for the Global Health Security Agenda." Infectious Diseases of Poverty, vol. 6, no. 159 (2017).

Washington, Michael L., and Martin I. Meltzer. "Effectiveness of Ebola Treatment Units and Community Care Centers—Liberia, September 23-October 31, 2014." Morbidity and Mortality Weekly Report, vol. 64, no. 3 (January 30, 2015): pp. 67-69.

Zika Models

Adamski, Alys, et al. "Estimating the Numbers of Pregnant Women Infected with Zika Virus and Infants with Congenital Microcephaly in Colombia, 2015-2017." Journal of Infection, vol. 76 (2018): pp. 529-535.

Dirlikov, Emilio, et al. "Guillain-Barré Syndrome and Healthcare Needs during Zika Virus Transmission, Puerto Rico, 2016." Emerging Infectious Diseases, vol. 23, no. 1 (January 2017): pp. 134-136.

Ellington, Sascha R., et al. "Estimating the Number of Pregnant Women Infected With Zika Virus and Expected Infants With Microcephaly Following the Zika Virus Outbreak in Puerto Rico, 2016." JAMA Pediatrics, vol. 170, no. 10 (2016): pp. 940-945.

Grills, Ardath, et al. "Projected Zika Virus Importation and Subsequent Ongoing Transmission after Travel to the 2016 Olympic and Paralympic Games—Country-Specific Assessment, July 2016." Morbidity and Mortality Weekly Report, vol. 65, no.
28 (July 22, 2016): pp. 711-715.

Johansson, Michael A., et al. "Zika and the Risk of Microcephaly." The New England Journal of Medicine, vol. 375 (July 7, 2016): pp. 1-4.

Johnson, Tammi L., et al. "Modeling the Environmental Suitability for Aedes (Stegomyia) aegypti and Aedes (Stegomyia) albopictus (Diptera: Culicidae) in the Contiguous United States." Journal of Medical Entomology, vol. 54, no. 6 (November 7, 2017): pp. 1605-1614.

Mitchell, Patrick K., et al. "Reassessing Serosurvey-Based Estimates of the Symptomatic Proportion of Zika Virus Infections." American Journal of Epidemiology, vol. 188, no. 1 (January 2019): pp. 206-213.

Mier-y-Teran-Romero, Luis, Mark J. Delorey, James J. Sejvar, and Michael A. Johansson. "Guillain-Barré Syndrome Risk Among Individuals Infected with Zika Virus: a Multi-Country Assessment." BMC Medicine, vol. 16, no. 67 (2018).

Mier-y-Teran-Romero, Luis, Andrew J. Tatem, and Michael A. Johansson. "Mosquitoes on a Plane: Disinsection Will Not Stop the Spread of Vector-Borne Pathogens, a Simulation Study." PLoS Neglected Tropical Diseases, vol. 11, no. 7 (July 3, 2017).

Reefhuis, Jennita, et al. "Projecting Month of Birth for At-Risk Infants after Zika Virus Disease Outbreaks." Emerging Infectious Diseases, vol. 22, no. 5 (May 2016): pp. 828-832.

Russell, Steven, et al. "Detecting Local Zika Virus Transmission in the Continental United States: A Comparison of Surveillance Strategies." PLoS Currents Outbreaks (November 22, 2017).

Watts, Alexander G., et al. "Elevation as a Proxy for Mosquito-Borne Zika Virus Transmission in the Americas." PLoS ONE, vol. 12, no. 5 (May 24, 2017).

Influenza Models

Atkins, Charisma Y., et al. "Estimating Effect of Antiviral Drug Use during Pandemic (H1N1) 2009 Outbreak, United States." Emerging Infectious Diseases, vol. 17, no. 9 (September 2011): pp. 1591-1598.

Biggerstaff, Matthew, et al. "Estimates of the Number of Human Infections With Influenza A(H3N2) Variant Virus, United States, August 2011-April 2012." Clinical Infectious Diseases, vol. 57, suppl. 1 (2013): pp. S12-S15.

Biggerstaff, Matthew, et al. "Estimating the Potential Effects of a Vaccine Program Against an Emerging Influenza Pandemic—United States." Clinical Infectious Diseases, vol. 60, suppl. 1 (2015): pp. S20-S29.

Carias, Cristina, et al. "Potential Demand for Respirators and Surgical Masks during a Hypothetical Influenza Pandemic in the United States." Clinical Infectious Diseases, vol. 60, suppl. 1 (2015): pp. S42-S51.

Cauchemez, Simon, et al. "Role of Social Networks in Shaping Disease Transmission during a Community Outbreak of 2009 H1N1 Pandemic Influenza." Proceedings of the National Academy of Sciences of the United States, vol. 108, no. 7 (February 15, 2011): pp. 2825-2830.

Dawood, Fatimah S., et al. "Estimated Global Mortality Associated with the First 12 Months of 2009 Pandemic Influenza A H1N1 Virus Circulation: a Modelling Study." The Lancet Infectious Diseases, vol. 12 (September 2012): pp. 687-695.

Fung, Isaac Chun-Hai, et al. "Modeling the Effect of School Closures in a Pandemic Scenario: Exploring Two Different Contact Matrices." Clinical Infectious Diseases, vol. 60, suppl. 1 (2015): pp. S58-S63.

Iuliano, A. Danielle, et al. "Estimates of Global Seasonal Influenza-Associated Respiratory Mortality: a Modelling Study." The Lancet, vol. 391, no. 10127 (March 31, 2018): pp. 1285-1300.

Jain, Seema, et al. "Hospitalized Patients with 2009 H1N1 Influenza in the United States, April-June 2009." The New England Journal of Medicine, vol. 361, no.
Kostova, Deliana, et al. “Influenza Illness and Hospitalizations Averted by Influenza Vaccination in the United States, 2005–2011.” PLoS ONE, vol. 8, no. 6 (June 19, 2013).
Lafond, Kathryn E., et al. “Global Role and Burden of Influenza in Pediatric Respiratory Hospitalizations, 1982–2012: A Systematic Analysis.” PLoS Medicine, vol. 13, no. 3 (March 24, 2016).
Meltzer, Martin I., Nancy J. Cox, Keiji Fukuda. “The Economic Impact of Pandemic Influenza in the United States: Priorities for Intervention.” Emerging Infectious Diseases, vol. 5, no. 5 (September-October 1999): pp. 659-671.
Meltzer, Martin I., et al. “Estimates of the Demand for Mechanical Ventilation in the United States during an Influenza Pandemic.” Clinical Infectious Diseases, vol. 60, suppl. 1 (2015): pp. S52-S57.
O’Hagan, Justin J., et al. “Estimating the United States Demand for Influenza Antivirals and the Effect on Severe Influenza Disease during a Potential Pandemic.” Clinical Infectious Diseases, vol. 60, suppl. 1 (2015): pp. S30-S41.
Presanis, Anne M., et al. “The Severity of Pandemic H1N1 Influenza in the United States, from April to July 2009: A Bayesian Analysis.” PLoS Medicine, vol. 6, no. 12 (December 8, 2009).
Reed, Carrie, et al. “Estimates of the Prevalence of Pandemic (H1N1) 2009, United States, April-July 2009.” Emerging Infectious Diseases, vol. 15, no. 12 (December 2009): pp. 2004-2007.
Reed, Carrie, Martin I. Meltzer, Lyn Finelli, Anthony Fiore. “Public Health Impact of Including Two Lineages of Influenza B in a Quadrivalent Seasonal Influenza Vaccine.” Vaccine, vol. 30 (2012): pp. 1993-1998.
Reed, Carrie, et al. “Estimating Influenza Disease Burden from Population-Based Surveillance Data in the United States.” PLoS ONE, vol. 10, no. 3 (March 4, 2015).
Rolfes, Melissa A., et al. “Annual Estimates of the Burden of Seasonal Influenza in the United States: A Tool for Strengthening Influenza Surveillance and Preparedness.” Influenza and Other Respiratory Viruses, vol. 12 (2018): pp. 132-137.
Russell, K., et al. “Utility of State-Level Influenza Disease Burden and Severity Estimates to Investigate an Apparent Increase in Reported Severe Cases of Influenza A(H1N1)pdm09 – Arizona, 2015–2016.” Epidemiology and Infection, vol. 146 (June 14, 2018): pp. 1359-1365.
Shrestha, Sundar S., et al. “Estimating the Burden of 2009 Pandemic Influenza A (H1N1) in the United States (April 2009–April 2010).” Clinical Infectious Diseases, vol. 52, suppl. 1 (2011): pp. S75-S82.
Tokars, Jerome I., Melissa A. Rolfes, Ivo M. Foppa, Carrie Reed. “An Evaluation and Update of Methods for Estimating the Number of Influenza Cases Averted by Vaccination in the United States.” Vaccine, vol. 36 (2018): pp. 7331-7337.
Appendix III: Ten Selected Infectious Disease Models and Questions from Data Collection Instrument
Document describing model
Meltzer, Martin I., Charisma Y. Atkins, Scott Santibanez, Barbara Knust, Brett W. Petersen, Elizabeth D. Ervin, Stuart T. Nichol, Inger K. Damon, Michael L. Washington. Estimating the Future Number of Cases in the Ebola Epidemic–Liberia and Sierra Leone, 2014-2015. MMWR. Volume 63, Number 3, September 26, 2014.
Rainisch, Gabriel, Manjunath Shankar, Michael Wellman, Toby Merlin, and Martin I. Meltzer. Regional Spread of Ebola Virus, West Africa, 2014. Emerging Infectious Diseases. Volume 21, Number 3, March 2015.
Asher, Jason. Forecasting Ebola with a Regression Transmission Model. Epidemics. Volume 22, 2018.
Ellington, Sascha R., Owen Devine, Jeanne Bertolli, Alma Martinez Quiñones, Carrie K. Shapiro-Mendoza, Janice Perez-Padilla, Brenda Rivera-Garcia, Regina M. Simeone, Denise J. Jamieson, Miguel Valencia-Prado, Suzanne M. Gilboa, Margaret A. Honein, Michael A. Johansson. Estimating the Number of Pregnant Women Infected With Zika Virus and Expected Infants With Microcephaly Following the Zika Virus Outbreak in Puerto Rico, 2016. JAMA Pediatrics. Volume 170, Number 10, October 2016.
Johansson, Michael A., Luis Mier-y-Teran-Romero, Jennita Reefhuis, Suzanne M. Gilboa, and Susan L. Hills. Zika and the Risk of Microcephaly. New England Journal of Medicine. Volume 375, Number 1, July 7, 2016.
Dirlikov, Emilio, Krista Kniss, Chelsea Major, Dana Thomas, Cesar A. Virgen, Marrielle Mayshack, Jason Asher, Luis Mier-y-Teran-Romero, Jorge L. Salinas, Daniel M. Pastula, Tyler M. Sharp, James Sejvar, Michael A. Johansson, Brenda Rivera-Garcia. Guillain-Barré Syndrome and Healthcare Needs during Zika Virus Transmission, Puerto Rico, 2016. Emerging Infectious Diseases. Volume 23, Number 1, January 2017.
Biggerstaff, Matthew, Carrie Reed, David L. Swerdlow, Manoj Gambhir, Samuel Graitcer, Lyn Finelli, Rebekah H. Borse, Sonja A. Rasmussen, Martin I. Meltzer, Carolyn B. Bridges. Estimating the Potential Effects of a Vaccine Program against an Emerging Influenza Pandemic—United States. Clinical Infectious Diseases. Volume 60, Issue Supplement 1, 2015.
Carias, Cristina, Gabriel Rainisch, Manjunath Shankar, Bishwa B. Adhikari, David L. Swerdlow, William A. Bower, Satish K. Pillai, Martin I. Meltzer, Lisa M. Koonin. Potential Demand for Respirators and Surgical Masks during a Hypothetical Influenza Pandemic in the United States. Clinical Infectious Diseases. Volume 60, Issue Supplement 1, 2015.
Reed, Carrie, Frederick J. Angulo, David L. Swerdlow, Marc Lipsitch, Martin I. Meltzer, Daniel Jernigan, and Lyn Finelli. Estimates of the Prevalence of Pandemic (H1N1) 2009, United States, April–July 2009. Emerging Infectious Diseases. Volume 15, Number 12, December 2009.
Asher, Jason, Matthew Clay. Deterministic compartmental models for influenza with mitigations. R: “flumodels” package. Version: 1.0.7, April 24, 2017.
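The last document listed above describes deterministic compartmental models for influenza with mitigations. To make that class of model concrete, the short Python sketch below implements a minimal SIR (susceptible-infectious-recovered) model in which a single mitigation parameter scales transmission, and then varies two uncertain inputs to show the kind of sensitivity analysis listed as step 16 in the assessment table that follows. This is an illustrative sketch only, not code from the “flumodels” package or any model we reviewed; the population size, R0, infectious period, and mitigation values are hypothetical.

# Illustrative sketch of a deterministic SIR compartmental model with a
# transmission-reducing mitigation. Not the "flumodels" package; all
# parameter values are hypothetical.

def run_sir(pop=1_000_000, r0=1.8, infectious_days=4.0,
            mitigation=0.0, days=300, dt=0.1):
    """Integrate a simple SIR model with Euler steps.

    mitigation scales transmission down, e.g. 0.3 = 30% contact reduction.
    Returns (peak_infectious, total_infected).
    """
    gamma = 1.0 / infectious_days            # recovery rate per day
    beta = r0 * gamma * (1.0 - mitigation)   # effective transmission rate
    s, i, r = pop - 1.0, 1.0, 0.0            # start with one infectious person
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / pop * dt    # new infections this step
        new_rec = gamma * i * dt             # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak, pop - s

# Sensitivity analysis (step 16 in the assessment table): vary uncertain
# parameters and observe the effect on model outputs.
for r0 in (1.4, 1.8, 2.2):
    for mit in (0.0, 0.3):
        peak, total = run_sir(r0=r0, mitigation=mit)
        print(f"R0={r0:.1f} mitigation={mit:.0%}: "
              f"peak infectious={peak:,.0f}, total infected={total:,.0f}")

In a sweep like this, large swings in the peak or the total infected indicate which parameter assumptions most need justification or supporting data.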
Data Collection Instrument
GAO Review of Model Assessment Steps for Selected Agency Models
Purpose: The Government Accountability Office has been asked by the Congress to review the efforts of Department of Health and Human Services agencies to model infectious disease. As part of our methodology, we selected and reviewed published papers and internal memoranda from the sources provided to us. We reviewed these sources to describe the steps taken to describe, verify, validate, and communicate results of these modeling efforts. The purpose of this inquiry is to provide the authors of the selected papers the opportunity to confirm, clarify, or provide additional information in the table below.
Instructions: The table below contains two sets of columns: the first set indicates GAO’s assessment of whether the document contained information about a step being taken; the second set is for the authors of the selected paper to fill out. If you agree with the information in the GAO columns, please indicate your concurrence in the Reviewer Comments column. Otherwise, please provide information accordingly. If a step is marked “Step taken,” please review the entries we have made in the GAO Reviewer Comments column for accuracy and completeness and indicate your concurrence in the Reviewer Comments column. Please also provide additional supporting documentation if available. For any steps that were taken, but where we indicated either “not taken” or “not enough information to determine” in our review, please provide a description of the actual steps and any documentation you may have. If a step was not taken, please provide an indication as to why that step was not taken and, if possible, please provide supporting documentation. For example, if limited data availability impacted the ability to conduct a model validation step(s), then please include this information in the appropriate table cells.
Assessment Element
Clarify Objectives
Model Description
Model Verification (Internal Validation, Internal Consistency, Technical Validity)
10 Independent expert (internal or external) review of key programming
11 Debugging tests and checks for coding accuracy
12 Model’s code or Excel spreadsheet is available
13 Test model assumptions (i.e., confirming model assumptions are reasonable and appropriate for question), for example: distributional assumptions about model residuals; form of the model
14 Model handling of input data/parameters is verified as correct (i.e., as intended by developers)
Model Validation
16 Sensitivity analysis (assessing impact of assumption/parameter uncertainty on output or model form)
17 Cross validation or between model comparisons: compare results to other models that address the same problem
18 External validation: compare model results to actual event data
19 Predictive validation: compare model predictions for future events to actual outcomes
Communication
21 Modelers supply customer with agreed-upon information, which may vary depending on the model
22 Modeler provides customer with clear information on uncertainty in model results, such as inclusion of standard errors or confidence intervals, or qualitative explanations of uncertainty in the model results
Assessment Steps Question: Do you think that the assessment elements identified in the table above sufficiently reflect the steps that should generally be taken to develop and assess the performance of models? Would you remove any steps, add any steps, or make any other adjustments to these steps in order to consider them best practices in assessing performance of models, generally? Please explain.
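As an illustration of the validation steps above, the sketch below performs the external-validation comparison described in step 18, scoring a set of model predictions against observed event data with two common error summaries. The weekly counts are invented for illustration and are not drawn from any of the ten models reviewed.

# Illustrative external-validation check (step 18): compare model output
# to observed event data. All numbers are hypothetical.

predicted = [120, 180, 260, 310, 280, 200]  # model's weekly case estimates
observed  = [110, 195, 240, 330, 260, 210]  # weekly surveillance counts

n = len(predicted)
mae  = sum(abs(p - o) for p, o in zip(predicted, observed)) / n
mape = sum(abs(p - o) / o for p, o in zip(predicted, observed)) / n

print(f"Mean absolute error: {mae:.1f} cases/week")
print(f"Mean absolute percentage error: {mape:.1%}")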
Appendix IV: Comments from the Department of Health and Human Services
Appendix V: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contact named above, the following individuals made contributions to this report: Sushil Sharma (Assistant Director), Charlotte E. Hinkle (Analyst-in-Charge), Sam Amrhein, Breanne Cave, Jehan Chase, Carol A. Gotway Crawford, Justin Cubilo, Karen Doran, Nancy Fasciano, Douglas G. Hunker, Dennis Mayo, Anika McMillon, Sarah Resavy, Edward Rice, Ben Shouse, Amber Sinclair, Walter Vance, Sarah Veale, and Richard Zarrella.
Why GAO Did This Study
Outbreaks of infectious diseases—such as Ebola, Zika, and pandemic influenza—have raised concerns from Congress about how federal agencies use modeling to, among other things, predict disease distribution and potential impacts. In general, a model is a representation of reality expressed through mathematical or logical relationships. Models of infectious diseases can help decision makers set policies for disease control and may help to allocate resources. GAO was asked to review federal modeling for selected infectious diseases. This report examines (1) the extent to which HHS used models to inform policy, planning, and resource allocation for public health decisions; (2) the extent to which HHS coordinated modeling efforts; (3) steps HHS generally takes to assess model development and performance; and (4) the extent to which HHS has addressed challenges related to modeling. GAO reviewed documents and interviewed HHS officials, state officials, and subject matter experts. GAO identified practices commonly used to assess infectious disease model performance and reviewed 10 selected modeling efforts to see if they followed these practices.
What GAO Found
Within the Department of Health and Human Services (HHS), the Centers for Disease Control and Prevention (CDC) and the Office of the Assistant Secretary for Preparedness and Response (ASPR) used models to inform decision-making during and after outbreaks of Ebola, Zika, and pandemic influenza. These agencies' modeling efforts informed public health planning, outbreak response, and, to a limited extent, resource allocation. Four CDC centers perform modeling. HHS agencies reported using multiple mechanisms to coordinate modeling efforts across agencies, but they do not routinely monitor, evaluate, or report on the extent and success of coordination. Consequently, they risk missing opportunities to identify and address modeling challenges—such as communicating clearly and obtaining adequate data and resources—before and during an outbreak. As a result, agencies may be limiting their ability to identify improvements in those and other areas. Further, there is potential for overlap and duplication of cross-agency modeling efforts, which could lead to inefficiencies. CDC and ASPR generally developed and assessed their models in accordance with four steps GAO identified as commonly recognized modeling practices: (1) communication between modeler and decision maker, (2) model description, (3) verification, and (4) validation. However, for four of the 10 models reviewed, CDC did not provide all details needed to reproduce model results, a key step that lets other scientists confirm those results. GAO found that CDC's guidelines and policy do not address reproducibility of models or their code. This is inconsistent with HHS guidelines and may jeopardize the reliability of CDC's research. This report also identifies several modeling-related challenges, along with steps agencies have taken to address them.
What GAO Recommends
GAO recommends that HHS (1) develop a way to routinely monitor, evaluate, and report on modeling coordination efforts across multiple agencies and (2) direct CDC to establish guidelines to ensure full reproducibility of its models. HHS agreed with GAO's recommendations.
SBA Has Not Fully Addressed Deficiencies in Oversight and Implementation for the WOSB Program
SBA has not fully addressed deficiencies we have previously identified for the WOSB program, and these deficiencies are affected by SBA’s ongoing implementation of changes to the program authorized by the National Defense Authorization Act for Fiscal Year 2015 (2015 NDAA). As of early June 2019, SBA had implemented one of the three changes to the program authorized in the 2015 NDAA. Specifically, in September 2015 SBA published a final rule to implement sole-source authority (to award contracts without competition), effective October 2015. The two other changes—authorizing SBA to implement its own certification process for WOSBs and requiring SBA to eliminate the option for firms to self-certify that they are eligible for the WOSB program—had not been implemented. On May 14, 2019, SBA published in the Federal Register a proposed rule that eliminates the self-certification option and describes a potential certification process to be administered by SBA. SBA officials have stated that the agency will not eliminate self-certification until the new certification process for the WOSB program is in place, which they expect to implement by June 2021. In addition, SBA has not fully addressed WOSB program oversight deficiencies described in our March 2019 report and first identified in our 2014 report. We reported that SBA did not have formal policies for reviewing the performance of its four approved third-party certifiers (private entities approved by SBA to certify the eligibility of WOSB firms), including their compliance with their agreements with SBA. Further, we found that SBA had not developed formal policies and procedures for, among other things, reviewing the monthly reports that certifiers submit to SBA. We recommended that the Administrator of SBA establish comprehensive procedures to monitor and assess the performance of the third-party certifiers in accordance with their agreements with SBA and program regulations. While SBA has taken some steps to address our recommendation, including conducting a compliance review of the certifiers in 2016, SBA officials said in June 2018 that SBA had no plans to conduct further compliance reviews until the final rule implementing the new certification process was completed. By waiting to improve its oversight of the WOSB program, SBA cannot provide reasonable assurance that certifiers are complying with program requirements and cannot improve its efforts to identify ineligible firms or potential fraud. In addition, the implementation of sole-source authority in light of these continued oversight deficiencies can increase program risk. Consequently, we maintain that our recommendation should be addressed. SBA also has not fully addressed deficiencies related to eligibility examinations that we described in our March 2019 report and first identified in our October 2014 report. We found that SBA lacked formalized guidance for its eligibility examination processes and that the examinations identified high rates of potentially ineligible businesses. As a result, we recommended that SBA enhance its examination of businesses that register for the WOSB program to ensure that only eligible businesses obtain WOSB set-asides.
Specifically, we suggested that SBA take actions such as (1) completing the development of procedures to conduct annual eligibility examinations and implementing such procedures; (2) analyzing examination results and individual businesses found to be ineligible to better understand the cause of the high rate of ineligibility in annual reviews and determine what actions are needed to address the causes; and (3) implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. SBA has taken some steps to implement our recommendation, such as including written policies and procedures for WOSB program eligibility examinations in a standard operating procedure and a Desk Guide. However, SBA does not collect reliable information on the results of its annual eligibility examinations. In addition, SBA continues to have no mechanism to look across examinations for common eligibility issues to inform the WOSB program. As we noted in 2014, by not analyzing examination results broadly, the agency is missing opportunities to obtain meaningful insights into the program, such as the reasons many businesses are deemed ineligible. Further, SBA still conducts eligibility examinations only of firms that have already received a WOSB award. Restricting the samples in this way limits SBA’s ability to better understand the eligibility of businesses before they apply for and are awarded contracts, as well as its ability to detect and prevent potential fraud. We recognize that SBA has made some effort to address our recommendation by documenting procedures for conducting annual eligibility examinations of WOSB firms. However, without maintaining reliable information on the results of eligibility examinations, developing procedures for analyzing results, and expanding the sample of businesses to be examined to include those that did not receive contracts, SBA limits the value of its eligibility examinations and its ability to reduce ineligibility among businesses registered to participate in the WOSB program. Leading fraud risk management practices state that federal program managers should design control activities that focus on fraud prevention over detection and response, to the extent possible. The deficiencies in SBA’s oversight of the WOSB program limit SBA’s ability to identify potential fraud risks and develop any additional control activities needed to address these risks. As a result, the program may continue to be exposed to the risk of ineligible businesses receiving set-aside contracts. In addition, in light of these continued deficiencies, the implementation of sole-source authority without addressing the other changes made by the 2015 NDAA could increase program risk. For these reasons, we maintain that our previous recommendation that SBA enhance its WOSB eligibility examination procedures should be addressed. In addition, similar to previous findings from SBA’s Office of Inspector General, our March 2019 report found that about 3.5 percent of contracts using a WOSB set-aside were awarded for ineligible goods or services from April 2011 through June 2018. At that time, SBA was not reviewing contracting data that could identify this problem and inform SBA which agencies making awards may need targeted outreach or training. As a result, we found that SBA could not provide reasonable assurance that WOSB program requirements were being met and that the program was meeting its goals.
We recommended that SBA develop a process for periodically reviewing the extent to which WOSB program set-asides are awarded for ineligible goods or services and use the results to address identified issues, such as through targeted outreach or training to agencies making awards under the ineligible codes. In early May 2019, SBA said that it had initiated such efforts.
SBA Has Not Yet Implemented Recommendations to Improve the HUBZone Certification Process
In September 2018, we found that although SBA had adopted criteria and guidance for a risk-based approach to certifying and recertifying firms for the HUBZone program in March 2017, the extent to which it conducted a risk assessment to inform its approach was unclear. In 2015, we found that SBA lacked key controls for its recertification process and recommended that SBA assess the process. In 2009, SBA increased documentation requirements for certification but not recertification (which determines continued program eligibility every 3 years). In March 2017, SBA changed its recertification criteria and guidance to require firms with $1 million or more in HUBZone contract awards to provide documentation to support continuing eligibility. During our work for the September 2018 report, SBA officials stated they had completed a risk assessment of the HUBZone recertification process, but did not provide us with documentation on when they performed the risk assessment, which risks were identified and considered, or what analysis established the $1 million threshold. As of May 2019, SBA had not provided documentation showing that it had performed the risk assessment, but we maintain that an assessment of the recertification process would help inform a risk-based approach to reviewing and verifying information from firms that appear to pose the most risk to the program. In addition, SBA had not provided documentation showing that a technology-based solution designed to address some of the ongoing challenges with the recertification process had been implemented. SBA officials had previously estimated this solution would be available first in spring 2017 and then by the end of calendar year 2017. We also found in our September 2018 report that, based on our review of case files for a nongeneralizable sample of 12 firms in Puerto Rico that received HUBZone certification between March 2017 and March 2018, SBA did not consistently document or follow its policies and procedures for certification reviews. First, SBA did not have complete documentation in nine of 12 cases. SBA officials described alternative procedures they used to determine firms’ eligibility, but SBA had not updated its internal policy manuals to reflect these procedures, and analysts did not document use of such procedures in the files we reviewed. As a result, SBA did not have reasonable assurance that firms met HUBZone criteria. Second, in four of 12 cases, SBA did not follow its policy to conduct three levels of review (by an analyst, a senior analyst, and the program director or deputy) when determining whether to approve or deny a firm. Third, it was unclear to what extent SBA reviewed staff compliance with certification and recertification review procedures. SBA provided an assurance letter stating that it evaluated the Office of HUBZone’s internal controls and concluded the controls were effective, but the letter did not specify what steps SBA took for the evaluation.
We recommended that SBA (1) update its internal policy manuals for certification and recertification reviews to reflect existing policies and procedures not currently in written guidance and (2) conduct and document reviews of staff compliance with procedures associated with HUBZone certification and recertification. In response to our report, SBA said that it planned to update its internal policies on certification and recertification by issuing a procedural notice and to begin reviewing and documenting staff compliance with the updated procedures outlined in the notice. However, as of May 2019, SBA had not provided documentation showing that it had completed these planned actions.
SBA Has Taken Some Steps to Address Recommendations about the Procurement Scorecard
In September 2018, we found that for fiscal year 2017, SBA revised the methodology for its Small Business Procurement Scorecard, which is used to assess federal agencies’ progress toward small business procurement goals. SBA made revisions to address requirements specified in the National Defense Authorization Act for Fiscal Year 2016. SBA (1) reduced the share of the total scorecard grade devoted to prime contracting achievement, which is the dollar amount of contracts awarded directly to small businesses, and (2) added an element calculating changes in the number of small businesses receiving prime contracts. SBA made two additional revisions—with input from other agencies’ representatives—to increase the share of subcontracting achievement results and the share of the peer review of required activities designed to facilitate small business procurement. In July 2018, officials said they had begun developing a plan to evaluate the effects of the revised scorecard methodology but did not provide a draft plan. Conducting a well-designed and comprehensive evaluation could aid SBA in determining whether the scorecard is an effective tool for helping to achieve the agency’s strategic goals. In our September 2018 report, we also found that the published fiscal year 2017 scorecards originally contained errors, including an incorrect grade and numeric score for one agency, and SBA does not have a process to ensure that scorecard results are published accurately. Although SBA later corrected the errors, it did not initially document that scorecards had been changed, which is inconsistent with SBA’s policy on information quality. SBA officials said that errors occurred in the process of formatting scorecards for publication. Errors in the published scorecards—and the initial lack of disclosure about corrections—weaken data reliability and may undermine confidence in scorecard data. We recommended that SBA (1) design and implement a comprehensive evaluation to assess scorecard revisions and (2) institute a process for reviewing scorecards for accuracy prior to publication and a mechanism for disclosing corrected information. Since our report, SBA has proposed a two-phase program evaluation of the scorecard. SBA officials said that they plan for phase one to include a report to Congress on the impact of the small business procurement goal program for Chief Financial Officers Act agencies and to provide a recommendation on continuing, modifying, expanding, or terminating the scorecard program. SBA plans to provide the phase one report in September 2019.
In phase two, SBA plans to conduct a program evaluation that investigates the effectiveness of the small business contracting scorecard on federal agency small business contracting goal achievement. SBA has not provided a time frame for phase two. With respect to the second recommendation, SBA officials said that SBA has developed a procedure that includes a prepublication review process for procurement scorecards. The officials said the procedure identifies responsibilities, provides for an independent peer review, and includes supervisory review. Officials said the procedure also includes measures for post-publication review and corrections. We will review supporting documentation for this new procedure to assess whether this recommendation can be closed as implemented. Chairman Rubio, Ranking Member Cardin, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
GAO Contact and Acknowledgments
If you or your staff have any questions about this testimony, please contact William Shear, Director, Financial Markets and Community Investment, at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Andrew Pauline (Assistant Director), Paige Smith (Assistant Director), Winnie Tsen (Assistant Director), and Jennifer Schwartz. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Federal agencies conduct a variety of procurements that are reserved for small business participation through small business set-asides. These set-asides can be for small businesses in general, or they can be specific to small businesses that meet additional eligibility requirements in programs such as those for WOSB or HUBZone. SBA administers both the WOSB and HUBZone programs. SBA also produces an annual Small Business Procurement Scorecard to measure how much contracted spending federal agencies allocate to small businesses and whether the federal government is meeting its goals for awarding contracts to small businesses. GAO issued three reports between September 2018 and March 2019 on SBA contracting programs (see GAO-18-666, GAO-18-672, and GAO-19-168). This testimony is primarily based on these three reports and discusses prior GAO findings and SBA's progress on implementing GAO's recommendations on (1) the WOSB program, (2) the HUBZone program, and (3) SBA's procurement scorecard. To update the status of prior recommendations, GAO reviewed updates from SBA and interviewed officials.
What GAO Found
The Small Business Administration (SBA) has not fully implemented GAO's prior recommendations to address oversight deficiencies in the Women-Owned Small Business (WOSB) and Historically Underutilized Business Zone (HUBZone) programs and to improve evaluation of its procurement scorecard. GAO maintains that its recommendations should be addressed.
Women-Owned Small Business Program. In its March 2019 report, GAO found that SBA had not addressed WOSB program oversight deficiencies identified in GAO's 2014 report (GAO-15-54). For example, GAO had found that SBA did not have procedures related to reviewing the performance of the four third-party certifiers—private entities approved by SBA to certify the eligibility of WOSB firms—as well as the information the certifiers submitted to SBA. GAO recommended that SBA establish procedures to assess the performance of the certifiers and the information they submitted. While SBA conducted a compliance review of the certifiers in 2016, SBA said in June 2018 that it had no plans to conduct further compliance reviews until a final rule implementing a new certification process was completed. SBA officials said that they expected the rule to be implemented by June 2021. By waiting to improve its oversight of the WOSB program, SBA cannot provide reasonable assurance that certifiers are complying with program requirements and cannot improve its efforts to identify ineligible firms or potential fraud.
HUBZone Program. In September 2018, GAO reported that it had reviewed case files for a nongeneralizable sample of 12 firms in Puerto Rico that received HUBZone certification between March 2017 and March 2018 and found that SBA did not consistently document or follow its policies and procedures for certification reviews. For example, SBA did not have complete documentation in nine of 12 cases and did not follow its policy to conduct three levels of review when determining whether to approve or deny a firm in four of 12 cases. As a result, SBA did not have reasonable assurance that firms met HUBZone criteria. SBA said that it planned to implement GAO's recommendations that SBA (1) update internal policy manuals for certification and recertification and (2) conduct and document reviews of staff compliance with relevant procedures. However, as of May 2019, SBA had not provided documentation showing that it had completed these planned actions.
Small Business Procurement Scorecard. For fiscal year 2017, SBA revised the methodology for its Small Business Procurement Scorecard, which assesses the efforts of federal agencies to support contracting with small businesses. For example, one revision reduced the share of the total scorecard grade devoted to prime contracting achievement (the dollar amount of contracts awarded directly to small businesses). GAO recommended in September 2018 that SBA design and implement a comprehensive evaluation to assess the scorecard revisions. Since that report was issued, SBA has proposed but not yet implemented a two-phase evaluation of the scorecard, including an assessment of the scorecard's effect on federal agencies' achievement of small business contracting goals. SBA said that it expects to complete phase one by September 2019 and has not provided a time frame for phase two.
Background
Airline customer service agents have a number of duties to assist passengers at the airport (see fig. 1). Customer service agents can check passengers into flights, handle and tag checked bags, and board and deplane passengers on the aircraft, in addition to assisting passengers when service failures occur, such as helping to locate a lost bag. At many airports and airlines, customer service agents are trained to work the ticket counter and the arrival and departure gates. In this role, airline customer service agents’ interactions with passengers can range from pleasant to routine to contentious. For example, if bad weather causes an airline to delay or cancel flights, harried passengers trying to make connecting flights or get to a destination may take their frustration out on a customer service agent. The following entities are responsible for helping to prevent or address passenger assaults:
Airlines seek to provide a safe work environment for customer service agents. Among other things, airlines set policies and procedures instructing customer service agents how to handle and report incidents, in addition to how management should respond.
Airport law enforcement responds to allegations of violence at airports and enforces state and local laws. According to airport law enforcement, when they respond to incidents, they generally capture information in police reports.
Airport management, such as a security director, may be informed of alleged passenger assaults at the airport or support ensuing investigations and prosecutions.
Prosecutors at the federal and state level decide whether to charge passengers for offenses that violate laws.
No one federal agency is responsible for addressing passenger assaults against customer service agents at the airport. For example, FAA sets policies that airlines and their employees must adhere to for aviation safety, but TSA oversees the security of the nation’s civil aviation system. However, officials from both TSA and FAA told us their responsibilities for passenger assaults at airports are limited. In particular, FAA officials said their primary responsibility is for assaults onboard aircraft as opposed to at the airport. Similarly, TSA officials said they only get involved in assaults of airline customer service agents in the rare instances where incidents affect airport security. Within DOJ, FBI conducts investigations of incidents that are deemed to violate federal law, and federal prosecutors can decide whether to prosecute individuals for such incidents.
While Information Is Limited, Almost All Surveyed Customer Service Agents Reported Verbal Harassment, and Some Reported Physical Assaults
No Comprehensive Information Is Available to Understand Assaults by Passengers against Airline Customer Service Agents
Limited data are available to determine the frequency or nature of passenger assaults at airports against airline customer service agents. We reviewed selected data from DOJ, DOT, FAA, FBI, and TSA and found that no dataset can isolate such passenger assaults. For example, while the FBI collects transportation crime data from law enforcement agencies about incidents that occur at air, bus, or train terminals—including information on the victim, offender, and location of the crime—the data cannot isolate passenger assaults against airline customer service agents.
While representatives from selected airport law enforcement agencies and airlines we interviewed said they collect information related to passenger assaults for their respective airports or airlines, these data were generally unavailable. In particular, representatives from all six selected airport law enforcement agencies we interviewed said providing data on passenger assaults against airline customer service agents would require manually reviewing all police reports. One selected airport law enforcement agency that had manually reviewed its data for 2018 found that of the 237 assistance calls it received for incidents between customer service agents and passengers, law enforcement completed an incident report for 12 of these calls and referred two reports to state prosecutors. Representatives from five of the six selected airlines declined to share data with us, saying data were not readily available, were business proprietary, or were business sensitive. Representatives from the remaining airline provided us with data from the third and fourth quarters of 2018; these data indicated that incidents between passengers and customer service agents generally remained constant, with an average of approximately 1.2 disruptive passengers per 1,000 passenger boardings.
About 10 Percent of Surveyed Customer Service Agents Said Passengers Physically Assaulted Them in the Past Year
In the absence of available data, we surveyed a non-generalizable sample of 104 randomly selected customer service agents to understand their experiences performing their jobs over the last year. According to these 104 customer service agents, almost all (96) reported experiencing verbal harassment, such as passengers yelling, cursing, or being argumentative (see fig. 2). Almost half (46) reported experiencing verbal threats, such as passengers threatening to harm the customer service agent. Twenty-two customer service agents reported that a passenger attempted to physically assault them by, for example, attempting to push them. Fewer (12) customer service agents said that passengers actually physically assaulted them. We also found that about one-third (34) of surveyed customer service agents said they experienced “other types of harmful actions,” which agents said included passengers destroying property, taking video of agents, grabbing agents’ identification badges, and stalking agents after work. Stakeholders we interviewed from selected airports, airport law enforcement, and airlines generally agreed that passengers can be verbally disruptive but that physical assaults are less frequent. More specifically, of these 17 stakeholders, most (13) agreed that disruptive passenger behavior is frequent. Most (11) also agreed that physical assaults occur less frequently than verbal threats. Nevertheless, while representatives from two selected unions did not have data on such actions, they emphasized to us that the customer service agents they represent face difficult working conditions. The union representatives also stated that passenger assaults, including verbal threats and physical assaults, are becoming more common. Further, three of the nine stakeholders who provided a perspective said that incidents against customer service agents are increasing. For example, representatives from one airline we interviewed said that over the past 5 years, they have observed an increase in both the frequency and severity of passenger assaults, in addition to other disruptive behavior.
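The airline-provided figure cited above is a rate per 1,000 passenger boardings. For readers unfamiliar with that normalization, the short Python sketch below shows the arithmetic; the incident and boarding totals are invented for illustration and are not the airline's data.

# Computing an incident rate per 1,000 passenger boardings.
# The counts below are hypothetical, not the airline's actual figures.

incidents = 5_400        # disruptive-passenger reports in a quarter
boardings = 4_500_000    # passenger boardings in the same quarter

rate = incidents / boardings * 1000
print(f"{rate:.1f} disruptive passengers per 1,000 boardings")  # prints 1.2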
A number of factors may contribute to passenger assaults. Selected stakeholders, including those from airlines, airports, airport law enforcement, and other industry associations, most commonly cited (24) alcohol consumption at the airport or drug use as a contributing factor. For example, according to representatives from one law enforcement agency, when customer service agents deny boarding to intoxicated passengers, passengers can become verbally or physically aggressive toward customer service agents. Other stakeholders told us that passengers increasingly have more opportunities to consume alcohol while waiting for their flights, thereby increasing alcohol-related incidents. For example, representatives from one airport noted that tablets at the boarding area allow passengers to place orders for alcohol while seated at the gate. Seventeen selected stakeholders we interviewed also told us that airlines’ business practices, such as charging fees for checked and carry-on baggage or policies around delays and cancellations, might aggravate or surprise passengers and lead them to be aggressive toward customer service agents. Some stakeholders (10) also said that other factors, such as long lines and large crowds in the airport, can increase passengers’ stress levels. Moreover, according to some stakeholders, service failures—such as flight delays, cancellations, or lost baggage—can exacerbate these stressors. Of the 61 surveyed customer service agents who reported experiencing verbal threats, attempted physical assaults, actual physical assaults, or other harmful actions, most (45) said these incidents negatively affected their overall well-being. Similarly, selected union representatives we interviewed also said that these incidents can increase stress and anxiety for customer service agents.
Almost All Surveyed Customer Service Agents Who Said They Experienced a Passenger Assault Reported It, and Airline Management or Airport Law Enforcement Often Took Some Action
Almost all customer service agents (56 of 61) who stated in our survey that they experienced passenger conduct amounting to more than harassment said they reported the conduct to someone. Specifically, 46 customer service agents stated that they contacted their immediate airline manager; 28 stated that they contacted airport law enforcement; and 6 stated that they contacted airport staff or other entities. These actions described by customer service agents we surveyed generally aligned with selected airlines’ procedures for handling passenger assaults. Specifically, representatives from five selected airlines told us that while their respective airline’s policy generally calls for agents to contact management first, agents can also contact airport law enforcement if they feel like their safety is threatened. However, representatives from two selected unions told us that airline managers are sometimes hesitant to inform law enforcement about incidents—or have their agents contact law enforcement—or to elevate incidents internally. According to one union representative, airlines prefer to keep such incidents internal and emphasize providing on-time service to their passengers. Contacting law enforcement could make this difficult to achieve, so when disruptive passenger behavior occurs, airlines may be inclined to allow the passenger onboard the aircraft instead of contacting law enforcement.
Of the 56 customer service agents who stated they reported the passenger conduct, over half (33) said that, to their knowledge, representatives from airlines, law enforcement, or airports took action in response. According to our survey results, these representatives generally took a range of actions, including, but not limited to, requesting that a passenger stop the disruptive behavior, completing an airline or police report, denying a passenger boarding, or arresting a passenger. Representatives most commonly removed passengers from an area or denied passengers boarding (18); defused the situation (7); or arrested the passenger (4). Twenty-six customer service agents said that no action was taken in response to the incident, which left some feeling unsupported by airline management. Moreover, according to representatives from one union, in some instances, customer service agents feel that if airline management provides passengers with travel benefits, such as seat upgrades or airline miles, to defuse these types of situations, it can appear to be condoning or rewarding any passenger misbehavior. The FAA Reauthorization Act of 2018 requires airlines to develop and submit employee assault-prevention and response plans to FAA by January 2019. In these plans, airlines are required to document:
reporting protocols for airline customer service agents who have been the victim of a verbal or physical assault;
protocols for notifying law enforcement after an incident of verbal or physical assault committed against an airline customer service agent;
protocols for informing federal law enforcement about violations of federal law that prohibits interference with security screening personnel;
protocols for ensuring that a passenger involved in a violent incident with an airline customer service agent is not allowed to move through airport security or board an aircraft until appropriate law enforcement has an opportunity to assess the incident and take appropriate action; and
protocols for informing passengers of federal laws protecting federal, airport, and airline employees who have security duties within an airport.
In March 2019, FAA officials said they had not received employee assault-prevention and response plans from all of the 49 U.S. airlines that were required to submit such plans. However, at that time, officials also said they were not concerned about any delays because they believed airlines already have internal policies and procedures for handling these types of incidents. Nevertheless, FAA officials told us they intended to issue a reminder to the airlines. Of the six selected airlines we interviewed, representatives from two airlines said they had submitted their plans to FAA, and representatives from the remaining four airlines said their plans were in development. Further, when we asked airlines to describe their policies for handling assaults, some of the policies that representatives described aligned to some requirements in the Act for the plans. For example, as discussed previously, all six selected airlines told us they had policies for how customer service agents or managers should notify airport law enforcement when assaults occur. Moreover, representatives from all six airlines also described reports that customer service agents and employees complete when such incidents occur. In July 2019, FAA issued a notification to airlines, reminding them to develop and submit their plans.
FAA officials attributed delays in following up with airlines to the government shutdown in early 2019 and multiple competing requirements in the Act. FAA officials also said they were initially hesitant to issue a notification around these plans, since the agency has a limited role and does not promulgate requirements for the training or oversight of customer service agents. Nevertheless, FAA officials said they plan to continue to follow up with the airlines as needed to collect the remaining plans.
Most Selected Stakeholders Said State and Local Laws and Resources Sufficiently Deter and Address Passenger Assaults against Airline Customer Service Agents
Despite General Satisfaction, Some Said Stronger Penalties and Other Legal Avenues Could Be Pursued
All selected stakeholders we interviewed representing airlines, airports, airport law enforcement, and prosecutors (23 of 23) who provided a perspective said that current state and local laws sufficiently deter and address passenger assaults. We spoke with seven selected state prosecutors who told us that they can charge passengers who take such actions against customer service agents with, among other offenses, assault; battery (e.g., intentional causing of bodily harm); disorderly conduct (i.e., acts that are of a nature to outrage the sense of public decency, or affect the peace and quiet of persons who may witness them, or engaging in brawling or fighting); and trespassing. According to these prosecutors, they typically charge passengers for assaults as misdemeanors, which one prosecutor told us generally does not result in passengers’ serving any jail time. While four selected state prosecutors who regularly handle misdemeanor prosecutions did not have data isolating these crimes, three recalled charging passengers for assaults against customer service agents. For example, a representative from one prosecutor’s office estimated that, over the last 5 years, law enforcement had referred 25 to 30 of these incidents to his office and that his office had prosecuted six or seven of these cases. In determining whether to pursue a case, five prosecutors we interviewed told us they weigh a number of factors, such as whether the customer service agent is willing to file charges; whether law enforcement observed the assault; and whether witnesses are available to testify. Nonetheless, according to prosecutors we interviewed, crimes committed at airports present unique challenges. More specifically, according to one prosecutor we spoke with, the transitory nature of airports makes it difficult to get witnesses to testify at a trial, because they are often passing through the airport en route to another destination. Four selected prosecutors also told us that passenger assaults might be charged as felonies if, for example, the crime involves the use of a deadly weapon or causes serious physical injury to the victim. However, these prosecutors told us such instances are infrequent and incidents between passengers and customer service agents rarely rise to the level of severity of a felony charge. To that end, none of the three prosecutors we interviewed who typically prosecute felony cases could remember charging a passenger for an assault against a customer service agent within the last year. Nevertheless, some selected stakeholders told us opportunities exist to strengthen penalties for passenger assaults.
More broadly, a few stakeholders that we interviewed—including one airline, one prosecutor, and one union—suggested opportunities exist to pursue harsher penalties. According to selected stakeholders, this could be achieved by, for example, prosecuting passenger assaults as felonies, prosecuting these incidents at the federal level, or seeking a legislative change to classify airline customer service agents as a protected class. For example, under Florida statute, an alleged battery against certain specified protected classes, including elected officials and teachers, is automatically reclassified from a first degree misdemeanor to a third degree felony charge, resulting in potentially harsher penalties.
While Selected Stakeholders Generally Agreed Resources Are Sufficient, Some Suggested Improvements
Most selected stakeholders we interviewed who provided a perspective said that their current resources sufficiently deter and address passenger assaults. Specifically, of the 20 selected stakeholders who provided a perspective, 15 said that current resources are sufficient and did not identify other resources that could improve their ability to address or mitigate passenger assaults. The remaining five stakeholders would like to see additional resources directed toward airports’ law enforcement agencies. In particular, four selected stakeholders said they believe that increasing the number and presence of law enforcement officers in airports would help deter or address passenger assaults. Representatives from one airline told us they hired private security officers to monitor ticketing and baggage areas at the airport to increase their security posture. While the purpose is not to address passenger assaults, representatives told us that these officers can respond to such assaults. The remaining stakeholder suggested law enforcement could receive additional training to improve responses when passenger assaults occur. Some of the selected stakeholders we interviewed who did not identify gaps in resources nonetheless offered suggestions to further deter or mitigate passenger assaults, including:
Provide additional training for customer service agents. Three stakeholders told us customer service agents should receive additional training on conflict de-escalation.
Increase information sharing and reporting. Three selected stakeholders said that information sharing could be improved among relevant stakeholders—including airlines and airport law enforcement. For example, representatives from one airline said they have limited insight into the outcomes of passenger assaults unless they contact airport law enforcement or prosecutors. Two selected union representatives said that having better data on these incidents could be beneficial to understand the scope of the problem.
Increase public education and support for customer service agents. Representatives from two unions would like to see (1) signage at airports saying that assaults by passengers are subject to prosecution, and (2) airlines provide additional support to customer service agents, in the form of legal assistance or time off, to press charges against passengers alleged to have committed such assaults.
Moving forward, the FAA Reauthorization Act of 2018 requires airlines to provide initial and recurrent training for all employees on, among other things, de-escalating hostile situations, and, as previously noted, the reporting protocols for these incidents.
Providing such training and having additional reporting protocols could provide customer service agents with additional tools for defusing these incidents and standardize how airlines respond to these incidents, respectively.
Agency Comments
We provided a draft of this report to DHS, DOJ, and DOT for review and comment. DOJ provided technical comments, which we incorporated as appropriate. DHS and DOT did not have any comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, the Attorney General, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or VonahA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Survey for Airline Customer Service Agents
In the past year, how many times have you experienced the following incidents:
Passenger verbally harassed you
Passenger verbally threatened you (i.e., said they would do something to you specifically)
Passenger attempted to physically assault you (tried to hurt you)
Passenger committed other harmful action (please describe)
3. How, if at all, have these incidents affected your overall well-being?
a. No effect
b. Slightly negative effect
c. Very negative effect
4. Now thinking about the most severe incident you have experienced in the past year, which of the following airport officials, if any, did you contact about this incident?
a. Immediate airline manager
b. Airport law enforcement
c. Airport staff
d. Other–Please identify ______________________________
e. None
5. Did any airport or airline officials take action because of your most severe incident in the past year?
a. No
b. Don’t know
c. Yes. Please describe the action that was taken.
6. How, if at all, could airlines support customer-service representatives when these incidents happen?
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact: Andrew Von Ah, (202) 512-2834 or VonahA@gao.gov.
Staff Acknowledgments: In addition to the individual named above, other key contributors to this report were Jonathan Carver, Assistant Director; Melissa Swearingen, Analyst-in-Charge; Emily Flores; Clara Goldrich; Geoffrey Hamilton; Delwen Jones; Dawn Locke; Malika Rice; Kelly Rubin; and Amy Suntoke.
Why GAO Did This Study
Recent media reports have detailed incidents at airports where passengers have acted disruptively or violently toward airline customer service agents, who assist passengers checking into their flights and boarding aircraft, among other things. While state and local laws generally prohibit these types of actions, some stakeholders have raised questions about these agents' safety. The FAA Reauthorization Act of 2018 included a provision that GAO examine passenger violence against airline customer service agents at airports. This report examines (1) what is known about assaults by passengers against customer service agents and (2) stakeholders' perspectives on the sufficiency of state and local laws and resources to deter and address such incidents. GAO interviewed and reviewed available information from a non-generalizable sample of representatives from five large airports and six large airlines. GAO also interviewed six airport law enforcement agencies and seven prosecutors' offices. Further, GAO reviewed documents and interviewed two unions representing customer service agents and five federal agencies with airport safety or security responsibilities. GAO developed and administered a brief, non-generalizable survey to 104 customer service agents working at four selected large airports that GAO visited in March and April 2019. Survey results on customer service agents' experiences with passengers cannot be used to make inferences about all customer service agents but nevertheless provide valuable insights.
What GAO Found
No comprehensive data are available to determine the nature and frequency of passenger assaults—e.g., verbal threats, attempted physical acts, or actual physical acts—against airline customer service agents at airports. This lack of data is due, in part, to the limited federal role in addressing such assaults. GAO's survey of 104 airline customer service agents showed that over half (61) reported experiencing such actions in the past year, while almost all reported experiencing verbal harassment. About 10 percent reported experiencing physical assaults. Stakeholders GAO interviewed said that while passengers are often verbally disruptive, physical assaults are less frequent. These stakeholders also said that alcohol consumption, frustration over airlines' business practices (e.g., fees for checked or carry-on baggage), and long lines can contribute to these incidents. Of the stakeholders—i.e., airlines, airports, law enforcement, and prosecutors—GAO interviewed who provided perspectives and have responsibilities for passenger assaults, all 23 said state and local laws sufficiently deter and address such incidents, and 15 (of 20) said current resources are sufficient. One prosecutor told GAO the transitory nature of airports makes it difficult to get witnesses to testify at trial; when prosecuted, passengers generally face misdemeanor charges. While stakeholders GAO interviewed generally did not identify gaps in resources, some said incidents could be further mitigated if, for example, airports made law enforcement's presence more visible or airlines provided conflict de-escalation training to customer service agents. The FAA Reauthorization Act of 2018 required that airlines (1) provide such training to all employees, and (2) submit plans to the Federal Aviation Administration (FAA) by January 2019 detailing how airlines respond to passenger assaults.
In July 2019, FAA issued a notification to airlines reminding them to submit their plans; officials said they will continue to follow up with airlines until they receive the plans.
Background
Federal agency IT systems provide essential services that are critical to the health, economy, and defense of the nation. However, federal agencies increasingly rely on aging legacy systems that can be costly to maintain. As we previously reported in May 2016, our review of federal legacy systems found that 26 federal agencies reported spending almost $61 billion on operations and maintenance costs in fiscal year 2015. In addition, many of the government's IT investments used unsupported hardware parts and outdated software languages, such as the common business oriented language (COBOL). In some cases, this lack of vendor support created security vulnerabilities and additional costs because these known vulnerabilities were either technically difficult or prohibitively expensive to address. Congress enacted the MGT Act in December 2017 and established the TMF to help agencies improve, retire, or replace existing systems. Congress appropriates money to the TMF, which is used to fund projects approved by the board. As of August 2019, Congress had appropriated $125 million to the TMF—$100 million was appropriated in fiscal year 2018 and $25 million in fiscal year 2019.

Overview of the Technology Modernization Fund
The MGT Act assigns specific responsibilities to OMB, GSA, and the Technology Modernization Board for the fund's administration and also assigns responsibilities to federal agencies that receive awarded funds. Among other things, the act assigns the following responsibilities.
OMB. The act requires the Director of OMB to issue guidance on the administration of the fund and report the status of the awarded projects on a public website. The information reported is to include a description of the project, project status (including any schedule delay and cost overruns), financial expenditure data related to the project, and the extent to which the project is using commercial products and services.
GSA. The act designates the Administrator of General Services with responsibility for administering the fund. This includes, among other things: (1) providing direct technical support in the form of personnel services and other services; (2) assisting the Technology Modernization Board with the evaluation, prioritization, and development of agency modernization proposals; and (3) performing regular project oversight and monitoring of approved agency modernization projects. In March 2018, GSA established a TMF Program Management Office within the agency to manage these functions. An executive director leads the office and reports to the Office of the Deputy Administrator within GSA. The act requires the Administrator of General Services, in consultation with the Director of OMB, to establish administrative fees at levels sufficient to ensure the solvency of the fund in order to help offset GSA's operating expenses for these functions. Agencies pay fees if they receive funding for a project.
Technology Modernization Board. The board has responsibility for providing input to the Director of OMB for the development of processes for agencies to submit proposals, making recommendations to the Administrator of GSA to help agencies refine their submitted proposals, and reviewing and prioritizing submitted proposals. The board also is responsible for recommending the funding of modernization projects to the Administrator of GSA, and for monitoring the progress and performance of approved projects. In addition, the board is tasked with monitoring the operating costs of the fund.
As part of its oversight of awarded projects, the board requires each project to present a quarterly update and report on the status of milestones achieved in order to ensure the project is on schedule.
Other federal agencies. The act stated that any agency that submits an IT-related project proposal and receives TMF funding must repay the transferred amount as well as pay an administrative fee. After the board approves a project proposal, the respective agency is required to sign an interagency agreement with the TMF Program Management Office that specifies the terms of the TMF funding repayment, the administrative fee, and the repayment schedule before initial funds are disbursed and the project begins.
Figure 1 provides an overview of key TMF activities that OMB, GSA, and the Technology Modernization Board have undertaken to meet the responsibilities outlined in the MGT Act. These include the establishment of TMF administrative processes and the Technology Modernization Board's project award announcements, among other activities. These activities are also discussed in greater detail following the figure. In February 2018, OMB issued guidance on the implementation of the MGT Act that included instructions for agencies on submitting applications for TMF funding. Agencies were allowed to begin submitting initial application proposals on February 27, 2018. The guidance included an initial application template that agencies were required to complete. As part of the template, agencies were required to provide an estimate of the TMF funding request and the agency's method used for cost estimation. Subsequently, in March 2018, OMB issued funding guidelines for projects receiving awards. The guidelines stated that project proposals must include a reliable estimate of any project-related cost savings or avoidance relative to pre-modernization activities using the templates provided. In addition, the guidelines stated that estimates must undergo appropriate due diligence and concurrence from the requesting agency's Office of the Chief Financial Officer prior to submission to the board, in consultation with OMB's Resource Management Office and GSA's TMF Program Management Office. Further, the guidelines stated that the agency's estimation process would be subject to GAO review, pursuant to the act. For agencies receiving a TMF award, the guidelines required repayment of all transferred funds as well as an administrative fee, which was determined based on the amount of awarded funding. As part of the process, agencies were required to establish a written agreement with GSA that set forth the terms for repaying the transferred funds and the administrative fee. Agencies were required to start making payments one year after the initial amount of award funding was transferred and complete all payments within five years, unless otherwise approved by OMB. While the guidelines noted that reimbursement was not contingent upon the achievement of project-related savings, agencies could use the project's generated cost savings to repay the award.

Agencies Follow a Two-Phase Proposal Process When Applying for a TMF Award
The TMF application process occurs in two phases, each of which requires agencies to submit specific documents. During Phase 1, agencies are required to submit an initial project proposal providing preliminary information about the project, its purpose, and its anticipated benefits.
Within this documentation, agencies must confirm that funding for this project has never explicitly been denied or restricted by Congress or OMB, in accordance with the MGT Act. Also during this phase, the Technology Modernization Board evaluates proposals and makes recommendations for project funding based on how well the project documentation demonstrates a strong execution strategy and technical approach and includes a strong team with a demonstrated history of successful modernization efforts. The board encourages agencies to consider the adoption of commercial technology solutions in their proposals and present a strong technical approach and acquisition strategy to implement those solutions. In addition, agencies are encouraged to provide information on the potential impact of the modernization effort on the agency's mission, feasibility, opportunity enablement (e.g., cost savings), and common solutions. If the board approves the Phase 1 initial project proposal, the project team will move on to Phase 2. In Phase 2, the agency must submit a financial plan showing a cost estimate and estimated savings from the implementation of the proposed project. Agencies must provide a more comprehensive project description than that provided in Phase 1, including discrete milestones, funding schedule, project plan, and financial plan. These documents must be approved by the agency's chief financial officer and CIO. Phase 2 proposals must also address any other areas identified by the board in the initial project review. Further, the agency proposal team must prepare an in-person presentation for the board. OMB's Resource Management Office reviews the proposal documentation to ensure that the proposed project aligns with the requesting agency's mission. The office's review is intended to ensure that the proposal does not duplicate funding provided through existing appropriations and that the project has not previously been expressly denied funding or restricted by Congress. The review includes an assessment of the proposed project's information on the reimbursement of the awarded funds, the project's planned schedule, and out-year budget impacts. OMB also reported that it sends information on the proposed projects to Congressional appropriation committees for their review prior to the Technology Modernization Board's approval of a project. Agencies with projects that the board recommends for TMF funding are required to sign an interagency agreement outlining the repayment terms. In addition, projects receive incremental funding contingent on the successful execution of milestones outlined in the written agreement for the transfer of funds. Figure 2 describes the steps in both phases of the TMF proposal process. As of August 2019, the Technology Modernization Board had awarded $89.36 million to seven projects. Table 1 lists the projects that have received funding (in alphabetical order by agency), descriptions of the projects, and when the TMF funding awards were announced. For more details on each of the awarded projects, see appendix II.

OMB and GAO Have Issued Federal Cost Estimating Guidance
OMB Circular A-11 directs agencies to follow the guidelines outlined in its appendix on cost estimating for all IT investments and acquisitions within the federal government. Since OMB first introduced its cost estimating appendix in 2006, as noted in the circular, the appendix has been based on the GAO Cost Estimating and Assessment Guide.
The appendix outlines a number of major steps in the cost estimating process and references the practices in GAO's cost guide. Specifically, these steps include preparing a high-level work breakdown structure, defining ground rules and assumptions, developing the data by collecting information on the cost drivers, developing the estimate using various risk factors, performing a sensitivity analysis, documenting the estimate, and updating it on a regular basis. According to the GAO guidance, a cost estimate is considered reliable if it meets four characteristics and the specific set of best practices associated with each characteristic. Those characteristics are:
Comprehensive: An estimate should include all life cycle costs (from the program's inception and design through operations and maintenance), reflect the current schedule, and have enough detail to ensure that cost elements are not omitted or double counted. Specifically, the cost estimate should be based on a product-oriented work breakdown structure that allows a program to track cost and schedule by defined deliverables, such as hardware or software components. In addition, all cost-influencing ground rules and assumptions should be detailed in the estimate's documentation.
Well-documented: An estimate should be thoroughly documented; describe how it was developed; and include source data, clearly detailed calculations and results, and explanations of why particular estimating methods and references were chosen. Data should be traced to their source documents.
Accurate: An estimate should be based on historical data or actual experiences on other comparable programs and an assessment of most likely costs, and be adjusted properly for inflation. In addition, the estimate should be updated regularly to reflect significant changes in the program—such as when schedules or other assumptions change—and actual costs, so that it always reflects the current status.
Credible: An estimate should discuss any limitations of the analysis because of uncertainty surrounding data or assumptions. In addition, the estimate should incorporate the results of a sensitivity analysis (which examines the effects of changing assumptions on the estimate) and a risk and uncertainty analysis (which identifies potential project risks and assesses how they might affect the cost estimate). The estimate's results should be cross-checked, and an independent cost estimate should be conducted to see whether other estimation methods produce similar results.
If any of the characteristics is not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable.

Federal Law Generally Requires Agencies to Use Competitive Procedures When Awarding Contracts
Federal agencies are generally required to use full and open competition to award contracts for the procurement of goods and services (including commercial IT products), with certain exceptions. The Competition in Contracting Act of 1984 requires agencies to obtain full and open competition through the use of competitive procedures in their procurement activities unless otherwise authorized by law. Using competitive procedures to award contracts means that all prospective contractors that meet certain criteria are permitted to submit proposals.
While the Competition in Contracting Act generally requires federal agencies to award contracts using full and open competition, agencies are allowed to award contracts noncompetitively under certain circumstances. Generally, these awards must be supported by written justifications that address the specific exception to full and open competition that is being used in the procurement. One allowable exception to full and open competition is when the contractor is the only source and no other supplies or services will satisfy agency requirements. Federal agencies have the option to use a variety of contract types when purchasing IT products and services, including government-wide acquisition contracts, IT Schedule 70 contracts, and blanket purchase agreements. These contracts and agreements allow agencies to establish a group of prequalified contractors to compete for future orders under streamlined ordering procedures once agencies determine their specific needs. Agencies can then issue orders on these contracts and agreements, obligating funds and authorizing work to be performed. Agencies are required to publicly report their contract transactions in the FPDS-NG database. These contract transaction data include information on the type of award made, the amount of the award, and whether competitive procedures were used. Specifically, agencies are required to identify the extent to which the contract was competed and what solicitation procedures were used. In addition, if an agency awards task orders on an existing contract, then the agency is required to identify whether competitive procedures were used. Further, if the contract did not use competitive procedures, then the agency is required to report the reason that the contract was not competed.

About $1.2 Million Has Been Obligated to Cover TMF Operating Expenses and Agencies Expect to Realize Savings in Fiscal Year 2020 or Later
As of August 31, 2019, GSA's TMF Program Management Office had obligated about $1.2 million in operating costs for activities related to the establishment and oversight of the fund. While the office intended to assess administrative fees to fully recover its operating expenses, the actual amounts collected as of August 2019 had been less than planned. This was due to factors such as the office's formulation of fee rates based on appropriation levels that were higher than what was ultimately received, along with changes to several projects' scope and milestones. Further, cost savings have yet to be realized. Officials from the seven TMF-funded projects reported that they expect to begin realizing cost savings from their projects starting in fiscal year 2020 or later.

TMF Operating Expenses Are to Be Offset by Administrative Fee Collection, but Collected Fees Have Been Less Than Planned
According to the MGT Act, the TMF Program Management Office may obligate funds to cover its operating expenses out of the appropriations received for the fund (totaling $125 million as of August 2019) in order to provide support to the Technology Modernization Board in meeting its responsibilities. To help offset TMF operating expenses, the act required the GSA administrator, in consultation with the OMB director, to establish administrative fees at levels sufficient to ensure the solvency of the fund (so that obligations and transfers of funds to awarded projects never exceed the amount available in the fund).
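The solvency constraint just described can be made concrete with a small sketch: treat the fund as a running balance, with appropriations and collected fees as inflows and project transfers and operating obligations as outflows, and check that the balance never goes negative. This is a minimal illustration, not GSA's accounting method; the solvent function and all dollar figures are hypothetical round numbers invented for the example, and the sketch ignores amounts awarded but not yet transferred, which would further reduce what is available for new awards.

```python
# Minimal sketch of the solvency constraint described above: cumulative
# outflows (project transfers plus operating obligations) may never exceed
# cumulative inflows (appropriations plus collected fees).
# All figures below are hypothetical round numbers, not TMF ledger data.

def solvent(yearly_events):
    """yearly_events: iterable of (year, inflows, outflows) in dollars.
    Inflows are applied before outflows within a year (a simplification).
    Returns True if the running balance never goes negative."""
    balance = 0.0
    for _year, inflows, outflows in sorted(yearly_events):
        balance += inflows - outflows
        if balance < 0:
            return False  # obligations would exceed the amount available
    return True

events = [
    # (year, appropriations + fees collected, transfers + operating costs)
    (2018, 100_000_000, 12_000_000 + 400_000),
    (2019, 25_000_000 + 33_000, 26_000_000 + 800_000),
]
print(solvent(events))  # True: the balance stays positive in this example
```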
Subsequent OMB guidance, issued in March 2018, required TMF-awarded projects to pay an administrative fee on awarded funds, beginning the first year after the initial incremental amount of award funding was transferred to the agency. The TMF Program Management Office issued further guidance in June 2018 that established administrative fee rates based on a percentage of the amount transferred to an agency project and the payment period. At the time of our review, the administrative fee rates in effect covered the period from July 2018 through September 2019. The fee rates were set in June 2018 with the intent to operate the fund as a full cost recovery model, meaning that the Program Management Office planned to fully recover all operating expenses through administrative fee collection by fiscal year 2029 if the office's assumptions regarding appropriation levels and project selections were met. The office's reported intention is to help preserve the capital of the fund, which would maximize the amount of appropriations available for award. Table 2 outlines the rates for TMF administrative fees based on the number of years to repay the awarded funds and the percentage of the transferred amount, for the period of July 2018 through September 2019. (An illustrative calculation of this fee arithmetic appears following the discussion of the collection factors below.) The TMF Program Management Office sets new rates annually after review by the Technology Modernization Board and approval by GSA's Deputy Administrator; these rates go into effect in October of each year. As of August 31, 2019, the TMF Program Management Office had obligated about $1.2 million to cover its operating expenses and had begun to collect administrative fees from agency projects, consistent with the MGT Act. Specifically, from March 2018 (when the office began operations) through August 31, 2019, the office obligated approximately $409,000 in fiscal year 2018 and $797,000 for the first 11 months of fiscal year 2019. During the same period, the office collected $33,165 in administrative fees. Based on this amount, the fund was able to offset only approximately 3 percent of its obligated operating costs as of August 31, 2019. The TMF Program Management Office's administrative fee collection has been limited due to a number of factors that have affected the amounts scheduled to be collected: (1) no fees were collected in the first year of operation; (2) projects chose longer periods to make payments; (3) projects make payments based on funds transferred; (4) fee rates were determined based on assumptions regarding appropriations that were not met; and (5) project changes may affect fee collection.
No fees were collected during the first year of operation. OMB's funding guidelines allowed agencies to start paying administrative fees one year after a project received an award. Since the Technology Modernization Board began awarding funding in June 2018 (within fiscal year 2018), no projects were required to start paying administrative fees until fiscal year 2019, which deferred the start of the TMF Program Management Office's fee collection by one year.
Projects chose longer periods to make payments. When the TMF Program Management Office set administrative fee rates, agencies receiving awards were allowed to determine what rate they would pay according to how many years they planned to make payments. The office reported that a lower administrative fee rate was offered to projects that chose to repay awarded funds over a shorter period (3 years) rather than 5 years.
All seven projects that had been awarded funding as of August 31, 2019, chose the longer repayment period of 5 years with a 3 percent rate. The Executive Director of the TMF Program Management Office reported that the office offered a lower administrative fee rate with the intent of making repaid funds available more quickly to be awarded to new projects. In doing so, the Technology Modernization Board expected to be able to make additional awards, which would increase the collection of administrative fees. Further, according to the Executive Director, the office did not expect the agencies' selection of a 5-year repayment term instead of a 3-year term to significantly affect the performance of the fund. However, as the Executive Director noted, these longer repayment terms do affect the collection of administrative fee payments because a longer repayment term means that these funds are not as readily available to award to new projects and generate new fees.
Projects make payments based on funds transferred. Agencies receiving awards were required to make administrative fee payments based only on the amount of the award funding that was transferred, rather than on the full awarded amount. As such, this reduced the amount of fees that the TMF Program Management Office could collect in the initial years that agencies made fee payments. As of August 31, 2019, the Technology Modernization Board had authorized the transfer of $37.65 million (of the $89.36 million awarded) to the seven projects. Based on the amounts transferred, the office is scheduled to collect $1.2 million in administrative fees through 2025 from the seven projects. Table 3 shows the current scheduled administrative fee payments that will be collected from the seven projects based on the amount of awarded funding that the projects had received as of August 31, 2019. Going forward, as the seven projects receive all of the remaining awarded funds, the projects are planning to pay a total of $2.68 million in administrative fees through 2025. However, the Technology Modernization Board had not made awards to any additional projects as of August 2019, and, as a result, the office likely will not be able to collect any additional fees from new projects until at least fiscal year 2021. Any newly awarded projects would be eligible to delay paying administrative fees until 1 year after the initial award date in accordance with the funding guidelines.
Fee rates were determined based on assumptions regarding appropriations that were not met. The TMF Program Management Office set its current administrative fee rates in June 2018 based on the assumption that the fund would receive higher levels of appropriations than what was ultimately received. In doing so, the office projected that it would transfer more funds to projects, which would result in larger administrative fee collections over the initial years of the fund. Specifically, GSA requested $438 million in its fiscal year 2018 and 2019 budget requests for the TMF, but actually received $125 million in appropriations. Table 4 lists the amounts that GSA requested in its budget requests and the amounts appropriated for fiscal years 2018 through 2020. In making its June 2018 assumptions about the appropriations, the office projected that it would distribute larger amounts of funds in the first 2 years of operation and collect more administrative fees through fiscal year 2025.
However, the office’s projected collection of administrative fees is less than what was scheduled as of the end of August 2019. In particular, while the office exceeded its projections for distributing funds in fiscal year 2018 ($1.93 million more than projected), the office had not yet met its projection of distributing $75 million in fiscal year 2019—specifically, as of August 31, 2019, the office had distributed only $25.71 million to awarded projects. Consequently, these lower levels of distributed funds decreased the amount of administrative fees scheduled to be collected. Table 5 shows the TMF Program Management Office’s projections for fund distribution for fiscal years 2018 through 2019 and its projected fee collection, compared to the current scheduled distributions and administrative fee collection for fiscal years 2018 through 2025, as of August 31, 2019. Going forward, the office had projected that it would distribute $75 million in fiscal year 2020. However, based on our analysis, only approximately $35.6 million was available in the fund as of August 31, 2019, to award to new projects. The Executive Director of the TMF Program Management Office stated that the office had to make assumptions about the TMF appropriation levels in order to develop the rate model. In doing so, all of the underlying assumptions and parameters related to determining the administrative fee rates and ensuring the fund operated at full cost recovery were reviewed by GSA’s Office of the Chief Financial Officer and Office of General Counsel, OMB, and the Technology Modernization Board before the GSA Deputy Administrator approved the fee rates in June 2018. In addition, the Executive Director noted that, at the time the rate model was developed, the office did not yet have information on the fiscal year 2019 appropriations and made the assumption that the fund would receive the same level of appropriations as in fiscal year 2018 ($100 million). However, based on the wide gap between the budget requests and what funds were ultimately appropriated in fiscal years 2018 and 2019, these assumptions regarding fund appropriation levels did not materialize and impacted the amount of fees that could be collected from awarded projects in fiscal year 2019. Four projects’ changes will affect fee collection. As of August 31, 2019, officials responsible for the management of four of the seven TMF- funded projects reported that they were planning to make significant changes to their projects’ approved scope or scheduled milestones. Officials from two projects reported that they had received approval for these scope changes from the Technology Modernization Board (in June 2019 and August 2019, respectively) and are currently waiting on approval for the repayment schedule changes as of August 31, 2019. Officials from the other two projects reported in August 2019 that they planned to present their changes to the board for approval. Based on our analysis, these changes are expected to affect the four projects’ administrative fee repayment schedules and reduce two projects’ administrative fee collection by $369,117. Table 6 lists the changes to the four TMF-funded projects as of August 31, 2019, as reported by the agencies; the status of the Technology Modernization Board’s approval of the changes; and the potential impacts these changes are expected to have on administrative fee collection. In addition, more details on the changes reported by the four projects are included in appendix II. 
The Executive Director of the TMF Program Management Office stated that the four projects' reduction or delay in administrative fee payments should not affect administrative fee collection. The Executive Director explained that the return of previously awarded funds will allow the Technology Modernization Board to have more funds available to award to new projects, which would generate new administrative fees. However, these proposed changes to the four projects' scope and schedule likely will affect upcoming administrative fee collection because additional time will be needed to review new project proposals. In addition, the agencies may delay administrative fee payments for one year after award issuance. As a result of the five factors that we identified as having impacted administrative fee collection as of August 2019, there is likely to be a significant gap between the office's current administrative fee collection and its recovery of operating expenses from that collection. Specifically, based on our analysis, it will take the TMF Program Management Office at least 5 years (until 2024) to recover the operating costs expended as of August 31, 2019 (over $1.2 million), with the current collection of administrative fees. In addition, once the two projects' proposed scope and schedule changes are approved by the Technology Modernization Board (decreasing fees collected by $369,117), it is likely that the office will take longer than 5 years to recover these operating costs. Further, it is not clear when the TMF Program Management Office will recover future operating expenses incurred in fiscal year 2020 and beyond. Moreover, these factors will most likely continue to be a challenge for OMB and the office going forward if newly awarded projects choose longer repayment periods or more awarded projects make changes that affect fee collection. Consequently, OMB and the TMF Program Management Office are not currently on track to operate the fund at full cost recovery, as intended. The Executive Director of the TMF Program Management Office stated that the office had reduced its fiscal year 2019 operating expenses by almost 50 percent from the original planned operating level (in the fiscal year 2019 President's Budget). In particular, the Executive Director reported that the office used temporary staff internally to deliver administrative and support activities, such as website updates and the preparation of meeting agendas and minutes, rather than relying on contractors. The office added that using internal temporary employees provided it with the flexibility to scale operations up and down as appropriate. As of August 2019, the office was not pursuing a staff increase. Further, the Executive Director stated that, as of August 2019, the office was reassessing the assumptions for the administrative fee rate model for the upcoming year, including assumptions for fiscal year 2020 appropriations. The Executive Director added that the office would like to have more information on its fiscal year 2020 appropriations in order to help determine the new rate. These assumptions would be used to develop a new rate model that is expected to go into effect on October 1, 2019, for fiscal year 2020. As for the office's ability to manage the fund at full cost recovery, the Executive Director stated that all of the assumptions would have needed to be met in order to ensure the TMF operated with full cost recovery.
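To illustrate the arithmetic behind these figures, the short sketch below applies the 3 percent rate to the amounts transferred and compares the resulting fee stream with the operating costs obligated through August 2019. It assumes, for simplicity, that the fee is a one-time percentage of each transferred amount paid in equal annual installments over the 5-year repayment term; Table 2's actual rate structure governs, so the results should be read as approximations rather than as the office's own schedule.

```python
# Simplified illustration of the fee arithmetic discussed above, assuming a
# one-time 3 percent fee on transferred amounts, paid evenly over 5 years.
# These are approximations; Table 2's actual rate structure governs.

FEE_RATE_5_YEAR = 0.03         # reported rate for the 5-year repayment term
transferred = 37_650_000       # transfers authorized as of August 31, 2019
operating_costs = 1_200_000    # operating costs obligated through August 2019

total_fees = transferred * FEE_RATE_5_YEAR
annual_fee = total_fees / 5

print(f"Scheduled fees: ${total_fees:,.0f}")
# ~$1.13 million, in line with the roughly $1.2 million scheduled through 2025

print(f"Years to recover: {operating_costs / annual_fee:.1f}")
# ~5.3 years, consistent with the at-least-5-years (until 2024) recovery
# time frame discussed above
```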
The Executive Director added that the office still intends to pursue full cost recovery going forward, but noted that this may change if the new set of assumptions is not met. Further, the Executive Director reported that four project proposals were in draft stages or pending a Technology Modernization Board determination as of August 2019. Since the fund was established in December 2017, OMB, the Technology Modernization Board, and the TMF Program Management Office have provided oversight of the fund's awarded projects by requiring the respective agencies to provide quarterly updates on the status of project milestones and transferring additional funds only when milestones were reached. However, the board had not made a corresponding effort to ensure that the TMF Program Management Office's operating costs and administrative fee collection remained on track to achieve full cost recovery as intended. In addition, the office's plan to take 12 years—from the start of operations in fiscal year 2018 until fiscal year 2029—to fully recover its operating costs hinders GSA's ability to maximize the amount of appropriations available for award due to the length of time necessary to recover its costs. As a result, as of August 2019, OMB and the TMF Program Management Office were not on track to recover all operating expenses related to fund administration and oversight, thereby leaving less of the fund's capital available for project awards. The TMF Program Management Office's authorized collection of administrative fees is intended to allow the office to offset expenses, which maximizes the amount of funding that can be awarded to projects. However, given the lower-than-expected collection of these administrative fees and the office's lengthy time frame for recovering all costs, it may be prudent to review those fees and determine whether their rates are set appropriately. Unless OMB and the TMF Program Management Office take steps to develop a plan that outlines the actions needed to fully recover TMF operating expenses with administrative fee collection in a timely manner, there will be fewer funds available to award to projects that are intended to improve the efficiency and effectiveness of government IT systems.

TMF Projects Plan to Begin Realizing Cost Savings in Fiscal Year 2020 or Later
The MGT Act established the TMF to help improve, retire, or replace federal IT systems with more efficient and effective systems that would cost less money to operate and maintain. As part of its selection criteria, the Technology Modernization Board stated that the agency would need to clearly demonstrate in its proposal how the proposed project would generate cost savings or how the modernization of the system would dramatically improve the quality of service provided. In addition, OMB's funding guidelines stated that the project proposal must include a reliable estimate of any project-related cost savings or avoidance using the templates provided. In the TMF application, agencies were required to identify the year in which their project would start to realize cost savings after receiving an award (the earliest year savings could begin to be realized was fiscal year 2019). Further, the guidelines stated that the agency's estimation process would be subject to GAO review, pursuant to the act.
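As a simple illustration of the savings-or-avoidance comparison that the funding guidelines call for, the sketch below nets projected post-modernization operating costs against a pre-modernization baseline, year by year. Every figure is hypothetical and drawn from no actual TMF project; a reliable estimate would also need to follow the Circular A-11 steps and satisfy the four characteristics discussed earlier in this report.

```python
# Minimal sketch of a project-related savings estimate: projected
# post-modernization costs are netted against the pre-modernization
# baseline, year by year. All figures are hypothetical.

baseline_om = [5.0, 5.2, 5.4, 5.6, 5.8]    # legacy O&M costs, $M per year
modernized_om = [1.0, 1.0, 1.1, 1.1, 1.2]  # post-modernization O&M, $M per year
project_cost = 6.0                          # one-time modernization cost, $M

gross_savings = sum(b - m for b, m in zip(baseline_om, modernized_om))
net_savings = gross_savings - project_cost

print(f"Gross savings: ${gross_savings:.1f}M")      # $21.6M over five years
print(f"Net of project cost: ${net_savings:.1f}M")  # $15.6M
```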
As of August 31, 2019, officials responsible for project management for each of the seven TMF-funded projects reported that their projects had not yet begun to realize cost savings because either the project was still being implemented or the project had experienced changes to prior projections. Specifically, officials for four of the seven projects reported that their projects were currently meeting targeted milestones for implementation and would begin to realize cost savings starting in fiscal year 2020 or later as planned. Officials for the other three projects reported that they had recently made changes to the projects' scope and scheduled milestones that delayed when the projects would begin to realize savings. For more details on the changes reported by these three projects, see appendix II. Table 7 shows the year in which each of the seven TMF-funded projects reports that it would begin to realize cost savings. One of the three projects that experienced changes, Agriculture's Infrastructure Optimization project, had originally planned to begin realizing cost savings starting in fiscal year 2020; however, project scope and milestone changes delayed the expected date for realization of these savings. Officials from Agriculture's Infrastructure Optimization project reported in August 2019 that the new time frame for realizing cost savings remained to be determined. In addition, Energy's Enterprise Cloud Email project had originally intended to begin realizing cost savings in 2021, but changes to the project's scope have delayed the realization of savings until 2024. The third project, GSA's NewPay, had originally planned to begin realizing savings in 2024, but changes to the project's technological implementation have delayed the realization of savings. In particular, officials from GSA's NewPay project reported that the project still anticipates realizing cost savings, but the date for these savings remains to be determined. Congress enacted the MGT Act and established the TMF to help agencies transform their legacy IT systems to be more cost effective and efficient. As the awarded projects complete implementation efforts, it will be critical for agencies to realize cost savings from these modernization efforts in order to help ensure the fund is successful.

Savings Estimates for the Technology Modernization Fund Projects Are Not Reliable
OMB's Circular A-11 directs agencies to follow the guidelines outlined in its appendix on cost estimating for all IT investments and acquisitions within the federal government. Since 2006, as noted in the circular, the cost estimating appendix has been based on the GAO Cost Estimating and Assessment Guide. As discussed earlier, the appendix outlines a number of major steps in the cost estimating process and references the practices in GAO's cost guide. According to GAO's guidance, a reliable estimate should meet the criteria for four characteristics and the specific set of best practices associated with each of the characteristics.
The four characteristics and the specific best practices, among others, are: comprehensive – the estimate should include all life cycle costs, a work breakdown structure, and ground rules and assumptions; well-documented – the estimate documentation should describe how the source data were used, the calculations that were performed and their results, and the estimating methodology used; accurate – the estimate should be based on historical data or actual experiences on other comparable programs and be updated regularly to reflect changes in the program; and credible – the estimate should incorporate the results of sensitivity analyses and risk and uncertainty analyses. According to the GAO guidance, if the overall assessment rating for each of the four characteristics is not fully or substantially met, then the cost estimate cannot be considered reliable. Based on our analysis of the cost estimates for the seven TMF-funded projects, the reported savings estimates that were derived from those estimates cannot be considered reliable. Officials responsible for developing the cost estimates for each of the projects did not incorporate all of the best practices for a reliable cost estimate, as defined in the GAO guidance and OMB Circular A-11. Table 8 describes the four GAO cost estimating characteristics, key practices associated with each characteristic (and the major steps in OMB Circular A-11), and the results of our analysis of the seven TMF-funded projects' cost estimates. In addition, appendix III provides more details on our individual assessments of the seven projects' cost estimates. In assessing the reliability of the projects' cost estimates, we found that the TMF Program Management Office did not provide written guidance for developing the cost estimates in a manner consistent with federal requirements outlined in Circular A-11 or our best practices. Specifically, the only guidance that the Technology Modernization Board provided on the TMF website was the instruction to submit a project cost estimate using a template developed by the Program Management Office, and approved by OMB and the Technology Modernization Board. While the template provided a means to report costs for the proposed projects, the template did not require agencies to follow any of the best practices outlined in GAO's Cost Estimating and Assessment Guide, which is referenced by Circular A-11. Further, there were no written instructions for the template regarding the data elements or the fields required to be completed. Agency officials responsible for developing the cost estimate for each of the seven projects all confirmed that they were instructed to use the project cost estimate template to report their projects' cost and savings estimates. In addition, these officials acknowledged that they did not follow their own internal cost estimate development processes or GAO best practices when developing their estimates. The Executive Director of the TMF Program Management Office stated that the project teams were expected to follow their own internal investment management process for developing the cost estimates. Additionally, the agencies' chief financial officers and CIOs were required to review and approve the project proposal applications, including the completed cost estimate templates, prior to the agencies' submissions to the Technology Modernization Board. Further, the Executive Director acknowledged that written guidance had not been developed for completing the project proposal documentation.
Instead, the Executive Director stated that the office had held meetings, as requested by each project team, to provide assistance on how to complete the cost estimate template. The Executive Director explained that these meetings enabled the project teams to ask targeted questions on how to complete the template for their individual projects, which allowed the office to provide specific assistance on completing the template for each project. Staff in OMB's Office of E-Government and Information Technology stated that agencies are required to follow the requirements outlined in Circular A-11 regarding the development of a cost estimate for all IT investments. In addition, the staff noted that each proposal is required to be approved by the agency's Chief Financial Officer and CIO before being submitted to the Technology Modernization Board. The staff added that the information regarding the guidance for completing the proposal documentation and cost estimates is available on the TMF website. However, our review of the documentation provided on the TMF website did not identify any guidance regarding the development of the cost estimate as part of the proposal—except a statement requiring the completion of the provided template. The website also did not include any guidance instructing the agencies to follow the requirements outlined in Circular A-11, which references GAO's cost estimating guidance. As noted in GAO's cost estimating guide, reliable cost estimates can provide management the data necessary to make informed investment decisions, measure program progress, proactively correct course when warranted, and ensure overall accountability for results. Having a realistic estimate of projected costs also helps to ensure that projected cost savings are reliable. Building such quality into a cost estimate is addressed by the steps described in Circular A-11 (that references the practices outlined in GAO's cost guide). Regardless of whether or not agencies were told to do so, it is an agency's responsibility to follow these steps. Ensuring agencies understand the requirements they are supposed to follow when developing a cost estimate for their TMF proposal is critical to the success of the proposal process. If OMB and GSA do not clarify the requirement that agencies follow Circular A-11's cost estimating process (that references GAO's cost estimating guidance discussed in this report), agencies are at risk of continuing to provide unreliable cost information in their proposals to the Technology Modernization Board. Further, absent detailed guidance from the TMF Program Management Office on how to complete the cost estimate template, including information on the data elements and the fields required to be completed, agencies are at risk of providing incomplete or insufficient information in their project proposals. As a result, the board may not have sufficiently reliable project cost and savings information with which to make decisions on potential awards and whether these projects offer appropriate value for the investment being requested.

TMF Project Acquisitions Used Full and Open Competition or an Authorized Exception
The MGT Act requires the Administrator of GSA to ensure that commercial off-the-shelf products and services are incorporated to the greatest extent practicable in agency projects awarded funding through the TMF.
As required under the Competition in Contracting Act of 1984, all procurements, with certain exceptions, must be competed as full and open so that any qualified entity can submit an offer. Agencies are also required to publicly report their contract transactions in the Federal Procurement Data System-Next Generation (FPDS-NG), including information on the type of award made and whether competitive procedures were used. In addition, if an agency issues task orders on an existing contract, then the agency is required to identify whether competitive procedures were used. Further, if the contract did not use competitive procedures, then the agency is required to report the reason that the contract was not competed. As of August 31, 2019, six of the seven TMF-funded projects had awarded 23 contracts or task orders for work on the projects. Agency officials responsible for management of the six funded projects reported that 22 of the 23 awards used full and open competitive procedures, which we confirmed using acquisition data from FPDS-NG. HUD officials reported that the remaining award was based on a sole source contract that was not competed and an exception was documented. One project had not yet made an award. Table 9 lists the seven TMF-funded projects and the agencies' reported use of full and open competitive procedures in FPDS-NG for the related awards, as of August 31, 2019. In making the 22 awards, agency officials responsible for the management of the six funded projects reported that they had relied on existing IT service contracts and blanket purchase agreements, or had established new blanket purchase agreements for these projects. Specifically, 11 awards were based on task orders issued on existing contracts; nine awards were based on orders from existing blanket purchase agreements; and two awards were made on new blanket purchase agreements. In making these awards using existing contracts and blanket purchase agreements that had followed full and open competitive procedures, the agencies complied with the requirements for using competitive procedures. In those cases where the agencies used existing blanket purchase agreements, these orders were coded as competitive based on data reported in FPDS-NG. For the one award where competitive procedures were not used, HUD completed a justification and approval for other than full and open competition, indicating that only one responsible source and no other supplies or services would satisfy the agency's requirements. HUD officials stated that they chose a sole source contract because they wanted to retain the expertise of the existing contractors and maintain cohesion between the different phases of project work. For the project that had not yet made an award, officials responsible for the management of Agriculture's Infrastructure Optimization project reported that, due to a change in the scope of the project made in June 2019, no contracts had been awarded yet for work on the project. The officials reported that they anticipated making an award by the end of December 2019 and that the contract is to be awarded using competitive procedures. Agencies' continued adherence to federal acquisition requirements for full and open competition should help ensure that their TMF-funded investments deliver the intended services to benefit both the agencies and the public.
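The competition reporting described above lends itself to a simple tally. The sketch below classifies a handful of mock award records by their reported extent of competition and checks that any noncompetitive award carries a documented justification. The record layout, field names, and values are hypothetical simplifications for illustration, not FPDS-NG's actual schema or code set.

```python
# Illustrative tally of competition reporting of the kind described above.
# The records, field names, and values are hypothetical simplifications,
# not FPDS-NG's actual schema or code set.

from collections import Counter

awards = [
    {"project": "A", "extent_competed": "full and open", "reason_not_competed": None},
    {"project": "B", "extent_competed": "full and open", "reason_not_competed": None},
    {"project": "C", "extent_competed": "not competed",
     "reason_not_competed": "only one responsible source"},
]

print(Counter(a["extent_competed"] for a in awards))
# Counter({'full and open': 2, 'not competed': 1})

# A noncompetitive award should carry a documented exception, such as the
# sole-source justification HUD completed for its award.
for award in awards:
    if award["extent_competed"] == "not competed":
        assert award["reason_not_competed"], "missing justification"
```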
Conclusions
Since March 2018, when GSA established the TMF Program Management Office to administer fund operations, the office has obligated about $1.2 million to cover its expenses from managing the fund but has collected limited administrative fees to offset its expenses. As a result, the Technology Modernization Board has fewer funds than anticipated available to award to new projects. Going forward, OMB and the TMF Program Management Office are likely to face ongoing challenges in collecting administrative fees due to the factors that we have identified that affect fee collection and the office's lengthy time frame for recovering all costs. While OMB and the TMF Program Management Office are not currently on track to recover all operating expenses in a timely manner, Program Management Office officials have expressed the intent to revisit their fee structure, in part to address the lower than anticipated amount of fiscal year 2019 appropriations. Because of the number of factors that are likely to affect fee collection, it will be critical that OMB and the TMF Program Management Office take steps to develop a plan that outlines the actions needed to fully recover TMF operating expenses with administrative fee collection in a timely manner in order to maximize the funds available for awards. By creating a new funding mechanism to help modernize federal IT systems, Congress intended that funds would be used to improve, retire, or replace existing federal IT systems to improve efficiency and effectiveness of these systems. However, since none of the seven TMF-funded projects' cost savings estimates can be considered reliable, it is not clear whether the projects receiving funding to date will save the government as much money as was estimated. An important aspect of the success of the TMF will be clarifying the established requirement that agencies follow Circular A-11's cost estimating process (that references GAO's cost estimating guidance discussed in this report) in order to help ensure that the reliability of estimated savings for awarded projects is improved.

Recommendations for Executive Action
We are making five recommendations: two to OMB and three to GSA. Specifically:
The Director of OMB should develop and implement a plan with GSA that outlines the actions needed to fully recover the TMF Program Management Office's operating expenses with administrative fee collection in a timely manner. (Recommendation 1)
The Director of OMB should work with GSA to clarify the requirement in the TMF guidance that agencies follow the cost estimating process outlined in Circular A-11 (that references GAO's cost estimating guidance discussed in this report), when developing the proposal cost estimate. (Recommendation 2)
The Administrator of General Services should develop and implement a plan with OMB that outlines the actions needed to fully recover the TMF Program Management Office's operating expenses with administrative fee collection in a timely manner. (Recommendation 3)
The Administrator of General Services should work with OMB to clarify the requirement in the TMF guidance that agencies follow the cost estimating process outlined in Circular A-11 (that references GAO's cost estimating guidance discussed in this report), when developing the proposal cost estimate.
(Recommendation 4)
The Administrator of General Services should develop detailed guidance for completing the Technology Modernization Fund project cost estimate template, including information on the data elements and the fields required to be completed, in order to help ensure the accuracy and completeness of the provided information. (Recommendation 5)

Agency Comments and Our Evaluation
We provided a draft of this report to OMB and the five agencies for their review and comment. In response, of the two agencies to which we made recommendations, GSA stated that it agreed with one recommendation and partially agreed with the remaining two recommendations, and OMB did not state whether it agreed or disagreed with the recommendations. In addition, of the four agencies to which we did not make recommendations, one agency stated that it concurred with information presented in the report, two other agencies stated that they had no comments on the report, and a fourth agency did not state whether it had comments on the report. Further, four agencies provided technical comments on the report, which we incorporated as appropriate. The following discusses the comments received from each agency to which we made recommendations. GSA provided written comments in which it agreed with our recommendation to develop detailed guidance for completing the TMF project cost estimate template. Additionally, the agency partially agreed with our recommendation to develop and implement a plan with OMB that outlines the actions needed to fully recover TMF operating costs with administrative fee collection, stating that the agency had concerns with our discussion of this topic in the report. Among the concerns was that we did not clearly acknowledge that GSA is on track to meet the requirement codified in the statute to maintain the solvency of the fund. However, our report did not conclude that the fund was insolvent or that the fund was on track to become insolvent. Rather, we discussed the factors that have affected administrative fee collection to date. In our discussion, we noted that as a result of these factors, it will take the TMF Program Management Office at least 5 years (until 2024) to recover the operating expenses incurred as of August 31, 2019 (over $1.2 million), with the current collection of administrative fees. Consequently, as of August 2019, OMB and the TMF Program Management Office were not on track to recover all operating expenses in a timely manner, thereby hindering GSA's ability to maximize the amount of appropriations available for award. As such, we continue to believe our assessment is accurate. GSA also had concerns that we did not state that the TMF Program Management Office's goal of full cost recovery for operating expenses was over the lifetime of the fund. In our report, we discuss that the TMF Program Management Office planned to fully recover all operating expenses through administrative fee collection by fiscal year 2029. In doing so, we noted that the office's plan to take 12 years to fully recover its costs hinders GSA's ability to maximize the amount of appropriations available for award due to the length of time necessary to recover its costs. Therefore, we believe that we have sufficiently discussed the time frame GSA plans to take to fully recover its costs.
Further, GSA stated that our discussion of the TMF Program Management Office's operating costs would be improved if we noted that a large percentage of fund administrative costs was devoted to salaries for a limited number of staff. In determining the cost of administering the TMF, we analyzed the costs of establishing and overseeing the TMF and evaluated the collection of administrative fees from projects awarded funding, consistent with the MGT Act. In doing so, we noted the steps taken by the TMF Program Management Office to reduce its operating expenses, including reducing costs by 50 percent for fiscal year 2019, and not pursuing a staff increase in fiscal year 2019. We did not analyze any individual operating expenses and, therefore, have no basis to comment on current salary expenses and whether they could or could not be reduced. As such, we believe that we appropriately discuss the costs of establishing and overseeing the TMF and the relationship of those costs to the goal of fully recovering all operating expenses. Accordingly, we believe our recommendation to develop and implement a plan to fully recover office operating expenses with administrative fee collection is still warranted. The agency also partially agreed with our second recommendation to work with OMB to clarify the requirement in TMF guidance that agencies follow the federal cost estimating guidance discussed in this report. GSA stated that the agency does not set cost estimating policy requirements for agencies, as that is the responsibility of OMB and agency CIOs. In our report, we discuss the MGT Act's requirement that the Administrator of GSA, in consultation with the CIO Council and with the approval of the Director of OMB, administer the TMF. Because the GSA Administrator has been designated as responsible for administering the fund, the agency has a role in clarifying what guidance agencies should follow when developing their cost estimates for the TMF proposal application. Further, we acknowledge GSA's statement that the agency will commit to working with OMB and the Technology Modernization Board to identify necessary updates to the cost estimating guidance as a positive step toward addressing our recommendation. Consequently, we believe our recommendation for GSA to work with OMB to clarify the requirement in TMF guidance that agencies follow Circular A-11's cost estimating process (that references GAO's cost estimating guidance discussed in this report), when developing the proposal cost estimate, is still appropriate. GSA's comments are reprinted in appendix IV. OMB provided written comments in which the agency did not state whether it agreed or disagreed with our recommendations; however, OMB stated that the agency remains concerned with the facts, characterizations, and opinions in the draft report. The agency further stated that the draft report contains many key assumptions and recommendations that are misleading and paints an incomplete picture of the TMF. OMB then stated that while we met with the agency twice during the course of the audit, we engaged with GSA multiple times in contrast. According to OMB, many of the questions we posed to GSA would have been better answered by OMB, whose authorities in the budget, apportionment, and approval process for TMF proposals could have enabled us to state items in the report with greater accuracy.
In addition, the agency stated that many of its corrections and suggestions offered in its review of the statement of facts were rejected by us, although the agency offered no examples to support its comments. We disagree with OMB's statements regarding our audit methodology for several reasons. First, in meetings with staff from OMB's Office of E-Government and Information Technology, we obtained information from the staff in all of the areas noted by OMB in its letter. In our report, we discuss OMB's role in the fund's administration and the approval process for TMF proposals, as well as OMB's guidance in these areas. Further, we made ourselves available to engage with OMB throughout the course of the audit. For example, we arranged a meeting with the Federal CIO and her staff to discuss the administration of the TMF and to present our preliminary observations, but the meeting was cancelled by the Federal CIO's office due to scheduling constraints and was not rescheduled.

Second, we incorporated many of OMB's comments on the statement of facts related to OMB's role in fund administration and the approval process into our draft report. For example, although we had included information in the statement of facts regarding the requirement that agency CIOs and chief financial officers approve TMF proposals prior to submittal to the Technology Modernization Board, OMB requested that we include this information in other sections throughout the report. OMB also requested that we include language in the report to ensure that it was understood that TMF projects began after an interagency agreement was signed between the TMF Program Management Office and the agency, not when TMF awards were announced. We incorporated these changes into the background and other relevant report sections. However, in cases where OMB asked us to incorporate the entirety of language from the MGT Act—rather than summarizing the law's key requirements—we chose not to do so for the purposes of conciseness. OMB also requested that we update the status information for the TMF awarded projects in our report to be closer to the report's issuance. However, as we had told OMB staff during our review, we intended to report project information as of August 31, 2019, based on our audit methodology and reporting time frames. Consequently, we believe that we have accurately characterized the facts related to OMB's role in TMF administration and sufficiently incorporated OMB's relevant comments into our report.

OMB also disagreed with our characterization of the TMF repayment process and the assumptions about potential insolvency of the fund. As noted above in our response to GSA's comments, our report did not conclude that the fund was insolvent, or that the fund was on track to becoming insolvent. Rather, our report discusses the factors affecting administrative fee collection and the impact these ongoing challenges have on the TMF Program Management Office's ability to pursue a full cost recovery model and recover all costs by fiscal year 2029, as GSA intended. In addition, we acknowledged the Program Management Office's efforts to reduce its operating costs in fiscal year 2019 (to under $1 million). OMB also stated that the primary shortcoming has been the fact that the TMF has been underfunded by Congress, leading to slower-than-anticipated project volume.
In our report, among the factors that we discussed as affecting TMF fee collection, we noted that the initial TMF fee rates were determined in June 2018 based on assumptions regarding appropriations that were not met. We also noted the impact that these assumptions had on the TMF Program Management Office's projected collection of administrative fees in the first two years of operation and for fiscal year 2020. Specifically, we noted that the office projected it would distribute $75 million in fiscal year 2020 but had only approximately $35.6 million available in the fund as of August 31, 2019. We concluded that OMB and the TMF Program Management Office were not on track to recover all operating expenses in a timely manner, thereby leaving less of the fund's capital available for project awards. At no point did we assert that the fund was insolvent, or was in danger of becoming so. As such, we continue to believe our assessment of the fund's ongoing fee recovery is accurate and that our recommendation for OMB and GSA to work together to develop and implement a plan to use administrative fee collection to fully recover operating expenses is still warranted.

OMB also challenged our analysis of agency projects' cost estimates using our Cost Estimating and Assessment Guide because, according to the agency, we had asserted that federal agencies must follow the cost guide when developing cost estimates for federal projects. OMB stated that all projects, including those submitted for consideration, must follow OMB Circular A-11, not the GAO guide. Since OMB first introduced its cost estimating appendix to Circular A-11 in 2006, the circular has stated that the appendix is based on the GAO cost estimating guide. Specifically, the circular stated that the appendix is based on GAO's "guide to their auditors on how to evaluate an agency's cost estimating process, and the reliability and validity of the data used to develop the cost estimates. Following these guidelines will help agencies to meet most cost estimating requirements." Further, we reported that OMB's Circular A-11 cost estimating appendix outlined a number of major steps in the cost estimating process and referenced the practices outlined in GAO's cost guide. As our report states, OMB Circular A-11 directs agencies to follow the guidance outlined in the appendix on cost estimating for all IT investments and acquisitions within the federal government and, as mentioned above, that guidance is based on GAO's cost estimating guidance. We noted that OMB's guidance referenced GAO's cost guide; however, we did not assert that agencies were required to follow GAO's cost guide independent of Circular A-11. Further, our analysis of the cost estimates for the seven projects found that none of the projects incorporated all of the best practices for a reliable cost estimate, as defined in either OMB Circular A-11 or GAO guidance. We noted that the TMF's website did not include any guidance instructing agencies to follow the requirements outlined in Circular A-11; however, we stated that, regardless of whether or not agencies were told to do so, it was an agency's responsibility to follow these steps. Further, we noted that ensuring agencies understand the requirements they are supposed to follow when developing a cost estimate for the TMF proposal process is critical to the success of the proposal process.
Accordingly, we continue to believe our assessment of the seven projects' cost estimates is accurate and based on appropriate and generally accepted criteria, and that our recommendations to OMB and GSA in this area are still warranted. However, in the interest of ensuring that our recommendations are explicit about clarifying which requirements agencies are to follow when developing cost estimates, we have modified the language of our related recommendations to more directly address Circular A-11.

OMB also noted the additional requirements—beyond those found in Circular A-11—imposed on agency submissions by the Technology Modernization Board, including authoritative signoff by the agency chief information officer and chief financial officer for schedule and repayment documentation. The agency further asserted that the characteristics of the TMF, including the ability to incrementally fund projects and to adjust project scope and the timing of project transfers, mean that projects funded by the TMF are more likely to succeed. We agree that agencies' executive review of submissions to the board is an integral part of ensuring the quality of those submissions. Such reviews, coupled with clearer direction to agencies on what federal guidance they are required to follow, as discussed above, will further strengthen the quality of the supporting documentation submitted to the board.

Further, OMB stated that the board takes seriously its responsibilities to make sure approved projects meet the requirements of the MGT Act and the guiding principles established by the board, and to ensure that projects repay all required amounts while successfully delivering smarter, more secure commercial capabilities to improve citizen services. In addition, OMB stated that the board requires all approved projects to provide information, best practices, playbooks, and other supporting documentation. OMB also stated that the board has managed the TMF in alignment with industry-wide best practices for iterative, agile financing for technology projects, and has been judicious and discerning in how it invests TMF funds. We agree with the importance of ensuring approved projects meet the requirements of the MGT Act. In our report, we acknowledged the efforts of OMB, the Technology Modernization Board, and the TMF Program Management Office to provide oversight of the fund's awarded projects. However, our report also identified ongoing challenges with the TMF Program Management Office's fee collection, including the office's plan to take 12 years to fully recover its operating costs—a plan that was reviewed by the Technology Modernization Board and OMB—which will hinder GSA's ability to maximize the funds available for awards. We also agree that it is important that all approved projects have requirements in place related to providing information and supporting documentation. In our report, we discussed that OMB's funding guidelines required projects to include a reliable estimate of project-related savings. However, as we also noted, none of the seven projects' reported savings estimates were reliable because they did not incorporate all of the best practices for a reliable cost estimate as defined in OMB Circular A-11 and GAO's cost estimating guide. Therefore, it was not certain whether the projects that we reviewed would save the government as much money as was estimated.
While it is important that the board have requirements in place, it is equally vital that agencies clearly understand the requirements they are supposed to follow—and that these requirements are clearly articulated on the TMF website—for the proposal process to be successful. As such, we continue to believe our recommendations to OMB and GSA are appropriate. OMB's comments are reprinted in appendix V.

In addition to the aforementioned comments, the four agencies to which we did not make recommendations provided the following responses. In an email received on November 22, 2019, a Director of Strategic Planning, Policy, E-Government and Audits in the Office of the CIO at Agriculture stated that the agency concurred with the information presented in the report. In an email received on November 7, 2019, an audit coordinator in Energy's Office of the CIO provided technical comments, which we incorporated as appropriate, but did not state whether the agency had comments on the report. In written comments provided on November 19, 2019, HUD stated that it had no comments on the report; HUD's comments are reprinted in appendix VI. In an email received on November 6, 2019, an economist in Labor's Office of the Assistant Secretary for Policy stated that the agency had no comments on the report.

We are sending copies of this report to the appropriate congressional committees; the Director of the Office of Management and Budget; the Secretaries of the Departments of Agriculture, Energy, HUD, and Labor; the Administrator of GSA; and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to: (1) determine the costs of establishing and overseeing the Technology Modernization Fund (TMF), as compared to the savings realized by projects that have received awards; (2) assess the extent to which cost savings estimates for awarded projects are reliable; and (3) determine the extent to which agencies have used full and open competition for any acquisitions related to the awarded projects. The scope of our review included the Office of Management and Budget (OMB) and the General Services Administration (GSA) TMF Program Management Office, the two organizations responsible for TMF administration, as well as the five agencies that had received the seven awards from the fund as of August 2019—the Department of Agriculture (Agriculture), Department of Energy (Energy), Department of Housing and Urban Development (HUD), Department of Labor (Labor), and GSA.

For our first objective, we obtained and analyzed financial data from GSA related to actual and planned operating costs for establishing and overseeing the TMF for fiscal years 2018 through 2025 (fiscal year 2018 was the first year that the TMF was in operation). To ensure the accuracy and completeness of GSA's financial data on the operating costs for TMF administration, we obtained information from officials within GSA's Office of the Deputy Administrator on the controls in place for ensuring the reliability of the financial data.
We also reviewed GAO, GSA Office of Inspector General, and GSA reports that discussed the results of prior reviews of internal controls for GSA financial systems. Based on discussions with agency officials and our reviews of these prior reports, we did not identify any specific findings that would affect our reporting of these data. In addition, we reviewed GSA-provided data for obvious errors and inconsistencies and identified no significant errors related to the accuracy or completeness of the data. Based on these steps, we determined that these data were sufficiently reliable for us to report accurately on GSA's operating costs for TMF administration. We also obtained and analyzed agency documentation from, and interviewed officials within, GSA's TMF Program Management Office regarding the fund's actual and planned operating expenses as of August 31, 2019. We assessed the collection of administrative fees used to ensure the solvency of the fund during the period from June 2018 (when projects first began to receive awards) through August 31, 2019. In addition, we interviewed staff in OMB's Office of E-Government and Information Technology regarding OMB guidance and its administrative responsibilities for the fund. Further, we obtained and analyzed TMF project proposal documentation and signed interagency agreements, and we interviewed officials in charge of the TMF-funded projects within the Office of the CIO and other appropriate offices at each of the five agencies to determine the scheduled repayment transfers and administrative fee payments, and whether awarded projects had realized cost savings for fiscal year 2019. (Fiscal year 2019 was the first fiscal year that awarded projects could have realized cost savings as a result of receiving TMF funding.) In doing so, we confirmed that none of the seven projects had begun to realize cost savings; therefore, it was premature to compare the projects' realized savings to TMF administrative costs.

For the second objective, we analyzed TMF project proposals, including cost estimates and supporting documentation, from the five agencies that received the seven awards. In addition, we interviewed the agencies' project officials responsible for developing the overall TMF cost savings estimate and associated cost estimates regarding their estimation processes. We compared each TMF-funded project team's estimating methodologies and documentation to the best practices of a reliable cost estimate discussed in GAO's Cost Estimating and Assessment Guide. Our analysis enabled us to determine whether each project's cost estimate, used to determine the project's cost savings estimate, was comprehensive, well-documented, accurate, and credible. The GAO Cost Estimating and Assessment Guide considers an estimate to be:

comprehensive if its level of detail ensures that all pertinent costs are included and no costs are double-counted or omitted;

well-documented if the estimate can be easily repeated or updated and can be traced to original sources through auditing;

accurate if it is not overly conservative, is based on an assessment of the most likely costs, and is adjusted properly for inflation; and

credible if the estimate has been cross-checked with an independent cost estimate and a level of uncertainty associated with the estimate has been identified and quantified.

For each characteristic, our analysis had five possible assessment categories:
Not met. The estimate provided no evidence that satisfies any of the characteristic's set of best practices.

Minimally met. The estimate provided evidence that satisfies a small portion of the characteristic's set of best practices.

Partially met. The estimate provided evidence that satisfies about half of the characteristic's set of best practices.

Substantially met. The estimate provided evidence that satisfies a large portion of the characteristic's set of best practices.

Met. The estimate provided complete evidence that satisfies the characteristic's entire set of best practices.

A cost estimate is considered reliable if the overall assessment ratings for each of the four characteristics are met or substantially met. We presented the results of our initial analysis of each TMF project cost estimate to its respective agency in July 2019. We asked the agencies to verify the information presented in the analysis and provide any updates or additional supporting documentation, as appropriate. Each of the agencies provided updated information, which we incorporated into this analysis, as appropriate. In addition, we interviewed staff in the Office of E-Government and Information Technology, as well as officials from the TMF Program Management Office, about the process for the review and approval of TMF-funded project cost savings estimates and cost estimate documentation.

Because the Technology Modernization Board required agency project teams to use a template to submit the project cost savings estimates, and because we learned from project officials at each of the five agencies that they did not rely on data from agency financial systems when completing the template, we took additional steps to assess the reliability of the data in the completed templates. First, we interviewed officials in the TMF Program Management Office responsible for developing the template in order to understand the purpose of each template data field and what information was required to be completed. We took this step because there were no written instructions for the template regarding the data elements or the fields required to be completed. We also interviewed officials in the Office of the CIO and other appropriate offices at each agency who were in charge of completing the TMF cost estimate template. We discussed with these officials how the template was filled out and what sources of data were used. Because project teams did not rely on data from agency financial systems when completing the spreadsheet template, we reviewed agency responses and other supporting documentation to determine how the estimated costs and savings were derived and whether there were any qualifications of the provided data—for example, whether certain costs were excluded from the program cost estimate or how up-to-date the data were. We followed up with agency officials regarding these qualifications as appropriate. Further, we reviewed the completed templates to identify missing data or other errors, and consulted with our cost estimation specialists about these issues, as appropriate. Based on our assessment of each project's cost estimate (used to derive the cost savings estimate) and the other measures we took to assess the reliability of the data included in the completed templates, we determined that the cost savings data for all seven TMF projects were not sufficiently reliable; thus, we did not include the estimated savings amounts in our report.
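To make the assessment logic described above concrete, the following minimal sketch (ours, for illustration only—not GAO's actual assessment instrument) encodes the five categories and the overall reliability test. The numeric cutoffs are assumptions we introduce for the example; the guide describes the categories only qualitatively ("a small portion," "about half," "a large portion").

```python
# Illustrative sketch only: encodes the five assessment categories and the
# overall reliability test described above. The numeric cutoffs are assumed;
# the GAO guide defines the categories qualitatively.

CHARACTERISTICS = ("comprehensive", "well-documented", "accurate", "credible")

def category(fraction_met: float) -> str:
    """Map the fraction of a characteristic's best practices that an
    estimate satisfies to one of the five assessment categories."""
    if fraction_met <= 0.0:
        return "Not met"
    if fraction_met < 0.4:   # "a small portion" (assumed cutoff)
        return "Minimally met"
    if fraction_met < 0.6:   # "about half" (assumed cutoff)
        return "Partially met"
    if fraction_met < 1.0:   # "a large portion" (assumed cutoff)
        return "Substantially met"
    return "Met"

def is_reliable(scores: dict) -> bool:
    """A cost estimate is reliable only if every one of the four
    characteristics is rated Met or Substantially met."""
    return all(category(scores[c]) in ("Met", "Substantially met")
               for c in CHARACTERISTICS)

# Example: an estimate strong on three characteristics but weak on credibility.
example = {"comprehensive": 0.9, "well-documented": 1.0,
           "accurate": 0.85, "credible": 0.5}
print({c: category(v) for c, v in example.items()})
print(is_reliable(example))  # False: "credible" is only Partially met
```

Encoding the rubric this way makes the decision rule explicit: a single characteristic rated below Substantially met is enough to render the whole estimate unreliable, which is why all seven projects' estimates failed the test.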
In addition, we discuss the data's shortcomings in the report.

To accomplish the third objective, we obtained and analyzed contract documentation for each of the seven awarded projects. We also interviewed officials in charge of the TMF-funded projects within the Office of the CIO and other appropriate offices at each of the five agencies about acquisitions related to the awarded projects. Using the agency-provided contract information, we obtained and analyzed data from the Federal Procurement Data System-Next Generation (FPDS-NG)—the government's procurement database—for the period of June through August 2019. We assessed whether each awarded acquisition used full and open competition in accordance with the Competition in Contracting Act of 1984 and the Federal Acquisition Regulation. To ensure the accuracy and completeness of the awarded projects' contract information related to the use of full and open competition, we searched FPDS-NG data to confirm that all contracts and task orders related to the projects had been provided. We then presented the results of our analysis to officials in charge of project acquisitions at each agency and asked these officials to verify the completeness and accuracy of the FPDS-NG data and provide any updates, as appropriate. Officials in charge of all of the awarded projects confirmed the contract information related to the use of full and open competition and provided additional contract acquisition data, as appropriate. Based on these steps, we determined that these data were sufficiently reliable to report on the TMF-funded project acquisitions' use of full and open competition.

We conducted this performance audit from March 2019 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Description of Projects Receiving Technology Modernization Fund Awards, as of August 2019

As of August 31, 2019, seven projects had been awarded funding from the Technology Modernization Fund (TMF). Once an award had been made, TMF funds were distributed to project teams incrementally based on each project's performance against the milestones established in the project's written agreement. These seven projects had received incremental funding of approximately $37.65 million and, of that amount, had obligated $18.05 million toward project implementation. The following description of each of the seven projects includes an overview of the awarded project; funding transfer and project status information as of August 31, 2019; and how the project intends to repay the funds awarded.

Department of Agriculture's Farmers.Gov Portal Project

The Department of Agriculture's (Agriculture) Farmers.Gov Portal project is intended to help update and modernize conservation financial assistance and payment operations within the department's Farm Service Agency and Natural Resources Conservation Service. These two agencies provide financial and technical assistance to farmers and ranchers through related conservation programs. While separately authorized and appropriated, the programs share common customers and interconnected systems.
The project is intended to reengineer related financial assistance business processes at these agencies and update the agencies' legacy systems so that the systems can be properly connected with the department's common financial system. Due to changes to the project's schedule, an official responsible for the management of the Farmers.Gov Portal project reported that the agency plans to delay requesting the remaining balance of $6 million in awarded funds from the Technology Modernization Board until fiscal year 2020. Figure 3 provides a summary of the Farmers.Gov Portal project. Officials from the Farmers.Gov Portal project reported that the department intends to repay the TMF funds awarded using annual appropriations from each of the two agencies involved in the project.

Agriculture's Infrastructure Optimization Project

Agriculture's Infrastructure Optimization project, managed by the Office of the Chief Information Officer (CIO), was originally intended to migrate 10 applications within the department to cloud services by the end of fiscal year 2019. However, officials responsible for the management of the project reported that they began working with the TMF Program Management Office to make changes to the project's scope in June 2019, changing which applications would be migrated and reducing the number of applications to be migrated to one. Officials reported that the project now intends to migrate the Farm Production and Conservation's Emergency Watershed Protection Program to cloud services, but has not yet determined when the project will be completed. The program helps landowners, operators, and individuals implement emergency measures after a natural disaster in order to help relieve imminent hazards to life or property. Due to the change in scope for the project, officials responsible for the management of the Infrastructure Optimization project reported that they planned to request a total of $500,000 for the project from the Technology Modernization Board ($4.5 million less than the original award amount). As a result of this change in scope, officials reported that the repayment period, the administrative fee, and the time frames for repaying the transferred amount and associated fee were being reevaluated by the agency. Project officials reported in August 2019 that they planned to present their revised project plan to the Technology Modernization Board for consideration and approval. If approved by the board, the project would likely reduce its administrative fee from $150,000 to $15,000. Figure 4 provides a summary of the Infrastructure Optimization project. Officials from the Infrastructure Optimization project reported that the department originally intended to repay the TMF awarded funds by using the planned cost savings and avoidances accrued from not having to pay the costs for the maintenance of the 10 applications. In fiscal year 2018, the department reported spending approximately $4 million to cover labor costs for maintaining these 10 on-premises applications. However, project officials reported that, with the change in scope to the project, the details for how they will repay the awarded funding are currently under reevaluation.
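As a point of arithmetic (our illustration; the report does not state the fee formula explicitly), the proposed reduction would keep the administrative fee proportional to the amount requested, at a rate of 3 percent:

\[
\frac{\$150{,}000}{\$5{,}000{,}000} \;=\; \frac{\$15{,}000}{\$500{,}000} \;=\; 0.03
\]

The Enterprise Cloud Email figures reported below imply approximately the same rate ($222,406 \(\div\) $7.41 million \(\approx\) 0.030), which is consistent with the fee scaling with the size of the award.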
Department of Energy’s Enterprise Cloud Email Project The Department of Energy’s (Energy) Enterprise Cloud Email project, managed by the Office of the CIO, was originally intended to complete the consolidation, upgrade, and migration of 26 of the department’s on- premises email systems to cloud email software as a service by fiscal year 2021. However, the department made changes to the project’s scope in February 2019, reducing the number of mailboxes that would be migrated from approximately 47,080 to 24,531. Officials responsible for the management of the Enterprise Cloud Email project within Energy’s Office of the CIO reported that the department was able to migrate 22,549 mailboxes to cloud services using department funds prior to receiving TMF-awarded funds. Due to the change in scope for the project, officials from the Enterprise Cloud Email project reported that they planned to request a total of $7.41 million in funding for the project from the Technology Modernization Board ($7.80 million less than the original award amount). As a result of this change in scope, officials reported that the repayment period, administrative fee, and the time frames for repaying the transferred amount and associated fee, will change from what was originally approved by the Technology Modernization Board. Project officials reported in August 2019 that they intended to present their revised plan to the Technology Modernization Board for consideration and approval. If approved by the board, the project would reduce its administrative fee from $456,510 to $222,406 and would complete the fund repayment in 2024 rather than 2025. Figure 5 provides a summary of the Enterprise Cloud Email project. Officials from the Enterprise Cloud Email project reported that the department intends to repay the TMF funds awarded by using the planned cost savings and avoidances accrued from future operations and maintenance costs for these email systems. In fiscal year 2018, the department reported spending approximately $4.78 million to cover operations and maintenance costs for the 26 on-premise email systems originally in scope for the project. However, the department could not provide an update on the operations and maintenance costs for the current email systems that are to be migrated using TMF funds. Department of Housing and Urban Development’s Unisys Migration Project The Department of Housing and Urban Development’s (HUD) Unisys Migration project managed by the Office of the Chief Technology Officer was originally intended to migrate five of the department’s most critical business systems from an on-premise mainframe database to cloud computing services by the end of fiscal year 2020. These systems help manage the Federal Housing Administration’s mortgage insurance program as well as over one hundred HUD grant, subsidy, and loan programs managed through the Office of the Chief Financial Officer. Due to delays in awarding contracts for the project, a HUD official reported that the department had submitted a request to the Technology Modernization Board in August 2019 for the project to be rebaselined. The official reported that the project planned to delay requesting the next disbursement of $5 million from fiscal year 2019 to fiscal year 2020 and the project is now intended to be completed by March 2021. Figure 6 provides a summary of the Unisys Migration project. 
Officials from the Unisys Migration project reported that the department intends to repay the TMF funds awarded by using the planned cost savings accrued from reducing the department's overall operations and maintenance costs for these systems. In fiscal year 2018, the department reported spending approximately $11.6 million in operations and maintenance contract costs for maintaining these five legacy systems.

Department of Labor's Visa Application Transformation Project

The Department of Labor's (Labor) Visa Application Transformation project, managed by the Office of the CIO, is intended to replace a paper-based labor certification process for certain types of work visas with an E-Certification process. The new system is intended to enable the department to issue a labor certification securely and electronically to employer applicants, similar to an electronic boarding pass issued by airlines. In addition, this project is expected to streamline and improve data accessibility and reporting capabilities by creating a data hub at Labor. This hub is expected to allow the department to securely transmit these labor certifications and other necessary documentation to the Department of Homeland Security's U.S. Citizenship and Immigration Services, with an eventual linkage to the Department of Agriculture and the Department of State. Figure 7 provides a summary of the Visa Application Transformation project. Officials responsible for the management of the Visa Application Transformation project within the Office of the CIO reported that the department intends to repay the TMF funds awarded by using the planned cost savings accrued from eliminating the costs of procuring security paper and printers for printing the certifications, as well as reduced costs for contractor and federal employee support of the paper process. In fiscal year 2019, the department reported spending approximately $1.9 million on these costs for the paper-based process.

General Services Administration's Application Modernization Project

The General Services Administration's (GSA) Application Modernization project, managed within the Office of the Chief Technology Officer, is intended to modernize 11 applications currently using proprietary vendor technology by converting them to use open source technologies. GSA currently has 88 applications in need of modernization and intends to use the lessons learned and new capabilities from this project as a repeatable process for future migrations of other proprietary applications to open source technologies. Figure 8 provides a summary of the Application Modernization project. Officials responsible for managing the Application Modernization project reported that the agency intends to repay the TMF funds awarded through: (1) its existing working capital fund and (2) the planned cost savings and avoidances accrued from reducing operations and maintenance costs and eliminating hardware and operating system software costs for these proprietary applications. In fiscal year 2018, the agency reported spending approximately $23.9 million to cover these costs.

GSA's NewPay Project

The NewPay project, managed within GSA's Office of the CIO, is intended to modernize GSA's payroll system for its 21,000 users and replace it with a cloud-based software as a service solution. This is expected to lay the foundation for modernizing federal legacy payroll systems to a cloud-based solution for approximately 2.1 million federal civilian employees.
Currently, four federal agencies (Agriculture, the Department of Defense, the Department of the Interior, and GSA) serve as payroll providers for federal civilian employees. NewPay is also intended to encompass time and attendance solutions, which are to be implemented in later project phases. Project officials reported that they originally planned to complete the migration to NewPay and shut down GSA's legacy systems by 2023, and then to consolidate all other government legacy provider payroll operations into NewPay. However, officials reported that the strategy for transitioning other legacy payroll providers to NewPay was revised in mid-summer 2019. Going forward, GSA and the other federal payroll providers plan to focus on completing the migration of all systems to NewPay prior to transitioning and consolidating payroll operations within GSA. Project officials reported that GSA is working with OMB and the other agency payroll providers to identify funding available for these efforts so that a new schedule can be developed. Figure 9 provides a summary of the NewPay project. Officials responsible for managing the NewPay project within the Office of the CIO reported that the agency intends to repay the TMF funds awarded through subscriptions and fees that federal agencies are to pay to utilize the software as a service solution, and through fees NewPay intends to collect for serving as a payroll operations provider. In fiscal year 2018, the four federal agency payroll providers spent approximately $300 million providing payroll services for approximately 2.1 million federal civilian employees.

Appendix III: Analysis of Cost Estimates for Projects Receiving Technology Modernization Fund Awards

Agencies submitting full project proposals to the Technology Modernization Board during phase II of the proposal process for the Technology Modernization Fund (TMF) were required to submit information on the project's cost estimate and cost savings estimate using a spreadsheet template (known as appendix B). We compared each TMF-funded project team's estimating methodologies and documentation to the best practices of a reliable cost estimate discussed in the GAO Cost Estimating and Assessment Guide. According to GAO's guidance, a reliable estimate should meet four characteristics and the specific set of best practices associated with each of the characteristics. Those four characteristics are:

Comprehensive: An estimate should include all life cycle costs (from the program's inception and design through operations and maintenance), reflect the current schedule, and have enough detail to ensure that cost elements are not omitted or double counted. Specifically, the cost estimate should be based on a product-oriented work breakdown structure that allows a program to track cost and schedule by defined deliverables, such as hardware or software components. In addition, all cost-influencing ground rules and assumptions should be detailed in the estimate's documentation.

Well-documented: An estimate should be thoroughly documented; describe how it was developed; and include source data, clearly detailed calculations and results, and explanations of why particular estimating methods and references were chosen. Data should be traced to their source documents.

Accurate: An estimate should be based on historical data or actual experiences on other comparable programs and an assessment of most likely costs, and be adjusted properly for inflation.
In addition, the estimate should be updated regularly to reflect significant changes in the program—such as when schedules or other assumptions change—and actual costs, so that it always reflects the program's current status.

Credible: An estimate should discuss any limitations of the analysis because of uncertainty surrounding data or assumptions. In addition, the estimate should incorporate the results of a sensitivity analysis (which examines the effects of changing assumptions on the estimate) and a risk and uncertainty analysis (which identifies all of the potential project risks and assesses how these might affect the cost estimate). The estimate's results should be cross-checked, and an independent cost estimate should be conducted to see whether other estimation methods produce similar results.

In assessing each project's estimate against the components of the four characteristics, we assigned one of five assessment categories:

Not met. The estimate provided no evidence that satisfies any of the characteristic's set of best practices.

Minimally met. The estimate provided evidence that satisfies a small portion of the characteristic's set of best practices.

Partially met. The estimate provided evidence that satisfies about half of the characteristic's set of best practices.

Substantially met. The estimate provided evidence that satisfies a large portion of the characteristic's set of best practices.

Met. The estimate provided complete evidence that satisfies the characteristic's entire set of best practices.

A cost estimate is considered reliable if the overall assessment ratings for each of the four characteristics are met or substantially met. The following discusses in detail our assessment of the seven TMF awarded projects' cost estimates.

Department of Agriculture's Farmers.Gov Portal Project

Table 10 includes our detailed assessment of the Department of Agriculture's (Agriculture) Farmers.Gov Portal project. Based on the overall assessment ratings for each of the four characteristics, Agriculture's project cost estimate is not considered reliable.

Agriculture's Infrastructure Optimization Project

Table 11 includes our detailed assessment of Agriculture's Infrastructure Optimization project. Based on the overall assessment ratings for each of the four characteristics, Agriculture's project cost estimate is not considered reliable.

Department of Energy's Enterprise Cloud Email Project

Table 12 includes our detailed assessment of the Department of Energy's (Energy) Enterprise Cloud Email project. Based on the overall assessment ratings for each of the four characteristics, Energy's project cost estimate is not considered reliable.

Department of Housing and Urban Development's Unisys Migration Project

Table 13 includes our detailed assessment of the Department of Housing and Urban Development's (HUD) Unisys Migration project. Based on the overall assessment ratings for each of the four characteristics, HUD's project cost estimate is not considered reliable.

Department of Labor's Visa Application Transformation Project

Table 14 includes our detailed assessment of the Department of Labor's (Labor) Visa Application Transformation project. Based on the overall assessment ratings for each of the four characteristics, Labor's project cost estimate is not considered reliable.

General Services Administration's Application Modernization Project

Table 15 includes our detailed assessment of the General Services Administration's (GSA) Application Modernization project.
Based on the overall assessment ratings for each of the four characteristics, GSA's project cost estimate is not considered reliable.

GSA's NewPay Project

Table 16 includes our detailed assessment of GSA's NewPay project. Based on the overall assessment ratings for each of the four characteristics, GSA's project cost estimate is not considered reliable.

Appendix IV: Comments from the General Services Administration

Appendix V: Comments from the Office of Management and Budget

Appendix VI: Comments from the Department of Housing and Urban Development

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, the following staff made key contributions to this report: Dave Hinchman (Assistant Director), Jason Lee (Assistant Director), Jessica Waselkow (Assistant Director), Chris Businsky, Jennifer Echard, Emile Ettedgui, Valerie Hopkins (Analyst in Charge), Anna Irvine, Julia Kennon, Sandra Kerr, James MacAulay, Priscilla Smith, and Mary Weiland.
Why GAO Did This Study

In December 2017, the MGT Act was enacted, which established the TMF. OMB, the Technology Modernization Board, and GSA oversee the TMF. The board is responsible for approving agency project proposals focused on replacing aging IT systems. Agencies receive incremental award funding and are required to repay the funds transferred and an administrative fee within 5 years. Agencies may use the project's generated cost savings to repay the award. GSA can use TMF appropriations to cover its operating expenses and is required to collect administrative fees from awarded projects to offset these expenses. GSA's fee rate was established with the intent to fully recover its costs. As of August 2019, Congress had appropriated $125 million to the TMF. The act included a provision for GAO to report biannually on the TMF. For its first TMF report, among other things, GAO analyzed the TMF's operating costs and assessed the reliability of selected projects' cost savings estimates. To do so, GAO reviewed OMB and GSA's administrative fund processes and GSA financial data on TMF operating costs. GAO also analyzed TMF project proposals and supporting cost estimate documentation from selected agencies.

What GAO Found

As of August 2019, the Technology Modernization Board had made seven Technology Modernization Fund (TMF) awards to five agencies, totaling about $89 million, and had transferred $37.65 million of this funding to the projects (see table). In addition, pursuant to the Modernizing Government Technology (MGT) Act, the General Services Administration (GSA) had obligated about $1.2 million to cover TMF operating expenses, but had recovered only about 3 percent of those expenses through fee payments (see the arithmetic note following this summary). The seven projects are expected to make $1.2 million in scheduled fee payments by the end of fiscal year 2025; as of August 2019, three projects had made fee payments totaling $33,165. Based on the current schedule, GSA will not fully recover these expenses until fiscal year 2025 at the earliest. GSA had collected fewer fees than planned to offset costs due to several factors. For example, the seven projects paid fees based on the amounts transferred, rather than the total funds awarded, thereby reducing fee collections in the initial years. Two projects also proposed scope changes that are expected to reduce the funding required and, thus, reduce total fees. Such factors raise doubts about whether GSA will be able to fully recover future operating expenses. Although GSA acknowledged this issue, the agency has not yet developed a plan outlining the actions needed to fully recover its TMF operating costs in a timely manner.

The Office of Management and Budget's (OMB) funding guidelines require projects to include a reliable estimate of any project-related savings. However, the seven projects' reported savings estimates, which were derived from cost estimates, are not reliable. None of the projects incorporated all of the best practices for a reliable cost estimate, as defined in GAO and OMB guidance. Unless the requirement that agencies follow Circular A-11's cost estimating process (which references GAO's cost estimating guidance discussed in this report) is clarified, agencies are at risk of continuing to provide unreliable cost information in their proposals.

What GAO Recommends

GAO is making five recommendations—two to OMB and three to GSA, including developing a plan to fully recover operating costs and clarifying that agencies should follow required cost guidance.
OMB raised a number of concerns that GAO addresses in the report. GSA agreed with one recommendation and partially agreed with the other two. GAO continues to believe all of its recommendations are appropriate.
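As the arithmetic note promised above (our calculation, using only the figures reported in this summary), the fee payments collected to date cover

\[
\frac{\$33{,}165}{\$1{,}200{,}000} \approx 0.028
\]

of the operating expenses GSA had obligated—that is, roughly 3 percent, consistent with the recovery rate cited in the findings.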
Background

We previously reported that DOL is one of more than a dozen federal agencies—known as National Drug Control Program agencies—that have responsibilities for drug prevention, treatment, and law enforcement activities. The Office of National Drug Control Policy (ONDCP) was established in 1988 to, among other things, enhance national drug control planning and coordination. As federal agencies engage in drug control efforts, ONDCP is responsible for, among other things, overseeing and coordinating the implementation of national drug control policy across the federal government. These responsibilities include promulgating a National Drug Control Strategy. In 2017 and 2018, ONDCP lacked a statutorily required National Drug Control Strategy, and we recently reported that the 2019 National Drug Control Strategy did not fully comply with the law. In December 2019, we recommended that ONDCP develop and document key planning elements to help ONDCP structure its ongoing efforts and to better position the agency to meet these requirements for future iterations of the National Drug Control Strategy. We also found that the 2019 strategy did not contain several pieces of required information, such as quantifiable and measurable objectives, specific targets for long-term goals, or a description of a performance measurement system. ONDCP subsequently issued the 2020 National Drug Control Strategy on February 3, 2020. We reviewed this strategy and found that it made progress in addressing several statutory requirements but fell short in meeting others. Furthermore, in our March 2019 High-Risk report, we named drug misuse as an emerging issue requiring close attention. Based on our findings from a body of work related to drug misuse—including 25 new GAO products issued since our 2019 High-Risk report—we have determined that this issue should be on our High-Risk List.

DOL's Phase 1 and Phase 2 grants, targeted to support efforts to address the opioid crisis, are authorized by WIOA, which was enacted in 2014 and emphasizes the alignment and integration of workforce programs. ETA is responsible for some WIOA programs, which provide education and other services to help job seekers obtain employment and advance in the labor market, including job search assistance, career counseling, and a variety of occupational skills training, such as classroom and on-the-job training. In addition, WIOA emphasizes that employers are also customers of the workforce system, and it includes provisions that involve them in helping the system provide the skilled workers they need. WIOA requires states to submit plans to DOL every 4 years, and updates to these plans every 2 years, that outline the state's workforce strategies for core WIOA programs. The next state plans are due in 2020. WIOA gives state and local officials the flexibility to develop and fund services that meet the specific needs of their local communities and meet WIOA goals of increasing employment, retention, and earnings to promote economic self-sufficiency. To that end, WIOA core program performance measures and targets include those related to job attainment and retention, median earnings, and skill and credential attainment. DOL officials told us that states generally use the same WIOA performance measures for the Phase 1 and 2 grants as well. The WIOA-funded workforce development system provides services through a national network of approximately 2,400 American Job Centers (AJCs).
State and local entities deliver WIOA-funded employment and training activities and coordinate with partner programs via the AJCs. ETA's Phase 1 and 2 grants are intended, in part, to serve dislocated workers—adults whose jobs have been terminated, who have been laid off, or who were self-employed. These grant funds are awarded to states, tribal governments, or outlying areas that, in turn, may work with local workforce boards to administer the grants. Grant recipients generally have 2 years to expend their funds. See table 2 for more information about these grants. Both grants require that recipients partner with community organizations, such as those in health care and justice systems, and with at least one local workforce development board or AJC. While the grants cannot be used to pay the costs of in-patient drug treatment and in-patient rehabilitation programs, grantees may use some funding to provide supportive services to participants, such as assistance with child care.

States may be using other federal funds to address the workforce impacts of the opioid crisis, including other WIOA-related funding. For example:

Ohio received $8 million in September 2018 from DOL's Trade and Economic Transition National Dislocated Worker Grant, which provides training and career services to dislocated workers who have been affected by layoffs at one or more companies and are seeking reentry into the workforce. The state targeted 16 counties that officials said had been hardest hit by the opioid crisis. State officials said they plan to use this grant to provide services to anyone who meets the criteria of a dislocated worker, and they felt the opioid crisis had a strong enough economic effect for the state to use the grant for those whose employment has been affected by the crisis.

DOL's Women's Bureau granted Maryland $650,000 in September 2018 to fund two projects providing job-seeking supports to women affected by opioid use disorder.

Pennsylvania plans to use HHS funding to expand treatment capacity for underserved populations through targeted workforce development, according to its grant application.

Additionally, ETA has recently provided more funding opportunities to support state and local workforce efforts to address the opioid epidemic. In September 2019, ETA, in partnership with the Appalachian Regional Commission and the Delta Regional Authority, announced the 23 grantees in the first round of funding under the Workforce Opportunity for Rural Communities Initiative, which included a focus on serving individuals impacted by the opioid epidemic. Five of the ten awards in the Appalachian region committed to addressing opioid and other SUD impacts as part of their projects. Also, in October 2019, ETA announced another funding opportunity for $20 million in grants under the Substance Use-Disorder Prevention that Promotes Opioid Recovery and Treatment for Patients and Communities (SUPPORT) Act. The SUPPORT Act directs DOL to conduct a pilot grant program to address the economic and workforce effects associated with SUDs.

Beyond those recently funded, workforce efforts to address the opioid crisis may need to continue for many years given the nature of SUD. Research suggests that incentives for avoiding drug misuse, such as obtaining and maintaining employment, can be highly effective in promoting recovery from SUD. However, an estimated 40 to 60 percent of people with SUD experience relapse, according to the National Institute on Drug Abuse.
As a result, people with SUD often need ongoing support to reduce this risk.

States Used Targeted Grants to Tailor Assistance to Job Seekers in Recovery, but Results Are Not Yet Known

Workforce Agencies Relied on Partnerships to Enhance Services to Job Seekers in Recovery

Officials in the four selected states that received Phase 1 and Phase 2 DOL grants told us that the required partnerships with community organizations were essential in their efforts to serve those affected by SUD. These relationships fostered both knowledge sharing and coordination, elements especially important to state officials with limited experience serving this population. For example, in Ohio, state officials said that input from community partners, such as substance use disorder and mental health boards, helped them identify who could best provide supportive services for job seekers in recovery. We found that workforce agencies in all four states receiving targeted DOL grants worked to serve job seekers with the following partners:

Health care organizations. Workforce officials said they partner with health care organizations to identify people in recovery from SUD who are ready to look for employment. For example, New Hampshire state officials described the state's "hub-and-spoke" services system, in which health care entities such as hospitals refer people affected by SUD to various services. The health care staff coordinate with local workforce agency staff and notify them when an individual in recovery is ready for employment and training services. Other states described similar coordination of services. For example, a local workforce agency in Washington is partnering with a nonprofit health care organization to coordinate workforce development efforts with health and social services.

Justice organizations. Workforce agencies partnered with drug courts, detention centers, and other facilities to address the employment readiness and support needs of those in the juvenile and adult justice systems who may have SUD. For example, in Washington, local workforce agency officials told us that they provide training and education services—including reentry workshops and work readiness services—for their area's juvenile justice facilities, where over 70 percent of the population has a substance use disorder. In New Hampshire, state workforce officials described a partner organization whose officials have relationships with all of the drug courts in the state and also sit on the board of the drug court in one of the state's largest counties. They said that drug courts provide people an option to seek recovery services instead of criminal charges, and the local workforce agency provides employment services for people participating in drug courts.

Educational institutions. Partnerships with community colleges and universities helped workforce agencies provide employment training for job seekers interested in participating in recovery services. In two of our selected states, officials reported using funds to support the development of peer recovery specialists. Such peer recovery specialists, according to HHS, can bring the lived experience of recovery to assist others in initiating and maintaining recovery. For example, in Ohio, the state workforce agency partnered with a community college to help people become peer recovery specialists and licensed chemical dependency counselors.
Maryland provided Phase 1 grant funds to a research-based organization, housed on the campus of a state university, which is preparing peer recovery specialists. Furthermore, local workforce agency officials in Ohio also told us that they worked with a university to put together a master's degree in social work for those with Licensed Social Worker credentials or a bachelor's degree.

Other organizations. Partnerships with community organizations and housing commissions helped states address transportation and housing needs through referrals and coordinated services. For example, local workforce officials in Washington told us they work with partners through subcontracts or memoranda of understanding to help job seekers with child care and housing so they can attain and retain employment. Also, officials in New Hampshire told us that one state partner works with sober living houses, which are group homes in which people in recovery can live during and after treatment.

Several state workforce officials we interviewed noted that a key benefit of the WIOA targeted-assistance grants was forging partnerships that will have lasting impacts on how they conduct services in the workforce system. For example, officials in New Hampshire noted that the state plans to continue to leverage relationships with its partners after the grant expires. Also, officials in Ohio said these partnerships put new processes in place, including referral systems that will facilitate getting people in recovery into the workforce system over the long term.

States not receiving targeted grants. Workforce officials in Alabama and Arizona—the states we selected that did not receive targeted DOL grants but are still experiencing high levels of opioid misuse in their communities—stated that they were engaged in some newly formed partnerships to address the workforce aspects of SUD. Alabama workforce officials said they recently began participating in a statewide opioid task force, including serving on the workforce subcommittee with other state departments, such as the state Department of Commerce. Arizona officials said that the state workforce agency partners with the state Department of Corrections and has implemented second chance centers, which offer services such as job training and onsite job fairs, within three prisons. They noted that, in one of these prisons, the majority of women are incarcerated for drug-related offenses.

Workforce Agencies Used Funding to Provide Employment Services to Job Seekers in Recovery

Officials in the four selected states that received targeted DOL grants said they used this funding to assist those in recovery from SUD in obtaining employment. While many of the services are also offered to other job seekers, officials said grant-funded efforts involved intensive work with individuals recovering from SUD, who may have inconsistent work histories or long periods of unemployment. New Hampshire state workforce officials reported providing individuals in recovery with services including job training, direct placement in a job, or on-the-job training. As of January 2020, officials said the state had enrolled 177 individuals in its program, including some who are participating in on-the-job training (employment that is partially subsidized by grant funds). Similarly, officials at a local workforce agency in Washington told us that the agency aims to place 125 people affected by SUD into transitional jobs as part of its grant-funded activities.
These subsidized jobs allow individuals to add experience to their resumes, as well as gain an employment reference. In Maryland, the state distributed part of its Phase 1 grant funds to local workforce agencies in eight counties directly or indirectly affected by the opioid crisis. These funds provide job seekers with employment, training, and support services that help them prepare for, secure, and retain employment, and advance along career pathways in high-demand industries and occupations—including those related to SUD recovery, such as counseling. Similarly, Ohio workforce officials told us they were reintegrating individuals who are affected by opioid use into the workforce by using some of their Trade and Economic Transition National Dislocated Worker Grant funds to provide career services, guidance, and counseling, along with support services.
Several officials noted that, while their agencies may use the same process for those with SUD as for those without to get individuals ready for jobs, it is often a longer process when someone is in recovery or otherwise affected by SUD. For example, officials from a local workforce agency in Ohio told us that those in recovery from SUD often need more services and support to work through barriers prior to job placement than other clients without the disorder. Agency workforce staff are to follow up with people in recovery to make sure they are still supported, even after they have found employment or have enrolled in training—sometimes on a weekly basis. Officials said that those in recovery may not have had a job or attended postsecondary school before, and must balance their recovery with these new responsibilities. Similarly, state workforce officials in New Hampshire said that many in recovery have not had the opportunity to build skills and confidence. The New Hampshire Work Ready program is a 60-hour program offered through the state's community colleges that provides help in areas such as how to dress for an interview and the workplace. This program, which is available to all job seekers, also helps people decide what to disclose regarding their personal history and helps them emphasize their strengths. Officials characterized this program as especially helpful for people with criminal backgrounds. In response to the needs of those in recovery, they said the state has created a new "bridge" program to prepare individuals to participate in the Work Ready program, which will be implemented in recovery centers using targeted grant funds.
States not receiving targeted grants. Workforce officials in Alabama and Arizona, states that did not receive targeted grants, said that state efforts to address SUD, and more specifically opioid use disorder, were largely focused on the health aspects of the issue. Alabama officials told us that the state workforce agency was not originally part of the Governor's task force on opioid use disorder. The task force's recommendations were mostly health care related and addressed issues such as provider practices. However, the task force has recently added a workforce subcommittee with the goal of identifying strategies and resources to provide in-demand career pathways for those affected by SUD, and officials reported that they plan to apply for Phase 2 funding in the future. Arizona state officials said that the state's workforce development system provided support in communities, but noted that there is no coordinated statewide strategy.
Arizona officials also emphasized that they consider SUD primarily a public health issue, not a workforce issue; they said that while employment is part of a spectrum of services, SUD is an issue that is best addressed on the health side.
Agencies Also Funded Specialists and Are Piloting Workplace Programs
To assist those affected by SUD in finding employment, local workforce agencies used their targeted grant funding to hire specialists. For example, officials at two local workforce agencies in Ohio told us they had hired or planned to hire new staff to work with the population affected by SUD. One agency plans to hire case managers specializing in mental health, who will team with AJC staff to help ensure clients in recovery get the support they need to be successful. The other agency plans to hire peer recovery specialists and job coaches to help those in recovery develop soft skills. One local workforce agency in Washington also hired peer recovery specialists, and is using them as case managers at an AJC. Another agency in Washington is using Phase 1 grant funds to employ four "navigators" to coordinate services to address the needs of those in recovery. In addition, officials said they are in the process of hiring a job developer to liaise between job seekers, navigators, and employers, and help recruit employers who are willing to hire those in recovery from SUD.
Additionally, communities are exploring different workplace programs to support those in recovery. Officials in New Hampshire and Ohio reported using their Phase 1 and Trade and Economic Transition National Dislocated Worker Grant funds, respectively, to pilot recovery-friendly workplace initiatives, which provide training and supports to employers to help them better understand and work with individuals with SUD. Ohio state officials told us that, in three pilot counties, the state will train supervisors and managers and provide second-chance policies and employee assistance programs. According to these officials, recovery-friendly workplaces encourage an environment where employers, employees, and communities can collaborate to create positive change and eliminate barriers for those affected by SUD. In New Hampshire, employers may request that the state designate them as a recovery-friendly workplace. The New Hampshire workplace program will provide an advisor who conducts an orientation with management and staff and helps the employer publicize their participation in this effort so that their employees will know of their commitment, and will know their workplace is a safe place to disclose SUD. Employers in the program also agree to complete certain activities, such as conducting training and making connections with local recovery organizations. New Hampshire officials said they had 220 employers participating in the program as of January 2020.
Workforce Efforts to Address Substance Use Disorder Are in Early Stages and Results Are Not Yet Known
State and local workforce officials said that their efforts to meet the needs of job seekers and employers in communities affected by SUD are relatively new. For example, officials in Ohio said that state efforts are still very much in the preliminary planning stages of their broader implementation goals. They said that, at this point, they are looking at how to educate workforce agencies and staff about how to best address the needs of this population.
State and local officials in our four selected states receiving targeted Phase 1 and Phase 2 grants were not yet able to report outcomes. Officials told us that it took time to organize and implement plans, causing delays in beginning activities. Specifically, workforce officials stated the following:
In Washington, officials said they received the notification for Phase 2 grant funding in March 2019. The state workforce agency finalized the contract with the local workforce agency at the end of May 2019, and began enrolling eligible job seekers in the late summer and early fall of 2019.
In New Hampshire, it took the state six months to begin implementing grant activities after receiving funding in July 2018, and officials confirmed in January 2020 that it was still too early in their efforts to address the opioid crisis to have any outcomes.
In Maryland, officials originally planned to use funding to train peer recovery specialists to work in the state's AJCs. However, the state Department of Health secured funding to train peer recovery specialists, and Maryland workforce officials did not want to duplicate efforts. As a result, they revised their plan to instead create an Opioid Workforce Innovation Fund, which delayed grant activities by six months or more.
As of August 2019, Ohio officials said they were just starting to get the local workforce areas on board and acclimated. They reported that they had just completed training for the local workforce agencies on the grant rules and activities, and launched a toolkit to help agencies serve individuals with SUD.
Workforce Agencies Face Challenges Helping Individuals Affected by Substance Use Disorder Gain and Maintain Employment
Workforce Agencies Struggle to Support Job Seekers with a Range of Barriers to Employment
Workforce agency officials in all six of our selected states told us they face challenges addressing the needs of job seekers affected by SUD, in part due to their limited experience in serving this population. Examples include the following:
Health issues. Officials in all six states said they continue to struggle with ensuring job seekers receive necessary services due to a lack of medical treatment, mental health services, and recovery services and personnel, especially in rural areas. For example, officials at a local workforce agency in a rural area of Maryland said their area has no addiction specialists, and many people in the area have to travel nearly 2 hours to receive recovery treatment and counseling.
Involvement with the justice system. Individuals in recovery may be more likely to have criminal records that complicate obtaining and maintaining employment. Officials in New Hampshire told us that employers might not hire people with a criminal history, and that employers are allowed to ask about criminal history on a job application, even if the individual is in long-term recovery. Appalachian Regional Commission officials said that job seekers with a criminal record also have especially limited employment options in their region because the federal government and its contractors are large employers there, but may not be able to hire someone with a felony conviction, which is an issue for many individuals with SUD.
Transportation difficulties. Lack of reliable, affordable transportation presents difficulties for many in recovery. For example, New Hampshire officials told us many people with SUD have lost their license or have no car, and few public transportation options are available in the state outside of urban areas.
Local workforce officials in a rural area of Ohio said no reliable public transportation exists near them, and the limited taxi service that exists is very expensive.
Housing difficulties. Individuals in recovery may not have access to stable housing, making it difficult to focus on job training or employment. Specifically, officials in Maryland, Ohio, and Washington cited homelessness as an issue among those in recovery. Further, New Hampshire officials said individuals who have a drug conviction may not be eligible for government-subsidized housing. While homelessness can be a result of a substance use-related history, local officials in New Hampshire, Ohio, and Washington told us that there is also a lack of affordable housing in their respective areas.
Workforce Agencies Face Difficulties Recruiting Employers
Workforce officials in all six selected states told us that they have had difficulty finding employers who are willing to hire those in recovery. As a result, workforce agencies risk not meeting WIOA performance targets related to (1) job seekers' obtaining and maintaining employment and (2) effectiveness in serving employers. Workforce officials in all six states cited employer concerns around relapses, safety and reliability, suitability, and stigma.
Relapses. Officials from the Appalachian Regional Commission said this was the most challenging aspect of SUD with respect to the workplace. Officials from another organization that works with employees with SUD also told us that employers may be reluctant to hire SUD-affected individuals because state laws or claims related to a lack of reasonable accommodations under the Americans with Disabilities Act of 1990 can make it difficult to terminate individuals with a known substance use disorder when they relapse. To address this, some employers put in place a zero-tolerance policy, automatically terminating an employee who tests positive for drugs.
Safety/reliability. Workforce officials in Maryland said employers are concerned that SUD-affected employees may bring drugs into their workplaces or quit unexpectedly. New Hampshire officials told us that employer liability is an issue, as employers are worried about accidents. They also told us employers are concerned about productivity loss due to SUD and, in particular, an employee's inability to work a regular schedule because they or a family member is dealing with SUD. Ohio officials in one local area told us that employers in white-collar industries are less willing to hire individuals in recovery because they are concerned about possible theft, and that workforce officials have been working with businesses to secure liability insurance.
Suitability. Some employers will not hire a person who is unable to pass a drug test. This may present issues for individuals who take medication as part of their recovery treatments. For example, Alabama officials told us that a major reason that employers in their state did not hire job applicants for vacant positions was that the applicants could not pass initial drug screenings. In addition, under U.S. Department of Transportation regulations on workplace drug and alcohol testing, when an employee performing safety-sensitive functions tests positive for drug use, they must be removed from performing such functions and evaluated for treatment options before returning to work. This includes those in aviation, trucking, and locomotive transit. Certain entities regulated by the Nuclear Regulatory Commission are also required to administer drug and alcohol testing.
Workforce officials in Washington said that it is also difficult for people with SUD to obtain the available jobs in their state in the health care field and with federal agencies because these jobs require drug testing.
Stigma. Employers may also be reluctant to hire those affected by SUD because of its associated stigma. New Hampshire officials said that employers are concerned about people's perceptions and believe it would hurt business if they declare themselves a recovery-friendly workplace. For example, they told us about the owner of a high-end restaurant in the state who expressed concern that customers may not want an individual with SUD preparing their food. Washington officials expressed similar concerns, saying that while some employers embrace being a recovery-friendly employer, others do not publicize this because they are unsure how it will be received by the public. Officials in Alabama also noted the need for honesty and transparency about the stigma of SUD and for employers who are willing to invest in their workers.
DOL Is in the Early Stages of Supporting State and Local Efforts through Information Sharing and Technical Assistance; Workforce Agencies Identified Additional Needs
Federal Partnerships Help DOL Identify Ways to Support State and Local Agencies Serving SUD-Affected Individuals
According to DOL officials, they have begun working with ONDCP and other federal agencies to address the drug crisis. DOL officials noted that, although the National Drug Control Strategy does not include explicit goals and performance targets for DOL or employment and training-related efforts, DOL is using the strategy to guide its efforts in addressing the opioid crisis. DOL officials said they have regular conversations with ONDCP about how ETA can support the ONDCP strategy within its current authority. For example, one DOL official told us she communicates with ONDCP nearly every week. DOL officials also said they attend meetings hosted by ONDCP, which occur roughly every 6 weeks and include representatives from all of the agencies involved in the National Drug Control Strategy. According to DOL officials, through these meetings, they have learned about government-wide efforts to support those affected by SUD, and have shared information about DOL's own efforts to address the opioid crisis. DOL officials told us they communicate with other federal agencies regarding the opioid crisis. For example, DOL officials said that HHS provided a list of available grant funding to address the opioid crisis, and DOL has sent this list to its regional offices to distribute to states. In addition, ETA officials told us that two of the six regional offices have staff serving on regional opioid task forces, for example, with HHS. DOL has also conducted several webinars with HHS on addressing the training and employment needs of individuals and communities affected by SUD. Specifically, DOL officials described:
a webinar in October 2018 discussing topics such as the rise in opioid use and a screening and intervention technique;
a webinar in May 2019 for program staff working directly with participants in the workforce development programs located in states in the mid-Atlantic region, which are among those with the highest opioid-related deaths; and
a webinar with HHS, ONDCP, and other organizations in August 2019 on peer support recovery, including discussing how DOL grant funds have been used to train SUD-affected individuals to become peer recovery specialists.
Internally, DOL officials told us they began a DOL-wide opioid workgroup in April 2019 to improve communication among units and strengthen connections across the agency. According to DOL officials and meeting agendas we reviewed, the workgroup meets about once a month, and discusses what DOL is doing to address the opioid crisis and identifies any potential gaps in these efforts. The workgroup also invites speakers from external organizations, such as ONDCP, the Centers for Disease Control and Prevention, and the National Institute for Occupational Safety and Health.
DOL Provides Some Support for Targeted Opioid Grant Recipients, and Has Plans in Place for Oversight and Evaluation
ETA officials have provided technical assistance to states during the Phase 1 and 2 grant application processes, such as by clarifying allowable grant-funded activities and defining grant eligibility, and during grant implementation. According to officials, ETA assigned Federal Project Officers from one of its six DOL regional offices to work with each state. Officials have also encouraged information sharing among grantees. For example, officials said they hosted quarterly calls among grantee states, where they discussed performance reporting, evaluation, and use of the Federal Bonding Program, and have allowed time for peer-to-peer sharing of grant accomplishments and challenges. To encourage peer-to-peer sharing and engagement, ETA also provided grantees with a list of grantee contacts in all states that received Phase 1 or 2 grants. However, this technical assistance has been limited to those receiving the targeted grants, and is not offered to all states, tribes, and outlying areas that may be interested in conducting related work.
DOL officials are working to improve available information on addressing the employment and training needs of those affected by SUD. According to DOL officials, interested entities can access a DOL website called WorkforceGPS with resources and materials on substance abuse, including its effect on the workforce system, and case management resources. DOL also contracted with a research organization to review literature that examines what is known about workforce programs for individuals with SUD. The research is meant to identify key themes and findings related to successfully implementing the Phase 1 grants, such as the role of mental health services in the lives of grant participants and different employment-related interventions. DOL officials said that, as a complementary piece to the literature review, the contractor was tasked with developing a resource guide that identifies promising practices across the public and private sectors, with a goal of providing up-to-date information on tools, programs, and websites from across the country to serve as a resource for grantees who are planning and implementing their own initiatives. Officials said that the contractor shared preliminary results from its research activities with targeted grantees in October 2019. Based on these results, DOL officials reported that there was a lack of evidence about the relationship between opioid use disorder and employment. Therefore, they said, the literature review covers a broader range of information related to SUD in an effort to provide useful information. DOL released the full results of the literature review and resource guide on its website in March 2020.
Regarding oversight of grant activities, DOL plans to review grantee performance through required state quarterly reports, which have only recently begun to be submitted. DOL requires that these reports include financial data and program performance information (such as characteristics of, and services received by, participants, as well as participant outcomes). These quarterly reports also contain a narrative section where grantees can share information on project success stories, upcoming grant activities, and promising approaches. The final quarterly report for the grant must summarize the successes and/or challenges in delivering services, as well as address the topics of sustainability, replicability, and lessons learned. DOL officials said they do not have plans to share information from the summaries in the quarterly reports with other states. In addition, DOL officials told us that states generally are to use the same performance measures for these grants as they do for WIOA core programs. However, officials said they realize the SUD population could have different challenges than the rest of the WIOA population and, as a result, they are looking into developing new performance measures to address these differences. Regarding evaluation of grant activities, DOL has contracted with a research organization to conduct a 3-year evaluation of Phase 1 activities. The evaluation is expected to end in September 2021, with a final report to follow. DOL officials confirmed that there will be no interim reports.
State and Local Workforce Agencies and Our Review Identified Areas for Further Assistance
Although some state and local workforce officials we interviewed were aware of available technical assistance from DOL, they identified a need for more information to help them address challenges in serving communities affected by SUD, as discussed below. Furthermore, our review of DOL documents and guidance, such as the ETA announcements to states of the targeted grants and the WIOA state plan guidance, found that these documents did not fully address the questions and concerns of state and local workforce officials. Federal internal control standards regarding risk assessment state that management should identify, analyze, and respond to risks related to achieving its objectives, such as WIOA's goals of increasing employment and retention. These standards also state that management should communicate with its partners to help achieve its objectives. Better communicating information could enhance DOL's ability to respond to these risks. Specifically, state and local workforce officials and our review identified three areas in which additional DOL actions could help officials address the needs of job seekers in recovery and potential employers:
Clarity about expectations and use of funds. Officials in Arizona, Ohio, and Washington said they would like clarification from DOL about its expectations regarding the role of state and local workforce systems in preparing individuals in SUD recovery for employment, or in determining the appropriate use of WIOA grant funds. Clarity around DOL's expectations for state workforce agencies could be helpful, as Arizona officials emphasized that they consider SUD a public health issue, not a workforce issue, and have viewed SUD as an issue that is best addressed on the health side.
Also, information on expectations and the use of non-targeted WIOA grant funds is especially important as states draft their 2020 WIOA state plans, which will set priorities for state workforce agencies for the next 4 years. For example, officials from one local area in Washington told us that they hoped to continue the grant activities and partnerships past the end of the current targeted grant, but they were unsure whether they could do this with non-targeted, WIOA formula grant funding. Our review of the targeted grant announcements found they did not contain information on whether this was an allowable use of funds. ETA issued guidance regarding the 2020 WIOA state plans in February 2020. However, our review of this guidance found that it does not provide specific information about states' roles in meeting the needs of job seekers in recovery from SUD or their potential employers, or how non-targeted WIOA funding can be used to address those needs. DOL officials acknowledged that the guidance does not include such information, stating that the purpose of the guidance was to focus on the procedures and instructions for states in submitting their state plans, and not to provide specific suggestions on uses of WIOA funds or what particular strategies states should pursue. Clarity on the role of states and the use of WIOA funding would better position state workforce systems to meet the training and employment needs of those affected by SUD and their potential employers.
Better information sharing with all states. Officials from four of the six selected states identified areas in which it would be useful for DOL to enhance its information sharing. Specifically, officials in these states told us that it would be useful for DOL to share information about lessons learned and successful strategies in addressing the needs of job seekers in recovery and potential employers with all states—whether or not they received targeted grants. They said such information would be particularly helpful given that many states are in the early stages of developing their programs. Officials stated that information based on the experiences of their peers would assist states in ensuring those in recovery are job ready and in hiring and retaining these workers. For example, officials from Arizona—a state without a targeted DOL grant—told us their communities could benefit from learning about the experiences of states or local areas that are addressing the crisis within the workforce system, especially those using an approach that offers wraparound services such as transportation assistance. Additionally, officials from Ohio—a state with a targeted DOL grant—said they would like to learn from more experienced state officials who have been working for 6 months or a year within the workforce system to address the opioid crisis.
Workforce officials stated that even if job seekers in recovery are trained and job ready, workforce agencies face challenges in addressing employers' concerns about hiring these individuals. Workforce officials in five of the six selected states said that information about incentives for employers to hire individuals affected by SUD, and/or education for employers about this population, would be helpful given that perceived risks have led to difficulties with finding employers who are willing to hire this population. In particular, given limitations of federally supported incentive programs and the stigma associated with SUD, a dual approach—education and incentives—may be needed.
However, at this point, most information on strategies to address employer concerns, including leveraging pre-existing federal programs, is not widely disseminated. DOL officials stated that they recognize the challenges state and local workforce agencies face in engaging employers in this area and are exploring use of existing programs to incentivize the hiring of job seekers with SUD. However, they acknowledged that to date, limited information has been shared with the large network of state and local workforce agencies. Thus far, DOL has been piloting and promoting one available incentive, the Federal Bonding Program, which is designed to help reduce employers' risk by offering reimbursement for loss from illegal acts, such as theft or embezzlement, by individuals with criminal records. DOL officials have recognized that other existing incentive programs—targeted to employers of other populations, such as low-income and other disadvantaged job seekers—may be helpful. They said that because the populations eligible for these programs share similar characteristics with those in recovery, they are exploring how to connect them to employers who are willing to hire those in recovery. For example, the Work Opportunity Tax Credit encourages employers to hire individuals from certain targeted groups who have consistently high unemployment rates, such as individuals with a felony record, by providing employers with a tax credit as an incentive to hire and retain these workers. However, state officials said, and our review confirmed, that these current federal programs may not fully address employer concerns. Specifically, bonds might not protect against other liabilities that may be of concern to employers, such as accidents caused by an employee under the influence of opioids. Furthermore, despite promoting awareness of these programs, DOL officials recognized that these efforts alone may not increase employer participation, particularly given the need to move beyond the stigma associated with SUD. Officials in two states told us that education is an important response in addressing employers' concerns about the potential stigma associated with hiring individuals with SUD. For example, New Hampshire has a pilot program on recovery-friendly workplaces to educate employers about reducing stigma associated with SUD, as well as related human resource policies and employee assistance programs. Also, Arizona officials stated that workforce agencies need to understand the employer perspective and engage, educate, and involve employers. To date, DOL has been primarily communicating information about emerging, workforce system-based strategies to serve job seekers and employers affected by SUD with Phase 1 and 2 grantee states. As previously noted, DOL has an existing mechanism—its WorkforceGPS website—that could be used to share information more widely. Access to information on promising practices and lessons learned can help workforce agencies in all states learn about possible ways to address the needs of job seekers affected by SUD and their potential employers.
More time to use grant funds. Officials in New Hampshire, Ohio, and Washington said that a longer time window in which to use the DOL grant funding would be helpful.
For example, New Hampshire officials said the length of time needed for intake and enrollment for clients with SUD is longer than usual for a typical WIOA job seeker; therefore, more time to use the Phase 1 grant funds could help them with the more intense interventions. In addition, state workforce officials in Ohio told us it is complicated and takes time to develop new partnerships and trust at the local level, and to determine what the state and other partners can provide. Similarly, Washington state officials said the limit on the time allowed to use the Phase 1 and 2 grant funds has limited their ability to enroll job seekers in recovery and implement their partnerships. Specifically, state officials said that the delay in receiving funds means they will not have the full 2 years for grant activities. To meet DOL's reporting deadlines, they will need to complete their activities earlier than anticipated. ETA officials told us that they are considering extending the Phase 1 and 2 grant periods for some states. In commenting on a draft of this report, they also said that if these limitations prevent a state from continuing its grant beyond a certain period of years, the state can apply for a new grant if it still meets the conditions for eligibility, such as the public health emergency declaration for the opioid crisis remaining active.
Conclusions
In light of the persistent nature of the drug crisis and the complex set of issues facing individuals on the path to recovery, workforce agencies are likely to continue facing challenges in meeting the needs of this population and their potential employers. As the agency responsible for the nation's workforce system, DOL can play an important role in serving communities and individuals affected by SUD who are seeking employment. However, state officials we interviewed expressed uncertainty about what is expected of them and about the specific allowable uses of their non-targeted WIOA funds to address a crisis that has long been considered primarily a health and law enforcement issue. Our work shows that the workforce system continues to seek clearer direction on the role of states and the use of non-targeted WIOA grant funding in helping ensure the economic well-being of communities affected by this public health emergency. DOL's current efforts are still in the early stages, and it will take time for the agency to fully identify and disseminate effective, evidence-based strategies. In the meantime, states are seeking the best information currently available to help their workforce systems support job seekers affected by SUD and their potential employers. DOL's targeted grants provide an opportunity for grantees and non-grantees alike to learn from states' experiences in addressing the effects of the opioid crisis through the workforce system, but information on the current approaches of states receiving targeted grants is not being shared beyond the targeted grantee community. Sharing this information with all states could better position workforce agencies to address the needs of job seekers affected by SUD and help employers understand and address the perceived risks of hiring job seekers in recovery. While the workforce system may take time to fully build its capacity to work with these job seekers and employers, opportunities exist to learn and make interim progress toward this end.
Recommendations for Executive Action
We are making the following two recommendations to DOL:
The Assistant Secretary for Employment and Training should clarify DOL's expectations of the role of state workforce agencies in addressing the employment and training needs of those affected by SUD and how non-targeted WIOA funding can be used to assist job seekers and employers. (Recommendation 1)
The Assistant Secretary for Employment and Training should share information from targeted grantees with all state workforce agencies, tribal governments, and outlying areas regarding lessons learned and promising practices in addressing the needs of job seekers affected by SUD and potential employers. (Recommendation 2)
We provided a draft of this report to DOL and HHS for review and comment. In its formal comments, which are reproduced in appendix I, DOL agreed with our recommendations. DOL also provided technical comments, which we incorporated as appropriate. HHS did not have comments. In its response, DOL noted that throughout our report, we refer to SUD, but that its targeted grants are limited to addressing SUD caused by opioids. While our report focuses on SUD more broadly, many of the efforts states and federal agencies are involved in focus on opioid use disorder, as a result of HHS's emergency declaration. DOL also stated that it was in the process of announcing another round of grants in partnership with the Delta Regional Authority and the Appalachian Regional Commission, part of which will be available to address opioid or other SUD. DOL also noted that grant-funding limitations, including the availability of appropriated funds, make it difficult to address states' concerns about not having enough time to spend their grant funds, and suggested that states may consider applying for a new grant. We have reflected this point of view in the final report.
In response to our first recommendation, DOL officials said they anticipate providing information and technical assistance to help workforce system grantees understand how they can address the impacts of SUD on the workforce. ETA plans to issue guidance by the end of 2020 to share promising practices and describe how WIOA funds can be used to support job seekers in recovery and employers. In response to our second recommendation, DOL officials said ETA has created resources that are available to all states based on its experience administering some of the targeted grants. ETA officials cited the recently published literature review and companion resource guide, and said they also plan to share the evaluation of the Phase 1 grants widely when it is available, including any resources or tools developed by states that were awarded Phase 1 grants. In addition, ETA plans to host at least one webinar to share additional promising practices from the targeted grants that could be useful to local workforce boards around the country.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Labor, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Labor
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Danielle Giese (Assistant Director), Amy Sweet (Analyst-in-Charge), Linda Lootens Siegel, and Anna Kallschmidt made key contributions to this report. Also contributing to this report were Deborah Bland, Alex Galuten, Natalie Herzog, Tom James, Bill Keller, Sheila R. McCoy, Corinna Nicolaou, Monica Savoy, Almeta Spencer, Tonnye Connor-White, and Greg Whitney.
Why GAO Did This Study
The Department of Health and Human Services declared the opioid crisis a public health emergency in October 2017. DOL has awarded grants to help address this crisis. GAO was asked to examine how WIOA-funded programs are addressing the employment and training needs of those affected by SUD. This report examines (1) how workforce agencies in selected states are using WIOA funding to address employment and training needs, (2) challenges agencies face in addressing employment and training needs, and (3) how DOL is supporting communities affected by SUD. GAO interviewed officials in four of the 10 states that received DOL grants in the early award rounds (as of March 2019)—Maryland, New Hampshire, Ohio, and Washington—and two that did not—Alabama and Arizona; reviewed related documentation and relevant federal laws and regulations; and interviewed DOL officials and researchers, selected for their knowledge about these issues.
What GAO Found
Workforce officials GAO interviewed in four of the 10 states receiving targeted Department of Labor (DOL) grants as of March 2019 said they were using Workforce Innovation and Opportunity Act (WIOA) funding to help meet the unique needs of those affected by substance use disorder (SUD). These officials, who said they had limited experience serving those affected by SUD, worked with required organizational partners and hired specialists to assist job seekers and to provide intensive job readiness services. However, these efforts are relatively new and outcomes are not yet known. Workforce officials GAO interviewed in two selected states without targeted grants said they had viewed SUD primarily as a public health issue, but had recently taken some steps to address it. For example, one state added a workforce subcommittee to an existing opioid task force.
State and local workforce officials in all six states identified a range of challenges they face in addressing the needs of SUD-affected job seekers. For example, criminal history or a lack of transportation may make it difficult for these job seekers to obtain and maintain employment. Officials said another challenge is finding employers who are willing to hire those in recovery. They stated that employers are concerned about the risks to their businesses, such as potential employee relapse and possible negative reaction from customers. Officials were seeking more information and assistance to help address such concerns.
DOL officials said they support SUD-affected communities mainly by providing information to states that apply for and receive targeted grants. However, officials in two selected states expressed uncertainty about DOL's expectations of states in serving the needs of SUD-affected job seekers and potential employers. Officials in another state said they were unclear on whether they could use non-targeted funds to continue targeted grant activities. GAO's review of related DOL guidance found that it does not provide specific information on expectations of states or the use of WIOA funds outside of targeted grants to address this issue. Further, while DOL has disseminated some information on serving job seekers with SUD (such as in quarterly calls with grant recipients), it does not plan to share information that grantees submit to the agency, such as lessons learned and successes, with all states. Doing so could help states meet the training and employment needs of those in recovery, and the needs of potential employers.
What GAO Recommends
GAO recommends that DOL clarify (1) its expectations of state workforce agencies and (2) how WIOA funding can be used in addressing the needs of those affected by SUD and potential employers, and share information with all states on lessons learned and promising practices. DOL agreed with GAO's recommendations.
Background
DOD uses military and commercial satellite communications (SATCOM) to meet its global communications requirements. DOD acquires wideband capacity through two methods:
DOD purpose-built: DOD obtains some of its SATCOM through its purpose-built systems, which include Wideband Global SATCOM (WGS) satellites. While DOD awards contracts to commercial companies to build these systems, the department is responsible for the systems' procurement, operations, and sustainment; therefore, they are considered purpose-built.
Commercial contracts: DOD also purchases commercial SATCOM services to supplement its purpose-built systems, such as for satisfying users who have needs beyond available military satellite resources, supporting training on ground systems, or meeting the needs of unique users. In these cases, DOD acquires commercial SATCOM bandwidth through several competitively selected vendors, who are responsible for operating and sustaining their own systems.
Military SATCOM architectures fall into three types: protected, which provides secure, assured communications; wideband, which supports worldwide capacity for high data rate communications, including high-quality voice and imagery; and narrowband, which provides reliable and secure communications less vulnerable to adverse weather conditions or other physical limitations, such as distance, dense foliage, and terrain. DOD's primary wideband satellite communications system, WGS, currently provides a portion of DOD's required SATCOM bandwidth, but the Air Force estimates its satellite constellation's capabilities will begin to degrade in the late 2020s. The Air Force is adding at least one more satellite to the WGS constellation and plans for an enhanced WGS-11 to provide the capacity of two satellites. During the Wideband AOA, DOD estimated that adding this satellite to the constellation would extend the availability of wideband communications to 2031. According to the Air Force, there is potential for adding a 12th WGS satellite to the constellation.
Like other types of space systems, DOD's wideband SATCOM systems generally involve four types of interrelated segments that make a space capability fully functional. As illustrated in figure 1, they include (1) the space segment, namely the satellites; (2) the ground segment, which provides network services and includes satellite and payload control systems and data processing subsystems and facilities; (3) user equipment, such as radios, terminals, and routers needed by the warfighter to use the capability; and (4) launch vehicles and facilities. Within the space segment, satellites operate in several different types of orbits to meet different communication and mission needs, as shown in figure 2. The orbital location of a satellite can affect its capacity to transmit data, or what parts of the Earth can receive its signal. For example, highly elliptical orbits are necessary for providing long dwell times over northern latitudes due to the curvature of the Earth, while other orbits cover remaining latitudes.
Wideband satellites operate in different radio frequency spectrum bands. DOD typically relies on the C, X, Ku, and Ka-bands to provide wideband connectivity, determined by where and how users are operating. Each of these frequency bands has advantages and disadvantages for various applications. Satellite transponders operating at the lower C-band frequencies are less susceptible to degradation from rain than other bands.
In the United States, the X-band is specifically designated for use by the U.S. government and the North Atlantic Treaty Organization. The Ku-band operates at higher frequencies and can communicate with smaller antennas and offer more flexibility. The still-higher-frequency Ka-band satellites can transmit more data than C, X, and Ku-band satellites, but their signals are more susceptible to degradation from water vapor and rain than satellites in lower frequency bands. Commercial satellite communication providers have historically operated primarily in the Ku-band but are now expanding services in the Ka-band to offer higher data rates.
AOA Process and Best Practices
An AOA is a key first step in DOD's acquisition process and assesses alternative solutions for addressing future needs. DOD acquisition guidance provides the purpose and procedures associated with conducting an AOA to support decision making. DOD experts in areas such as cost estimating, technological analysis, and acquisitions, along with military and commercial stakeholders, comprise the AOA study team. The study team is involved in the day-to-day work of the AOA process and conducts the analyses that form the foundation of the assessment. During the AOA study period, the study team develops alternatives to satisfy capability gaps, which it assesses against pre-established performance requirements. We have identified 22 best practices for an AOA process. Of these, 6 best practices are associated with a "comprehensive" AOA. Comprehensive means that the AOA process ensures that the mission need is defined in a way to allow for a robust set of alternatives, that no alternatives are omitted, and that each alternative is examined thoroughly for the project's entire life cycle. Without a clearly defined mission need and a comprehensive list of alternatives, the AOA process could overlook the alternative that best meets the mission need. Furthermore, without considering the complete life cycle of each alternative, decision makers will not have a comprehensive picture of the alternatives analyzed.
DOD Conducted a Comprehensive Analysis of Wideband SATCOM Alternatives
DOD completed its analysis of wideband SATCOM alternatives in June 2018 and identified 11 alternatives that represent several possible approaches to SATCOM acquisitions. We found the Wideband AOA to be a comprehensive assessment.
DOD Developed Alternatives to Inform Future SATCOM Decisions
The Office of the Secretary of Defense for Acquisition and Sustainment completed the Wideband AOA in June 2018 to support decision making for future wideband architectures. A SATCOM architecture comprises several subsystems, which can include the number, type, orbital location, and capacity of satellites and the associated ground or user segments. WGS constellation satellites will begin reaching their end of life in the early 2030s, which means DOD will need to begin launching replacement system satellites in the late 2020s. DOD satellite systems take, on average, over 7 years to develop and launch the first satellite of a purpose-built system. Given these time frames, the Wideband AOA study team focused on possible alternatives DOD could begin developing as early as 2019. In October 2016, the Office of the Secretary of Defense-Cost Assessment and Program Evaluation developed the Wideband Communications Services Analysis of Alternatives Study Plan. This Study Plan provided the schedule and tasks to be conducted for the Wideband AOA.
These tasks included identifying study questions to be addressed and listing measures of performance and effectiveness. The Study Plan also described the organizational structure and methodology for executing the Wideband AOA. The Wideband AOA study team developed 11 alternatives that broadly represented three different acquisition approaches: legacy DOD SATCOM procurement focused on purpose-built systems with some commercially contracted services; commercial-focused SATCOM procurement; and a strategy that would transition from a mainly purpose-built system to a more commercial SATCOM-oriented model. Historically, DOD has bought purpose-built SATCOM assets, including satellites and supporting ground systems, while contracting for supplemental commercial bandwidth. Table 1 summarizes the architectures and these approaches.
The Wideband AOA Process Was Comprehensive
Our assessment of the Wideband AOA found that it met our criteria for a comprehensive AOA process. Table 2 shows our determinations of how fully the Wideband AOA met each of our six best practices. Appendix I provides more detail on our AOA best practices. Based on our analysis, we found that the Wideband AOA study team thoroughly addressed a wide range of possible satellite system alternatives. Moreover, the Wideband AOA study examined the ground segment systems—including user terminals—which will communicate with the satellite system DOD chooses to replace WGS. Although user terminals were not the primary focus of this AOA, DOD officials told us this effort was the first time DOD has studied and consolidated department-wide costs for these terminals, which they said provided valuable context to decision makers. We discuss this new information on terminals in further detail later in this report.
In Accordance with Its Study Plan, DOD Used Multiple Methods to Obtain Stakeholder Input
As set forth in the AOA Study Plan, the Wideband AOA study team solicited and incorporated input from stakeholders across DOD, such as the military services, operational users, and SATCOM partner nations. The study team also solicited and incorporated information from commercial SATCOM vendors to inform its alternatives. Additionally, the Wideband AOA study team incorporated information from interrelated studies, referred to as pilots and pathfinders, that the Air Force and Defense Information Systems Agency conducted. These studies recommended ongoing experimentation and adaptation to identify, incorporate, and guide future commercial SATCOM development, as well as changes to DOD's approach to SATCOM acquisitions.
Military and Commercial Stakeholders Provided Input to the AOA
As set forth in its Study Plan, the Wideband AOA study team obtained military input from across DOD and information from commercial SATCOM vendors to inform its alternatives. AOA working groups were one of several mechanisms DOD used to obtain stakeholder input. The AOA study plan directed the establishment of eight working groups to consolidate subject matter experts for relevant SATCOM topics, as shown in table 3. Each working group, task force, and team conducted its analysis and wrote an appendix to the AOA report summarizing its methodology, inputs, and results. Each team also provided its own conclusions or recommendations, which contributed to the overall findings and recommendations of the AOA report. Military service representatives who participated in the Wideband AOA described to us how their personnel were involved in many or all of the working groups.
AOA study leaders also emphasized the quality of the input from the working groups and were confident the AOA successfully captured the perspectives of acquisition, operational, and user communities—personnel responsible for buying, controlling, and using wideband SATCOM. In addition to the working groups, the Wideband AOA study team developed functional requirements for the alternatives by requesting SATCOM user demand data from the services, and invited SATCOM partner nations to participate in the AOA, some of which accepted. These efforts provided additional information from user communities. Wideband AOA study team leaders described how they relied on a formal Joint Chiefs of Staff process to obtain inputs from the military services on their current and projected bandwidth demands. Through this process, the department obtained SATCOM user demand data from combatant commands, military services, and their sub-commands. The AOA study team then used these results to develop an aggregate user demand projection that was foundational to the AOA. Any viable alternative had to provide sufficient bandwidth to meet future user demand.
DOD requested inputs from commercial SATCOM vendors, and the Commercial Working Group used these to identify the space system subcomponents (namely, technical characteristics including frequency bands, orbit, and satellite mass) that the Technologies and Alternatives Working Group eventually combined into the 11 final alternatives. The Commercial Working Group's intent in identifying these subcomponents was to represent capabilities the SATCOM industry will have on orbit by 2023, without depicting any single vendor's potential system. The Commercial Working Group also incorporated results from DOD pilot and pathfinder efforts (discussed below) to develop a roadmap for DOD to implement an enterprise management approach to SATCOM procurement and operations.
DOD Pilot and Pathfinder Efforts Provided Additional Information to the Wideband AOA Study Team
The Air Force and Defense Information Systems Agency conducted interrelated pilot and pathfinder studies before and during the Wideband AOA that provided information on SATCOM business arrangements, user terminal prototyping, and acquisition efficiencies. In 2014 and 2015, Congress authorized, and then directed, DOD to carry out a pilot program on the acquisition of commercial satellite communication services. As part of this pilot, DOD initiated pathfinder projects to test the feasibility of these new business arrangements. The Air Force and Defense Information Systems Agency studied and prototyped methods to improve commercial SATCOM acquisition and provide more flexible satellite connections for mobile SATCOM users. The agencies did so by contracting with commercial SATCOM providers for the following:
Air Force Pilot – define and demonstrate prototyping to improve access to commercial SATCOM. The Air Force completed phases 1 and 2 of this 3-phase pilot program, studying preferential purchasing approaches that incentivize industry and the types of SATCOM architectures that enable such purchasing, such as a managed services approach that consolidates commercial SATCOM procurement for DOD users. Phase 1 studied commercial satellite communication architecture and business structures. The Wideband AOA's Commercial Working Group used the phase 1 results in its modeling of SATCOM enterprise management.
Phase 2 demonstrated a flexible modem-to-terminal interface to allow a terminal to "roam" or switch between different manufacturers' satellite constellations. Phase 3 is ongoing and focuses on network integration risk reduction efforts.
Air Force Pathfinders – prove that innovative business arrangements can meet DOD requirements and reduce costs. Through the pathfinder research efforts, the Air Force purchased an on-orbit transponder as well as a pre-launch transponder to demonstrate different strategies for buying SATCOM. The final pathfinder effort is ongoing and is to demonstrate how access to shared bandwidth and more flexible ground systems can improve SATCOM access for warfighters. These types of capabilities help users move more quickly and easily, with a reliable SATCOM connection.
Defense Information Systems Agency Pathfinders – examine how acquisition efficiencies improve SATCOM services. The pathfinders' findings provided observations on market trends for SATCOM contracting, namely that pricing will continue to decrease. The pathfinders also showed that DOD's typical SATCOM requirements are not stable from year to year, meaning DOD cannot accurately predict when or where it will need surge SATCOM capacity. The pathfinders also identified management challenges to aggregating SATCOM requirements.
The pilot and pathfinder efforts recommended ongoing experimentation and adaptation to identify, incorporate, and guide developing commercial SATCOM capabilities, as well as changes to DOD's traditional approach to SATCOM acquisitions. In particular, both the Air Force and Defense Information Systems Agency recommended that DOD adapt to changing business models, especially for managed services in commercial SATCOM, in which DOD would purchase SATCOM services but would not own or manage the systems and data rates. Changing business models could also include greater coordination with the SATCOM industry, so DOD can better incorporate commercial technology into future systems. The Defense Information Systems Agency also recommended that DOD pursue an alignment of common types of user terminals and SATCOM architectures. For example, many programs use different approaches to procuring terminals and SATCOM architectures, which prevents DOD from taking advantage of commonalities that could save resources. Such commonalities include users in the same geographic area. These Air Force and Defense Information Systems Agency recommendations overlap with half of the findings and recommendations of the Wideband AOA.
DOD Concluded That Future Wideband SATCOM Requires a Hybrid Approach and More Knowledge, but It Lacks a Plan to Implement AOA Recommendations
DOD concluded in the Wideband AOA that integrating purpose-built satellite systems and commercially provided systems into a hybrid architecture would be more cost effective and capable than any single purpose-built or commercial system alone. The AOA study team recommended actions to obtain more information on transitioning to a more integrated architecture of purpose-built and commercial systems and reducing risk. However, DOD does not have a plan to implement these recommendations and inform timely decision making.
DOD Concluded That Future Wideband Communications Require a Hybrid Approach
During the AOA, DOD found that integrating purpose-built satellites and commercially provided systems into a hybrid architecture would save costs and provide more capability than any single purpose-built or commercial system alone.
The department currently uses a mix of purpose-built and commercial SATCOM contracts, but DOD has not historically managed these systems in coordination, or with an enterprise approach. DOD considered 11 architectures in its final analysis, and all were to some extent hybrids of purpose-built and commercial systems because the AOA study team found that DOD requires a combination of military and commercial system capabilities. The Wideband AOA report identified three of the 11 potential architectures that would best meet DOD's wideband SATCOM needs:

Legacy Purpose-Built and Commercial Contracting Architecture - Procure and field a new purpose-built constellation for X- and Ka-band capabilities with anti-jam technologies and upgraded antennas. DOD would continue to contract for commercial SATCOM as needed.

Commercial-Oriented Architecture - Pursue advanced commercial high-capacity satellites with steerable beams over the Ka-band. Also procure 10 purpose-built satellites to meet the military's requirement for X-band communications.

Transitional Step to Commercial Architecture - Transition to a commercially managed services architecture in low-Earth orbit for approximately 5,000 users over the long term. DOD would procure and field the modernized, purpose-built legacy architecture described above, then modify its suite of user terminals to align with the new low-Earth orbit satellites, emphasizing a cost-effective strategy to do so. For users who do not transition to the new commercial satellites, the purpose-built constellation provides continued X- and Ka-band capability.

During the Wideband AOA, DOD found that any post-WGS solution must continue to provide purpose-built SATCOM capabilities. For example, some users require X-band communications and identified this as the single most important capability to maintain. However, commercial constellations provide limited X-band communications because of this band's historical use for military communications. The companies and international partners that do offer X-band communications provide fragmented coverage that does not fully meet DOD's needs. In addition, commercial satellite constellations do not offer services in all of the areas where DOD operates, such as over oceans and in polar regions. At the same time, because purpose-built systems alone cannot meet all military requirements, DOD found it will need to rely on commercial capabilities as part of a future architecture. Consequently, the AOA study team assessed alternatives that would expand DOD's use of emerging commercial technologies. For example, DOD expects certain operations that rely on wideband SATCOM, like aerial vehicle flights, to increase and drive demand for commercial SATCOM capabilities. Moreover, the AOA study team found that emerging commercial capabilities could meet routine military needs, such as training, at a competitive cost. The AOA study team concluded that integrating these capabilities into a future architecture would be beneficial.

AOA Recommendations Focus on Gaining Additional Knowledge for Decision-Making and Reducing Risk

In its Wideband AOA report, the AOA study team made a series of recommendations focused on maintaining current wideband capabilities and overcoming near-term information gaps in transitioning to new SATCOM acquisition and management approaches. All of the recommendations focused on gaining information needed to transition to a hybrid architecture of purpose-built and commercial systems in the long term.
Table 4 provides examples of DOD's recommendations and the additional knowledge DOD needs to obtain as it pursues a post-WGS solution. The Wideband AOA recommendations also addressed risks associated with any new SATCOM architecture, which the study team found include (1) the uncertain stability and maturity of emergent commercial SATCOM systems and (2) the magnitude of replacing or modifying SATCOM user terminals.

Commercial Technology Stability and Maturity: DOD found in the Wideband AOA that the commercial SATCOM market needs time to grow and stabilize as industry seeks to build a consumer base, especially for low-Earth-orbit-based internet services. The AOA study team found that if commercial companies cannot close their business cases around proposed solutions, DOD investments or programs that rely on those proposed solutions may fail. Further, many commercial systems, especially those based in low-Earth orbit, are still maturing. SATCOM providers have not yet worked closely with DOD to see how they would need to modify such constellations to operate with future DOD systems, including ground systems. Wideband AOA stakeholders, both military and commercial, also described their struggle to share information on technical requirements, new capabilities, and pricing. For example, military stakeholders wanted more detailed engineering data on emerging commercial capabilities, while commercial stakeholders wanted additional information on proposed alternatives for providing cost data. Commercial stakeholders also sought to protect their proprietary information. DOD's recommendation to invest in and shape commercial SATCOM development is aimed at reducing this risk and improving information sharing between DOD and the SATCOM industry.

Replacing or Modifying User Terminals: Managing user terminal development and upgrades is complex and, according to DOD officials, is one of the largest challenges the department faces in selecting a post-WGS architecture. In its analysis, DOD found that managing upgrade or replacement costs and schedules for over 17,000 terminals of approximately 135 different designs was a major challenge. The AOA's analysis showed that out-of-cycle terminal replacement would drive significant costs and affect DOD operations. For example, vehicles like Humvees or ships have maintenance periods that are scheduled years in advance. Changing terminals could require unscheduled maintenance, potentially disrupt personnel planning, and cost more than if the terminals were upgraded on their planned refresh cycles. Certain users also cannot transition to commercial SATCOM and still meet operational requirements. For example, Navy stakeholders told us their terminals were not considered for transition to commercial systems during the Wideband AOA due to a number of issues, including Ku-band radio frequency interference, all-weather availability, open-ocean coverage, and network constraints. Both our past work and the Wideband AOA found that DOD faces ongoing risks in aligning its satellite and ground control systems. We have reported that these risks have arisen, in part, because user terminal development programs are typically managed by different military acquisition organizations than those managing the satellites and ground control systems. The AOA recommendation to develop an enterprise SATCOM terminal strategy is aimed at reducing the risk user terminals present to DOD's post-WGS SATCOM architecture.
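The cost dynamic of out-of-cycle replacement can be made concrete with a small illustration. The following Python sketch is illustrative only: the fleet sizes, unit costs, and the assumed 35 percent out-of-cycle cost premium are hypothetical figures we chose for the example, not numbers from the AOA. It shows how forcing terminal replacement ahead of planned refresh cycles inflates total cost.

    # Minimal sketch, with entirely hypothetical fleets and unit costs, of why
    # replacing terminals out of cycle costs more than waiting for each
    # fleet's planned refresh cycle.
    TERMINAL_FLEETS = [
        # (terminal count, planned refresh year, unit cost at planned refresh)
        (5_000, 2024, 250_000),
        (8_000, 2027, 150_000),
        (4_000, 2030, 400_000),
    ]
    OUT_OF_CYCLE_PREMIUM = 0.35  # assumed extra cost of unscheduled replacement

    def replacement_cost(target_year: int) -> float:
        """Cost to make every terminal compatible with a new architecture by target_year."""
        total = 0.0
        for count, refresh_year, unit_cost in TERMINAL_FLEETS:
            premium = OUT_OF_CYCLE_PREMIUM if refresh_year > target_year else 0.0
            total += count * unit_cost * (1.0 + premium)
        return total

    print(f"Replace all terminals by 2025: ${replacement_cost(2025) / 1e9:.2f} billion")
    print(f"Replace on planned cycles (by 2031): ${replacement_cost(2031) / 1e9:.2f} billion")

Under these assumptions, the early transition costs roughly $5.0 billion versus $4.1 billion for replacement aligned with planned refresh cycles, before counting the operational disruption the AOA also identified.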
DOD Does Not Have a Formal Plan to Implement AOA Recommendations

DOD's recommendations that focus on gaining additional knowledge align with GAO's acquisition best practices for knowledge-based decision-making and risk reduction, but DOD lacks a formal plan to implement these recommendations. More specifically, DOD's recommendation to gain knowledge about the viability and maturity of commercial SATCOM system technologies corresponds with our best practices that outline the importance of ensuring needed technologies are proven to work as intended before programs begin. According to officials we spoke with from various DOD organizations involved in the Wideband AOA and SATCOM acquisitions, they have ongoing work that provides relevant information, including Air Force pathfinders and a study of ground infrastructure supporting WGS. However, these officials told us that there is no formal plan to guide post-AOA efforts, including coordinating and providing the knowledge DOD needs to mitigate risks and inform timely decisions on DOD's next wideband communications architecture. If DOD does not develop and implement a plan, including roles, responsibilities, and time frames, for building knowledge, then DOD risks not having enough information to make timely, knowledge-based decisions on systems that provide critical communications for military operations. For example, the Wideband AOA recommended developing an enterprise terminal strategy to centralize user terminal procurement. Without a plan to guide such an effort, it is unclear what organization within DOD would begin working with the military services to develop this strategy and potentially adjust the services' acquisition approach to terminals.

At the same time, it is important to note that DOD space acquisition is facing a changing leadership environment, and developing and implementing a plan for post-AOA efforts would need to take place in the midst of such changes. In 2016, we reported that for over 2 decades, fragmentation and overlap in DOD space acquisition management and oversight had led to ineffective and untimely decision-making, causing delays in space system development and increasing the risk of capability gaps across critical weapons systems. DOD and Congress are taking steps designed to ultimately streamline decision-making and clarify authorities for space; however, it will likely take several years to implement such changes. Moreover, it is unclear to what extent these changes will affect acquisition of user terminals, a long-standing challenge for DOD because the organizations responsible for buying terminals are not the same organizations that buy satellites. The changes being instituted include:

Re-established United States Space Command. In August 2019, the President re-established the U.S. Space Command as a unified combatant command. DOD is forming the new Space Command from offices within Strategic Command that are responsible for space operations, with the mission to protect and defend space assets. Although U.S. Space Command does not conduct space acquisitions, it is responsible for the satellite operators who help systems like WGS function, making them stakeholders in a post-WGS decision.

Transferred commercial SATCOM procurement to Air Force Space Command. At the direction of the National Defense Authorization Act for Fiscal Year 2018, Air Force Space Command assumed responsibility for procuring commercial satellite communications for DOD in December 2018.
The Defense Information Systems Agency previously managed most commercial SATCOM acquisitions and is still responsible for other types of ground segment systems.

Proposed Establishment of a United States Space Force. Early in 2019, the President and DOD proposed the establishment of a U.S. Space Force as a sixth branch of the U.S. Armed Forces within the Department of the Air Force. The Space Force would include the uniformed and civilian personnel conducting and directly supporting space operations from all DOD armed forces, assume responsibilities for all major military space acquisition programs (including those for SATCOM), and create the appropriate career tracks for military and civilian space personnel. Congress is deliberating the final composition of the proposed Space Force.

Established the Space Development Agency. In March 2019, DOD established the Space Development Agency to unify and integrate efforts across DOD to define, develop, and field innovative satellite solutions, including communications. The Space Development Agency is focused on a low-Earth-orbit constellation to provide communications and other satellite-based operational support for DOD, which could also provide information for selecting a post-WGS architecture. As of this time, DOD has not determined how this new organization will mesh with the Air Force Space and Missile Systems Center, which acquires satellite systems; the Defense Advanced Research Projects Agency, which creates breakthrough technologies and capabilities; and similar organizations within the department.

Conclusions

The Wideband AOA's recommendations for gathering additional information to reduce risk and inform DOD's decision-making are good first steps toward ensuring any post-WGS architecture will effectively and efficiently meet DOD's needs. The addition of one or two more WGS satellites provides some extra time for DOD to field new satellites, avoid capability gaps, and implement the AOA recommendations. However, given the typical 7-year development timelines for space systems, DOD will need to decide on a way forward within the next several years so that new satellites will be available when needed. Attempting to implement the Wideband AOA recommendations without developing a plan for guiding multiple knowledge-building efforts across DOD raises the risk that information gaps will not be closed in time to be useful, or will not be closed at all. Consequently, it is important for DOD to coordinate these efforts and focus on how best to obtain a future wideband architecture that provides critical communications for military operations.

Recommendation for Executive Action

The Secretary of Defense should ensure that the Under Secretary of Defense for Acquisition and Sustainment develop and implement a plan to guide and coordinate efforts to implement the Wideband AOA recommendations to support timely, informed decisions on its next wideband satellite communications architecture. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOD for review and comment. DOD provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or by email at chaplainc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Best Practices for the Analysis of Alternatives Process

The analysis of alternatives (AOA) process is an analytical study that is intended to compare the operational effectiveness, cost, and risks of a number of potential alternatives to address valid needs and shortfalls in operational capability. This process helps ensure that the best alternative that satisfies the mission need is chosen on the basis of the selection criteria, such as safety, cost, or schedule. GAO identified 22 best practices for an AOA process by (1) compiling and reviewing commonly mentioned AOA policies and guidance used by different government and private-sector entities and (2) incorporating experts' comments on a draft set of practices to develop a final set of practices. These practices can be applied to a wide range of activities and situations in which a preferred alternative must be selected from a set of possible options, as well as to a broad range of capability areas, projects, and programs. These practices can also provide a framework to help ensure that entities consistently and reliably select the project alternative that best meets the mission need. The guidance below is meant as an overview of the key principles that lead to a successful AOA process, not as a "how to" guide with detailed instructions for each best practice identified, because each entity may have its own process in place. The 22 best practices that GAO identified are grouped into the following five phases:

Initialize the AOA Process: includes best practices that are applied before starting the process of identifying, analyzing, and selecting alternatives. This includes determining the mission need and functional requirements, developing the study time frame, creating a study plan, and determining who conducts the analysis.

Identify Alternatives: includes best practices that help ensure the alternatives that will be analyzed are sufficient, diverse, and viable.

Analyze Alternatives: includes best practices that compare the alternatives selected for analysis in terms of costs, benefits, and risks. The best practices in this category help ensure that the team conducting the analysis uses a standard, quantitative process to analyze the alternatives.

Document and Review the AOA Process: includes best practices that are applied throughout the AOA process, such as documenting in a single document all steps taken to initialize, identify, and analyze alternatives, selecting a preferred alternative, and independently reviewing the AOA.

Select a Preferred Alternative: includes the final step of comparing alternatives and selecting a preferred alternative that best meets the mission need.

The five phases address different themes of analysis necessary to complete the AOA process and span the process from its beginning (defining the mission need and functional requirements) through its final step (selecting a preferred alternative). There are three key entities directly involved in the AOA process: the customer, the decision maker, and the AOA team. The customer refers to the group that implements the final decision (i.e., the program office, agency, and the like). A complex AOA process that affects multiple agencies can have multiple customers.
The decision maker is the person or entity who signs off on the final decision and analysis documented by the AOA report, and who will select the preferred alternative based on the established selection criteria. The decision maker should remain informed throughout the AOA process. For example, the decision maker could form a committee, consisting of management and other groups independent of the AOA process who possess the required technical expertise or broad organizational knowledge, to keep the decision maker apprised of and to inform the AOA process. The AOA team is the group involved in the day-to-day work of the AOA process and conducts the identification and assessment of alternatives that is the foundation of the AOA process.

We assessed the Department of Defense's (DOD) Wideband Communication Services AOA against the "comprehensive" characteristic. Overall, the AOA met the six best practices we identified. Table 5 shows the relevant AOA best practices for the "comprehensive" characteristic.

Appendix II: Department of Defense Wideband Communications Services Analysis of Alternatives Recommendations

The Department of Defense (DOD) made the following recommendations in its Wideband Communications Services Analysis of Alternatives (AOA) report:

1. Immediately conduct a business case analysis that examines incorporating anti-jam and cybersecurity features that improve upon legacy capability into the Wideband Global SATCOM (WGS) Space Vehicle (SV) 11/12 procurement.

2. Investigate the impacts of WGS SV 11/12 on ground infrastructure, mission management, and user terminals to understand necessary modifications.

3. Develop and implement a DOD Enterprise Satellite Communications (SATCOM) Terminal Strategy that targets an approved Joint Information Environment architecture, reduces the complexity of terminal diversity and programmatic governance, facilitates rapid modernization, and drives innovative business reforms, optimizing cost, schedule, performance, and interoperability.

4. Fund a purpose-built capability post-WGS SV 11/12 meeting user demands, including all-weather capabilities, with a recommended start in fiscal year 2020, including consideration of alternate orbital regimes and approaches to cost-effectively meet needs while addressing proliferation, protection, and resiliency. The purpose is to ensure availability of DOD SATCOM resources to meet requirements where anticipated commercial offerings fail to materialize or are insufficient.

5. Continue efforts to invest in and shape commercial capabilities to support future DOD needs, including protection features, resilience, contested and all-weather capabilities, and polar coverage. Additionally, invest in and shape commercial industry development and risk reduction efforts focused on cybersecurity, terminal militarization/weapon system integration, management and control, technology assessment and development, and spectrum access.

6. Continue to fund existing and new SATCOM risk reduction efforts, evaluate blended commercial/military constellations, and expand the scope of pilots to include development of architectural standards and interface controls for enterprise management and control, terminal recapitalization plans, and means for terminals and/or weapon system platforms to transition satellite constellations and any DOD managed services.
7. Fund the design and implementation of a prototype wideband enterprise SATCOM management and control capability based on an approved Joint Information Environment architecture that integrates the management of military, commercial, and international partner-provided SATCOM services and networks and supports the Enterprise Operational Management requirement in the Joint Space Communications Layer Initial Capabilities Document Change 1.

8. Plan for investment in Protected Tactical Waveform capabilities for commercial and military band terminals to align with the Protected Anti-Jam Tactical SATCOM planned ground and space milestones.

9. Fund pilot efforts to identify risks and opportunities in using commercially managed services for the Army's Combat Support Logistics Very Small Aperture Terminals and ways to mitigate those risks.

10. Pursue partnership opportunities with Norway and Canada to achieve earlier Arctic coverage capability.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Cristina T. Chaplain, (202) 512-4841 or chaplainc@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Rich Horiuchi, Assistant Director; Burns C. Eckert (Analyst in Charge); Erin Cohen; Emile Ettedgui; Jon Felbinger; Kurt Gurka; Stephanie Gustafson; Jennifer Leotta; Roxanna Sun; and Jay Tallon made key contributions to this report.
Why GAO Did This Study

DOD officials estimate spending an average of $4 billion each year to acquire and sustain wideband satellite communications that provide fast and reliable voice, video, and data transmissions critical to military operations. DOD is considering how to meet its future wideband needs across many different operating environments and scenarios. The National Defense Authorization Act for Fiscal Year 2016 required DOD to conduct a Wideband Communications Services AOA to identify ways to replace current systems as the satellites reach the end of their service lives. The National Defense Authorization Act for Fiscal Year 2017 contained a provision for GAO to assess DOD's analysis. This report addresses (1) whether the Wideband AOA was comprehensive, (2) how DOD solicited input from stakeholders, and (3) the conclusions DOD reached through the Wideband AOA. GAO reviewed the Wideband AOA along with DOD policies, documentation, and analyses; interviewed DOD officials and commercial stakeholders; and assessed the AOA against best practices for a comprehensive AOA process.

What GAO Found

The Department of Defense (DOD) conducted a comprehensive analysis of alternatives (AOA) process for wideband satellite communications, as determined through an assessment of the AOA against relevant GAO best practices. A comprehensive analysis of alternatives process indicates that the analysis team thoroughly addressed a wide range of possible satellite system alternatives. DOD used multiple methods to obtain stakeholder input, in accordance with its Wideband AOA study plan. For example, the study team incorporated input from across the military services and operational users, among others. Moreover, the Air Force and Defense Information Systems Agency conducted interrelated studies to provide additional information to the Wideband study team. DOD's analysis concluded that integrating military and commercial systems into a hybrid architecture would be more cost effective and capable than either acquisition approach alone. However, DOD also found that it needs more information to select its next satellite communications architecture and made recommendations for further study. Examples of these recommendations include:

Develop an enterprise satellite communications terminal strategy – DOD found the magnitude of replacing user terminals to work with new systems was challenging and that more information on emerging technology and possible changes to terminal acquisition approaches would help DOD address this challenge.

Invest in commercial technologies – DOD found that it lacked detailed technical information on commercial systems' cyber protections and that additional information on such protections would help DOD determine the extent to which they would meet DOD's needs.

Such recommendations align with GAO's acquisition best practices for knowledge-based decision-making and have the potential to improve the department's satellite communications acquisitions. However, DOD stakeholders said there is no formal plan to guide and coordinate implementation of the AOA recommendations. Without such a plan, DOD is at increased risk of not having the information it needs to make timely, knowledge-based decisions on future systems to provide critical communications for military operations.

What GAO Recommends

GAO is recommending that DOD develop a plan to guide implementation of the Wideband AOA recommendations. DOD provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Background

Congress passed TRIA in 2002 to address some of the challenges the insurance industry and businesses faced after the September 11 terrorist attacks. For example, after the attacks, insurers left the market, excluded terrorism risk coverage from policies, or steeply increased premiums. The Real Estate Roundtable reported in 2002 that nearly $16 billion of real estate projects in 17 states were stalled or cancelled because of the lack of coverage for terrorism risk (because many businesses are required to have coverage for terrorism risk as a condition for a mortgage loan). The purpose of TRIA is to (1) protect consumers by addressing market disruptions and ensuring the continued widespread availability and affordability of commercial property/casualty insurance for terrorism risk; and (2) allow for a transitional period for private markets to stabilize, resume pricing of such insurance, and build capacity to absorb any future losses, while preserving state insurance regulation and consumer protections. By law, an insurer's coverage for terrorism losses must not differ materially from the terms, amounts, and other coverage limitations applicable to losses arising from other events. For example, an insurer offering $100 million in commercial property coverage also must offer $100 million in commercial property coverage for certified acts of terrorism. Insurers may charge a separate premium to cover their terrorism risk. TRIA requires insurers to make terrorism coverage on certain lines of property/casualty insurance (such as coverage for fire, workers' compensation, and liability) available to commercial policyholders (such as businesses), although TRIA does not require commercial policyholders to buy it. The federal government does not collect an up-front charge from insurers for the government's coverage of terrorism risk under TRIA. In a 2019 report, we noted that the federal government has multiple programs that can provide compensation to specific third parties if they suffer certain losses from future adverse events, and the federal government may not always charge premiums for accepting this risk of loss. However, under TRIA, the government must recoup at least some of its losses following a certified act of terrorism, as discussed below. TRIA has not caused financial liabilities to the federal government to date, but it could require large, previously unbudgeted expenditures by the federal government if an event occurred.

Certification of an Act of Terrorism for Purposes of TRIA and Claims Processing

For insurers to start submitting claims and receiving payments to cover terrorism losses, Treasury must first certify an event as an act of terrorism under TRIA. Certification requires the Secretary of the Treasury to evaluate the event based on two criteria:

1. Did the event meet the nonmonetary definition established under TRIA? Defining an event as an act of terrorism includes determining whether it was "committed by an individual or individuals as part of an effort to coerce the civilian population of the United States or to influence the policy or affect the conduct of the United States Government by coercion." It also includes determining whether it was a "violent act or an act that is dangerous" to human life, property, or infrastructure, and whether it resulted in damage within the United States or certain areas outside the United States.
As part of this determination, the Secretary of the Treasury must consult with the Attorney General and the Secretary of the Department of Homeland Security before certifying an event.

2. Did the event cause at least $5 million in insurance losses in TRIA-eligible lines? TRIA prohibits the Secretary of the Treasury from certifying acts of terrorism unless insurance losses exceed this threshold.

In 2004, Treasury issued regulations to implement TRIA's procedures for filing insurer claims for payment of the federal share of compensation for insured losses. Within 7 days after certification of an act of terrorism, a Treasury contractor is to activate a web-based system for receiving claims from insurers and responding to insurers that seek assistance.

Loss Sharing under TRIA

The Terrorism Risk Insurance Program provides for shared public and private compensation for insured losses resulting from certified acts of terrorism. Under the current program, if an event were certified as an act of terrorism and insured losses exceeded $200 million, an individual insurer that experienced losses first would have to satisfy a deductible before receiving federal coverage. An insurer's deductible under TRIA is 20 percent of its previous year's direct earned premiums in TRIA-eligible lines. After the insurer pays its deductible, the federal government would reimburse the insurer for 80 percent of its additional losses, and the insurer would be responsible for the remaining 20 percent. Annual coverage for losses is capped: neither private insurers nor the federal government cover aggregate industry insured losses in excess of $100 billion.

After an act of terrorism is certified and once claims are paid, TRIA requires Treasury to recoup part of the federal share of losses in some instances. Under this provision, when insurers' uncompensated insured losses are less than a certain amount (up to $41 billion for 2020), Treasury must impose policyholder premium surcharges on commercial property/casualty insurance policies until total industry payments reach 140 percent of any mandatory recoupment amount. When the amount of federal assistance exceeds this mandatory recoupment amount, TRIA allows for discretionary recoupment.

Prior TRIA reauthorizations decreased federal responsibility for losses and increased private-sector responsibility for losses, but the 2019 reauthorization of TRIA made few changes to the program. For instance, the 2015 reauthorization required incremental decreases in the federal share of losses over 5 years (to 2020). The 2019 reauthorization extended the program to December 31, 2027, and proportionately adjusted the dates by which the Secretary must recoup policyholder surcharges to the new reauthorized time frame, but it did not change the federal share of losses.
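The loss-sharing mechanics described above reduce to a short calculation. The following Python sketch is illustrative only: the insurer's premium and loss figures are hypothetical, and it does not model the allocation rules that would apply as aggregate industry losses approach the $100 billion cap.

    # Minimal sketch of TRIA's individual-insurer loss sharing (2020 parameters).
    PROGRAM_TRIGGER = 200e6   # federal sharing requires aggregate insured losses above $200 million
    DEDUCTIBLE_RATE = 0.20    # deductible: 20 percent of prior-year direct earned premium
    FEDERAL_SHARE = 0.80      # federal share of losses above the deductible

    def split_losses(insurer_losses, prior_year_premium, aggregate_losses):
        """Return (insurer share, federal share) of one insurer's certified losses."""
        if aggregate_losses <= PROGRAM_TRIGGER:
            return insurer_losses, 0.0  # federal sharing is not triggered
        # Allocation beyond the $100 billion aggregate industry cap is not modeled here.
        deductible = DEDUCTIBLE_RATE * prior_year_premium
        federal = FEDERAL_SHARE * max(0.0, insurer_losses - deductible)
        return insurer_losses - federal, federal

    # A hypothetical insurer with $500 million in prior-year TRIA-eligible premium
    # and $300 million in losses from a $1 billion certified event:
    insurer, federal = split_losses(300e6, 500e6, 1e9)
    print(f"insurer retains ${insurer / 1e6:.0f} million; federal share ${federal / 1e6:.0f} million")

In this example the deductible is $100 million, so the federal government would reimburse 80 percent of the remaining $200 million ($160 million) and the insurer would retain $140 million, before any recoupment surcharges.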
TRIA-Eligible Lines of Insurance

TRIA covers insured losses in eligible lines that result from a certified act of terrorism (see table 1). Many lines of commercial property/casualty insurance are eligible for TRIA, such as workers' compensation, fire, and commercial multiple peril (multiperil) lines. States generally require that workers' compensation insurance cover terrorism risk and do not permit exclusions, including for terrorism, according to Treasury. Workers' compensation covers an employer's liability for medical care and physical rehabilitation of injured workers and helps to replace these workers' lost wages. TRIA excludes certain other lines (such as personal property and casualty insurance and health and life insurance). Terrorism coverage typically is embedded in all-risk property policies but also may be available in stand-alone policies, according to Treasury:

Embedded. Most policyholders have terrorism risk insurance coverage embedded in a policy that covers other risks. Embedded policies are subject to TRIA's "make available" requirements. In the event of a certified act of terrorism, policyholders would be covered if they have not declined terrorism coverage.

Stand-alone. Stand-alone terrorism policies provide coverage only for terrorism risks. Insurers may provide stand-alone terrorism coverage through "certified" policies that are subject to TRIA terms and conditions and provide coverage only in the event of a certified act of terrorism. Alternatively, insurers may provide terrorism coverage through "noncertified" policies that do not meet TRIA terms and conditions. Such noncertified policies cover terrorism-related losses regardless of whether Treasury certifies an event, but losses paid by insurers would not be eligible for reimbursement under TRIA.

Nonconventional Coverage under TRIA

Nonconventional terrorism risks generally include nuclear, biological, chemical, or radiological (NBCR) weapons, as well as cyber risks. Predicting losses associated with nonconventional risks can be particularly challenging because of the difficulty in predicting terrorists' intentions and the potentially catastrophic losses that could result. TRIA is silent on NBCR and cyber risks, but Treasury has clarified how these nonconventional risks are covered under TRIA. In 2004, Treasury issued an interpretive letter clarifying that the act's definition of insured loss does not exclude losses resulting from nuclear, biological, or chemical attacks, and does not preclude Treasury from certifying a terrorist attack involving such weapons. According to Treasury's interpretive letter, the program covers insured losses from NBCR events resulting from a certified act of terrorism. However, for TRIA provisions to apply, insurers must provide coverage for those perils. Most insurers are not required to provide NBCR coverage and generally have attempted to limit their exposure to NBCR risks by largely excluding NBCR events from property and casualty coverage. In December 2016, Treasury issued guidance clarifying that, to the extent that insurers write cyber insurance in TRIA-eligible lines, the TRIA provisions apply. We further discuss Treasury's guidance on cyber risk later in this report.

Program Administration and Reporting Requirements

TRIA authorizes Treasury to administer the Terrorism Risk Insurance Program. The Secretary of the Treasury administers the program with the assistance of Treasury's Federal Insurance Office, according to Treasury officials. TRIA requires Treasury to conduct a biennial study of the effectiveness of the program. The 2015 TRIA reauthorization added a requirement that insurers submit information to Treasury about the coverage they write for terrorism risk, including the lines of insurance with exposure to such risk, the premiums earned on such coverage, and the participation rate for such coverage. The 2019 reauthorization added a requirement that Treasury report on the availability and affordability of terrorism risk insurance, including an analysis specifically for places of worship.
Since 2016, Treasury has completed annual assessments of the program, including a report on the effectiveness of the program in June 2018. Treasury's reports focused specifically on small insurers in June 2017 and June 2019. Treasury conducts an annual data call to collect information for the required studies and for purposes of analysis and program administration. Participation in the data call is mandatory for all insurers that write commercial property and casualty policies in lines of insurance subject to TRIA, subject to two exceptions. Treasury collects data separately for the following four groups of insurers:

Small insurers have both a policyholder surplus and prior-year TRIA-eligible direct earned premium of less than five times the program trigger.

Nonsmall insurers have policyholder surplus or the specified premiums above the small threshold and are not classified as captive or alien surplus lines insurers.

Captive insurers are special-purpose insurance companies set up by commercial businesses to self-insure risks arising from the owners' business activities.

Alien surplus lines insurers are foreign insurers that are qualified to do business in the United States through a process administered by NAIC.

The Market for Terrorism Risk Insurance Is Currently Stable with the Support of TRIA

The market for terrorism risk insurance has been stable in recent years, with coverage both available and generally affordable. According to our reviews of policy language, reports from and interviews with Treasury, researchers, insurers, and other industry stakeholders, the expiration of TRIA and the absence of an alternative backstop to terrorism risk insurance would cause disruptions to the market.

Terrorism Insurance Generally Is Available and Affordable in the United States

Reports from Treasury and an industry risk-management firm generally suggest there has been a stable market for terrorism risk insurance in recent years, with the coverage available and generally affordable in the United States. According to Treasury's reports analyzing industry data, the majority of commercial policyholders in the United States purchase terrorism risk insurance, and at a relatively small percentage of total premiums. The market for terrorism risk insurance in the United States continues to remain competitive for most buyers, according to 2018 and 2019 reports by Marsh, an insurance risk-management firm. Marsh attributed the competitive market for buyers to a steady decline in the frequency of global terrorist incidents and minimal insurance claims.

Take-up Rates

Since all insurers must offer terrorism risk insurance, the availability of such coverage can be measured in terms of take-up rates, the rates at which policyholders select terrorism risk insurance. These rates have remained stable in recent years, according to Treasury. However, take-up rates vary by line of insurance, industry sector of the policyholder, geographic location, and type of insurer writing the policies. Terrorism risk coverage is considered available when insurers offer coverage for losses resulting from a terrorism event, and take-up rates are an indication of how insurers are complying with TRIA's "make available" requirement, according to Treasury. Treasury found take-up rates by insurer category ranged from 62 to 78 percent in its 2018 report, depending on how the rates were measured.
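One reason measured take-up rates vary with the method is that a rate weighted by direct earned premium can diverge from a rate based on policy counts. The short Python sketch below uses entirely hypothetical policy records, not Treasury data, to illustrate the difference.

    # Hypothetical policy records:
    # (TRIA-eligible direct earned premium, policyholder purchased terrorism coverage?)
    policies = [
        (1_000_000, True),
        (250_000, False),
        (500_000, True),
        (2_000_000, True),
        (750_000, False),
    ]

    by_count = sum(1 for _, covered in policies if covered) / len(policies)
    by_premium = (sum(p for p, covered in policies if covered)
                  / sum(p for p, _ in policies))
    print(f"Take-up by policy count:          {by_count:.0%}")    # 60%
    print(f"Take-up by direct earned premium: {by_premium:.0%}")  # 78%

Because larger policies are more likely to carry terrorism coverage in this example, the premium-weighted rate (78 percent) is well above the policy-count rate (60 percent), even though both describe the same book of business.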
According to Marsh's 2019 report, the take-up rate for terrorism coverage embedded in policies that cover other risks has been around 60 percent for the past several years.

Lines of insurance. According to Treasury's 2018 report, take-up rates across lines of insurance ranged from 43 percent in the products liability line to 83 percent in the boiler and machinery line in 2017, as measured by direct earned premium (see fig. 1). The take-up rate for cyber insurance coverage is in the middle of the range, relative to other lines of coverage. Specifically, the take-up rate in 2018 for terrorism risk insurance under cyber policies (by TRIA-eligible direct earned premium) was 69 percent for stand-alone policies, up from 50 percent in 2017, as reported by Treasury. For coverage that is part of an embedded policy, the 2018 rate was 63 percent, up from 54 percent in the prior year.

Industry sectors. Take-up rates across the industry sectors of the policyholders varied widely, ranging from 7 percent in the information sector to 76 percent in the finance and insurance sector in 2017, according to Treasury's 2018 report (see fig. 2). Marsh found in its 2019 report that commercial policyholders in the education, media, financial, and real estate sectors were the most frequent buyers of terrorism risk insurance in 2018.

Geographic location. Take-up rates varied by location, with the highest rates in the Northeast. In Treasury's 2018 report, the rates ranged from 50 to 75 percent by state (see fig. 3). In its 2018 report, Marsh noted that the Northeast had both the highest rate of purchase and the most expensive coverage, and said that these trends were because of the presence of major metropolitan areas (such as New York and Boston) that have high-value targets for terrorism.

Premiums

According to Treasury's 2018 report, premiums associated with terrorism coverage have remained relatively consistent in recent years and are a small part of overall premiums for embedded policies. According to that report, about 80 percent of the market (as measured by terrorism risk direct earned premium) comprises embedded policies and 20 percent stand-alone policies, and the price for each varies. Premiums for terrorism risk insurance embedded in a property/casualty policy are priced at a relatively small percentage of the total premium charged for the policy and typically range from 2.5 to 3.0 percent when a charge is made. In about 30 percent of policies, insurers do not charge for providing terrorism risk coverage. Stand-alone policies vary significantly in terms of cost because of differences in the relative size or nature of exposures covered under each policy, whether the policy was certified, and the type of insurer providing the coverage, according to Treasury's data. Premiums also varied across lines covered and insurer types, with the most premium collected for workers' compensation. According to Treasury's 2018 report, about 36 percent of the total premium collected in TRIA-eligible insurance lines was for workers' compensation. In stand-alone cyber policies, an average of 6.2 percent of the total premium was allocated to terrorism risk. See table 2 for more information on how premiums vary across lines of coverage.

Small Insurers and Captives

Trends for small and captive insurers in many instances differ from trends for nonsmall insurers.

Small insurers. Total market share for small insurers within TRIA-eligible lines of coverage declined, relative to nonsmall insurers, over the past decade.
The small insurer market share, as measured by direct earned premium, fell from 18.6 percent in 2009 to 12.6 percent in 2018. (Despite that overall decline, there was an increase from 2016 to 2018, as more insurers were defined as small because of the increased dollar amount of the program trigger.) In addition, take-up rates tended to be lower for policies written by small insurers, compared to nonsmall insurers, both within most individual lines and across the overall market. Small insurers generally charged less premium for terrorism risk insurance overall than nonsmall insurers, although they may charge proportionally higher premiums in some lines of insurance, such as commercial multiple peril (liability). According to Treasury's 2019 report, small insurers allocated a lower percentage of direct earned premium for terrorism risk than nonsmall insurers. Furthermore, small insurers were more likely to offer terrorism risk insurance for free. In addition, small insurers earned a higher percentage of their total program direct earned premium in commercial multiple peril and workers' compensation lines than did nonsmall insurers. The workers' compensation market is subject to very high loss amounts with no defined limits of liability and significant potential aggregation risks.

Captive insurers. Like small insurers, captive insurers often have premiums that are small relative to other insurer categories. However, captive insurers generally can offer broader coverage than commercial policies, according to Marsh's 2019 report. The report states that a captive insurer often offers policies that cost less than policies from commercial insurers, which also often restrict coverage for NBCR or cyber events. In addition, according to Treasury, a highly concentrated event affecting only captive insurers (or small insurers) carries a higher likelihood that the affected insurers' losses would not meet the program trigger, and therefore would not be reimbursed under the program. In this case, captive insurers could incur significant losses.

Absence of Federal Program Could Disrupt Markets Based on Analysis of Policies and Selected Stakeholder Perspectives

There could be significant disruptions to the insurance market if no federal terrorism risk insurance program existed, according to our reviews of policy language, reports from and interviews with Treasury, researchers, insurers, and other industry stakeholders. As Marsh noted in its 2019 report, TRIA's federal backstop remains crucial to the continued stability of the terrorism risk insurance market. In its 2018 report, Treasury concluded that TRIA had made the coverage available and affordable, supporting a relatively stable market over the past decade. According to NAIC, TRIA helps foster the existence of a broader market for risks that otherwise would be either largely uninsured or borne by taxpayers. In the absence of a loss-sharing program, insurers likely would limit coverage, exit certain markets, or attempt to increase capacity, according to our review of reports from the federal government, researchers, and industry entities, and interviews with industry stakeholders. For example:

Limiting coverage. Most insurers begin the process to limit their coverage more than a year before any TRIA expiration by filing conditional exclusions, which, in effect, would limit terrorism risk coverage in the event TRIA expired.
According to one industry association, insurers have filed conditional exclusions before each of TRIA's reauthorizations, although the exclusions are not commonly used for policies more than a year away from a potential expiration of the law. Our analysis of several policy endorsements filed with conditional exclusions suggests that, in the event of TRIA's expiration, insurers likely would limit the total losses associated with an attack and exclude certain types of terrorist attacks. We reviewed a nongeneralizable sample of conditional exclusions provided by the Insurance Services Office, which representatives say are widely used in the industry, and several selected conditional exclusions from individual insurers. These policies suggest that insurers filing conditional exclusions would cap coverage for losses associated with an attack at $25 million and entirely exclude losses caused by NBCR weapons. One policyholder association said that TRIA's potential expiration and the need to file conditional exclusions result in a chaotic process, with insurers needing to file exclusions in each state in which they operate.

Exiting markets. In the absence of a loss-sharing program, some insurers likely would exit certain markets, no longer offering terrorism coverage in specific geographic locations or lines of insurance, according to federal and industry reports and interviews with stakeholders. Small and midsize insurers in particular may withdraw from providing terrorism risk coverage entirely, according to one industry association. Furthermore, insurers providing NBCR or workers' compensation coverage may decide to limit the policy terms or stop providing coverage because of the risk of increased losses and potential exposures, according to Treasury. In addition, workers' compensation risks are greater in large, densely populated metropolitan areas, and there are higher aggregation risks for insurers in large metropolitan areas, particularly for events involving NBCR weapons. Small insurers tend to operate on a regional basis in a smaller number of states than nonsmall insurers, and thus have a significant presence in individual local markets, according to Treasury.

Options for increasing capacity. Insurers told us that they also likely would increase their premiums and purchase additional reinsurance for terrorism coverage in the absence of a program, although their ability to do so may be limited. One insurer said that premiums likely would go up significantly, although rate increases are subject to state limits. According to another insurer, reinsurance coverage for terrorism risk likely would become more limited and be provided at notably higher rates. Insurers that are public companies may be able to increase capital through the stock market to build loss-absorbing capacity to help mitigate their increased loss exposures if TRIA expired. However, mutual insurers are not owned by shareholders and therefore cannot raise capital through the sale of shares; instead, they would have to rely on other ways of building capital.

Several industry stakeholders pointed to particular challenges for certain insurers and lines of coverage if TRIA expired and Congress did not establish another loss-sharing program.

Small insurers. Small insurers may be particularly vulnerable, facing ratings downgrades or otherwise being forced to exit the market for terrorism risk coverage, according to industry stakeholders.
In May 2019, AM Best, a credit rating agency that focuses on the insurance industry, said insurers that did not limit exposure to terrorism risk losses before TRIA's potential expiration in 2020 could face negative ratings pressure. AM Best identified 30 insurers (of about 230 with significant terrorism risk exposure) that failed stress tests, but said in October 2019 that implementation of plans established by these insurers would mitigate concerns about insolvency in the event TRIA expired and a terrorist attack occurred. The 30 insurers generally were small or midsize insurers.

Captive insurers. Captives (entities that businesses set up to self-insure) generally require private reinsurance to insure against terrorism risk, and it is unclear whether there would be sufficient capacity in the reinsurance market to obtain this coverage without TRIA. Captives tend to insure against a broader range of risks, including NBCR and cyber risks, when that coverage is unavailable or unaffordable in the market. One industry association representing captive insurers noted that captive insurance likely would become a more common way to insure against terrorism risk without a federal loss-sharing program. However, it warned that captive insurers may lack the capacity to ramp up operations quickly enough or secure the necessary reinsurance to fully absorb the risk of increased losses.

NBCR coverage. Coverage for terrorism attacks involving NBCR weapons, which is already limited, would be further limited without a federal loss-sharing program, according to industry stakeholders. One industry association of insurance agents said that insurers' capacity to absorb losses from such an attack would be a challenge without a backstop, as it was during the aftermath of the September 11 attacks, when there was very little capital devoted to coverage for terrorism risk. The representatives said this capacity would be even more limited for an NBCR attack, as losses could be significantly greater and few insurers offer NBCR coverage.

Workers' compensation coverage. The cost of coverage for workers' compensation likely would increase significantly, and availability likely would decrease, without a federal loss-sharing program, according to researchers. Insurers have less flexibility to control terrorism exposure in workers' compensation coverage, relative to other TRIA-eligible lines, according to Treasury. As noted earlier, state laws require employers to have the coverage and prohibit insurers from excluding terrorism risk, including NBCR risks, from workers' compensation policies, according to Treasury. Insurers might respond to the absence of a federal loss-sharing program by not providing workers' compensation coverage to employers, particularly those near high-risk targets in major metropolitan areas, according to a 2014 RAND Corporation policy brief issued before TRIA's 2014 expiration. The brief added that this would force high-risk employers in these areas to obtain the required coverage from the residual market (state-run insurers or mechanisms of last resort), in which premiums are higher.

In addition, the absence of a loss-sharing program could disrupt policyholders and the greater economy by stalling new building projects. Some stakeholders noted concerns that new building projects might be stalled if the law expired, similar to concerns in the weeks and months following the September 11 terrorist attacks.
At that time, policymakers were concerned that the reduction in coverage by insurers uncertain of future losses would render commercial developers in high-risk areas unable to finance their projects, according to a report by the Congressional Budget Office. An insurance industry association told us businesses might find it difficult to obtain terrorism risk insurance, particularly for high-value projects in cities considered high risk, such as New York and Washington, D.C.

Treasury Has Certification and Claims Processes but Communication on Certification Is Limited

Treasury has a process to certify acts of terrorism. However, industry stakeholders said Treasury does not publicly communicate information about the process, and the lack of timely information might negatively affect the speed with which insurers respond to policyholder claims. Additionally, Treasury is to consult with DOJ and DHS, but DHS's understanding of its role during the certification process appears inconsistent with Treasury's purpose, and no agreements document these roles. Treasury also has a process to pay insurer claims and has issued guidance concerning how cyber insurance is treated under TRIA.

Treasury Incorporated Flexibility into Its Certification Process

Treasury has established a process for certifying an event as an act of terrorism that provides the Secretary a flexible time period for gathering information after an event. Before insurers may submit claims under TRIA, the Secretary must certify an event as an act of terrorism. Congress directed Treasury to study the certification process in the 2015 reauthorization of TRIA, including the establishment of a "reasonable timeline" for a certification determination. In response, Treasury sought and received public comments on the process and issued its conclusions in an October 2015 report. According to this report, seven of the nine comments received recommended that Treasury adopt a timeline governing the certification decision. But Treasury concluded the certification process must provide the Secretary with flexibility to gather information after an event, and thus a "rigid" timeline for certification would not be appropriate. Instead, Treasury concluded that "enhanced public communication" about the status of the Secretary's assessment of an act may address commenters' concerns. Treasury established an interim final rule for the certification process in December 2016. Treasury's process for certification decisions includes an internal review phase and a public review phase before Treasury can make a determination (see fig. 4).

Internal review phase. During this phase, Treasury establishes and convenes a certification management team and prepares a brief for the Secretary, according to interviews with agency officials and our review of Treasury documents. Treasury may conclude the internal review of an event without progressing to the public review phase.

Public review phase. The public phase of the certification process includes communication requirements set by Treasury's certification regulations. TRIA regulations direct that, within 30 days of the Secretary commencing review of an event, Treasury must publish a notice in the Federal Register informing the public that an act is under review for certification. Treasury also may publish a notice that it is not reviewing an act for certification. The regulation does not establish a timeline by which the Secretary must begin reviewing an event, which leaves the timeline for certification flexible.
Treasury’s public announcement that an event is under review begins a series of requirements for public notification and consultation with other agencies, according to TRIA regulations. As of March 2020, Treasury has not conducted the public review phase of its certification process. When the Secretary of the Treasury’s review concludes that an act satisfies the elements of certification, the Secretary then is to consult with the Attorney General and the Secretary of Homeland Security within 30 days, or as soon as practicable. According to our review of Treasury documents, this Secretary-level consultation is to occur immediately before Treasury issues a certification decision. According to interviews with officials and Treasury documents, Treasury engages with staff in specific offices in DHS and DOJ much earlier in the process, during the internal review phase. Coordination with officials in these offices continues throughout both phases of the certification process. For example, Treasury documents state it may hold conferences with DHS and DOJ to discuss factors relevant to making a recommendation to certify an event. No later than 5 business days after the certification determination, Treasury must publish a statement in the Federal Register notifying the public. By contrast, the UK’s terrorism risk insurance program publicly communicates clear timelines by which government entities must certify potential events. The UK Treasury has 21 days to certify an event once the program administrator requests a formal review. This deadline was extended from 10 days in 2015 to allow the police enough time to determine if an event met the definition of terrorism, according to UK Treasury officials. This timeline was chosen to balance providing time for certification with ensuring that businesses would see claims paid quickly. Regular communication with industry stakeholders after an event maintains confidence in the certification process, they said. Treasury’s Internal Review Phase Generally Not Publicly Communicated Treasury’s procedures for certifying an event do not include public communication of its internal review phase. Steps Treasury is to take during this internal review stage include establishing and convening a certification management team and preparing a brief for the Secretary, according to interviews with agency officials and our review of Treasury documents. To date, Treasury has not communicated to industry stakeholders whether it was reviewing events as possible acts of terrorism. Treasury officials told us that after events have occurred, they have looked into the circumstances and the amount of insurance losses caused. These considerations did not progress past the internal review phase of the certification process, which meant Treasury did not publicly communicate that it was reviewing these events for certification. For example, Treasury conducted internal reviews after the Boston Marathon bombing in 2013, but Treasury did not publicly communicate that it was looking into the event or that it had decided not to formally review the event for certification. Treasury ultimately did not certify the event because insured losses from the bombing on TRIA-eligible lines of insurance totaled $2.1 million, which was under the $5 million certification threshold, according to Massachusetts state insurance officials. 
In interviews and formal public comments on Treasury's proposed certification rule, some industry stakeholders said the Boston Marathon bombing raised questions about the certification process because they viewed the event as a clear terrorist attack. It was unclear to some industry stakeholders whether the event was not certified because it did not reach the monetary loss threshold for certification, which was unknown at the time, or because it did not meet TRIA's nonmonetary requirement for establishing intent. Insurers and industry stakeholders told us they were uncertain about the length of time Treasury would take after future events to communicate that it was considering certification. All five insurers we interviewed said they would like improved communication from Treasury after an event like the Boston Marathon bombing. Treasury officials said that in response to the Boston Marathon bombing, they documented procedures for certification. However, these procedures do not include steps to communicate publicly during the internal review phase, according to our review of Treasury documents. If a future event analogous to the Boston Marathon bombing were to occur, under Treasury's current procedures it would not communicate the status of its internal review publicly, and public communication would not occur if it chose to conclude its review before the public review phase began. Implication of Certification of an Act of Terrorism for Terrorism Risk Insurance Act (TRIA) Coverage TRIA is designed to share losses from a certified act of terrorism between insurers and the government. For insurers to receive support from this federal backstop, they must offer insurance for "acts of terrorism" defined in a manner consistent with the law, which requires certification by the Secretary of the Treasury. A certification determination affects policyholders differently, depending on whether they purchased or declined terrorism coverage. Specifically, insurers would pay claims from policyholders that purchased terrorism coverage in the event of a certified act of terrorism, whereas insurers would not pay claims from policyholders that declined terrorism coverage. Insurers could face uncertainty about whether to pay claims on both policy types, however, if the Secretary of the Treasury does not make a certification determination. This is because the definition of an act of terrorism in insurance policies for both policy types is often linked to certification. Industry stakeholders and insurers we interviewed said they need to know whether Treasury is considering certifying an event to help provide certainty in paying policyholder claims and receiving reinsurance payments (see sidebar). Policyholder claims. Industry stakeholders and four of five insurers we interviewed said Treasury's lack of communication about an event's potential certification can lead to uncertainty about whether to pay claims on policies—both those that include terrorism coverage and those that exclude it. Delays in paying claims while waiting for communication about certification put them at risk of violating their agreements with policyholders and state laws, they said. Insurance policies typically have timeline requirements for the insurer to investigate and pay claims, and some state laws require insurers to pay claims by a certain date, according to NAIC. Treasury officials said state requirements to pay claims by a certain date may receive extensions under state regulation when uncertainty requires that a claim investigation continue.
One insurer with which we met said that a statement from Treasury when it was considering an event would help it determine whether to pay claims. Reinsurance. Industry stakeholders said uncertainty would delay reinsurance coverage. If insurers delayed paying policyholder claims because of uncertainty about certification of a terrorist attack, reinsurers also might delay payments to insurers. Reinsurance payments are often triggered by the insurer's payment of a claim to the policyholder. Additionally, some reinsurance contracts may define terrorism specifically as a Treasury-certified act of terrorism, and may be contingent on Treasury making a certification determination. The goals of TRIA are to foster market stability and to protect consumers by addressing market disruptions. In addition, according to federal standards for internal control, management should externally communicate the necessary quality information to achieve the entity's objectives, including communicating with external parties. Treasury officials said they have not chosen to set a deadline for public communication after a potential terrorist event because they need flexibility to collect accurate information about events whose circumstances can vary widely. In the preamble to its interim final rule on certification, Treasury concluded that public communication about the certification process provides the public with necessary information while avoiding the problems Treasury raised with establishing a strict timeline. However, Treasury's internal review phase includes no public communication. Additionally, Treasury may conclude its review of an event without progressing to the public review phase and therefore may not issue any public communications on the event. Without public communication about when it is considering certification, Treasury risks contributing to market uncertainty rather than stability after an attack. Treasury Consults with DOJ and DHS, but No Agreements Document the Agencies' Roles TRIA requires cabinet-level consultation with DOJ and DHS in the public review phase of the certification process, but Treasury officials also conduct staff-level consultations. Treasury officials consult with DOJ's National Security Division and DHS's Support Anti-terrorism by Fostering Effective Technologies (SAFETY) Act office during the internal review phase of the certification process and have identified a single point of contact in each office (see sidebar). Consultation Agencies The Department of the Treasury consults with offices in two other federal agencies, the Department of Homeland Security (DHS) and the Department of Justice (DOJ), that have the following responsibilities: The Support Anti-terrorism by Fostering Effective Technologies (SAFETY) Act Office in DHS provides liability protections to manufacturers and sellers of specified anti-terrorism technologies. The Office of SAFETY Act Implementation reviews whether an attack meets the SAFETY Act definition of an act of terrorism and whether terrorists used such technology in the course of an attack, according to DHS officials. The Secretary of Homeland Security then determines whether an act has met the size and intent definitions of the SAFETY Act. DOJ's National Security Division also makes recommendations for the International Terrorism Victim Expense Reimbursement Program, which provides funds to compensate victims of international terrorism occurring outside the United States.
The Assistant Attorney General for National Security, in consultation with the National Counterterrorism Center, then determines whether to certify an event for the program, according to DOJ officials. DOJ officials said their office provides Treasury with information that helps determine whether an event meets TRIA's definition of an act of terrorism. Such information might include things like who claimed responsibility for the event or evidence of the motivation for the attack. Officials said they provide this information upon request within 24 hours after an event. DOJ officials said the process they use to review events for TRIA purposes is similar to that used for DOJ's International Terrorism Victim Expense Reimbursement Program. In contrast, DHS officials said their office does not provide information about an event to Treasury for purposes of certification, and that they believed DOJ would have the majority of this information. They said DHS informs Treasury about whether the event is being reviewed for the purposes of the SAFETY Act and whether terrorists used SAFETY Act-qualified technology (see sidebar). DHS officials said this is the information Treasury has requested from them and they consult with Treasury because many applicants for SAFETY Act designations have insurance policies backed by TRIA. Treasury officials stated that they expect each of these DHS and DOJ offices to serve as a single point of contact and to coordinate with other relevant offices in its agency as needed. DOJ officials confirmed they see this as their role, and said they would work with other offices in DOJ, including the Federal Bureau of Investigation, to consult with Treasury on certifying an act of terrorism. However, DHS officials said they do not see this as their role. The Secretary of the Treasury must consider, along with monetary requirements, the nature and motivation behind a potential terrorist attack to determine if it meets TRIA's definition of an act of terrorism, according to TRIA regulations. Coordination among Treasury, DOJ, and DHS allows the Secretary access to critical and timely information relevant to certification, according to Treasury. In addition, according to federal internal control standards, management should use quality information to achieve the entity's objectives, which includes identifying information requirements and obtaining relevant data from reliable sources in a timely manner. The standards also state that agencies should use methods such as written documentation to internally and externally communicate the information needed to achieve their objectives. In addition, our 2009 report on disaster planning provides an example of the benefits of clearly defined roles among federal agencies. We reported that defining the roles and responsibilities of stakeholders prior to a disaster could help foster collaboration, and that effective recovery plans should identify specific roles and responsibilities among various stakeholders. However, Treasury has not documented DOJ's and DHS's roles in certification consultations and instead relies on informal relationships with agency staff. This may contribute to the different perspectives DHS officials had on their role in the process. Treasury officials said although they do not have a written agreement, each agency understands its obligation to consult with Treasury in light of TRIA's provisions requiring it. Although each agency told us it understood the certification process, DHS officials and Treasury differed in their understanding of DHS's role in certification.
A documented agreement among the agencies would provide procedures on roles and information sharing to which to refer during the potentially chaotic aftermath of a terrorist attack. As agency staff change over time, documenting these roles and information sharing among Treasury, DOJ, and DHS could help ensure continuity of operations if future events occurred. Furthermore, a written agreement would help Treasury access quality information and help ensure a smooth and timely process for certifying events under TRIA. Treasury Has Developed and Tested a Process for Fulfilling Insurer Claims for the Federal Share of Losses Treasury has a process for fulfilling claims that uses a web-based system developed and operated by a contractor. Once the Secretary certifies an act of terrorism, Treasury is to issue a task order to the contractor, which is to make the claims website operational within 7 business days, according to its contract. The claims process begins for insurers when their total insured losses exceed 50 percent of their deductible within a calendar year, at which point insurers must submit a form notifying Treasury. An insurer may claim the federal share of compensation when its total insured losses exceed its deductible for a calendar year, according to TRIA regulations. The responsibilities of Treasury's contractor include reviewing and testing the web-based claims system; activating and providing ongoing operation of the claims system; receiving and reviewing insurers' required documents for completeness and accuracy; obtaining information from insurers as needed and answering questions by email and telephone; and recommending Treasury pay claims. Treasury's contractor has developed operating guidelines that detail work flows and controls for how it will begin processing claims. The operating guidelines include a plan to transfer existing staff from other responsibilities to operate the claims process, as needed. According to the contractor, staff responsible for processing claims in the event of a certified terrorist attack participate in an annual training session. Treasury's contractor also built quality checks within its web-based system to automatically review submissions. Moreover, Treasury's contractor has tested the web-based claims system. The contractor said it completed more than 40 rounds of readiness testing since 2004. The contractor must conduct readiness testing at least three times a year and test contingency plans and disaster recovery procedures at least annually, according to the contract. In addition, Treasury's contractor developed a demonstration website that is publicly available (see fig. 5). Of the five insurers GAO interviewed, one said it used the demonstration website, two said they had not, and two were unsure whether anyone in the company had used the website. The contractor said it previously has invited insurers to participate in testing. The website outlines the general claims process and includes the forms insurers would submit in the event of a certified terrorist attack. Most industry stakeholders who were familiar with the claims process told us they found it to be clear. Those stakeholders who were unfamiliar with the process said they had no concerns about it at present.
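To make the notification and claim thresholds described above concrete, the following minimal sketch computes when a hypothetical insurer would have to notify Treasury and what federal share it might claim. The two thresholds (notification once insured losses exceed 50 percent of the deductible, and a claim for the federal share once losses exceed the full deductible) come from the process described above; the 20 percent deductible rate and 80 percent federal share are assumptions based on the program's 2020 parameters rather than figures from this report, and the function and all dollar amounts are hypothetical, not part of Treasury's claims system.

```python
# Illustrative sketch of the TRIA notification and claim thresholds
# described above. The 20 percent deductible rate and 80 percent
# federal share are assumed 2020 program parameters, not figures
# from this report; all inputs are hypothetical.

def tria_claim_status(insured_losses, prior_year_premium,
                      deductible_rate=0.20, federal_share=0.80):
    """Return (must_notify, may_claim, federal_payment) for an insurer."""
    deductible = deductible_rate * prior_year_premium
    # Notify Treasury once losses exceed 50 percent of the deductible.
    must_notify = insured_losses > 0.50 * deductible
    # Claim the federal share once losses exceed the full deductible.
    may_claim = insured_losses > deductible
    federal_payment = federal_share * max(0.0, insured_losses - deductible)
    return must_notify, may_claim, federal_payment

# Hypothetical insurer: $500 million in prior-year premium implies a
# $100 million deductible, so $150 million in insured losses crosses
# both thresholds and yields an estimated $40 million federal share.
notify, claim, payment = tria_claim_status(150e6, 500e6)
print(notify, claim, f"${payment:,.0f}")  # True True $40,000,000
```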
Of the five insurers we interviewed, three said the only concern they had regarding the claims process is how quickly Treasury would certify an event and pay insurers' claims. One insurer said the claims process was clear, and one said it was unable to comment because it had not tested the process. Guidance on Cyber Coverage under TRIA Is Clear to Selected Industry Stakeholders In December 2016, Treasury issued guidance clarifying that, to the extent that insurers write cyber insurance under an embedded or stand-alone policy in TRIA-eligible lines, the TRIA provisions apply. In our May 2014 report, we found insurers were uncertain about whether TRIA covered risks from a cyberterrorism attack, and recommended that Treasury clarify whether losses that may result from cyberterrorism were covered under TRIA. Treasury's 2016 guidance included three elements: 1. Treasury considers cyber policies that are reported under the "cyber liability" line for state regulatory purposes to be "property and casualty" insurance under TRIA, and therefore eligible for payment of the federal share of compensation in the event of a certified terrorist attack. 2. Policies would be eligible only if insurers made the same required disclosures to policyholders about the program as other TRIA-eligible lines. 3. Treasury requires insurers to provide disclosures and offers that comply with TRIA and the program regulations on any new or renewal policies reported under the cyber line. Industry stakeholders said that Treasury's guidance about cyber insurance coverage under TRIA was clear. Some industry stakeholders said that there was some initial confusion about the guidance because it indicated the NAIC created a new line for cyber liability on the property/casualty annual statement, although this was not the case. According to NAIC representatives, changes were made to how insurance products were coded for rate-filing purposes, and these changes did not affect the lines of business reported on the property/casualty annual statement state page. Treasury officials said there may have been some ambiguity in how they communicated the 2016 guidance. NAIC representatives said despite this initial confusion, the industry understood the guidance. Industry stakeholders said that questions remain about what type of cyberattack Treasury would certify as an act of terrorism. TRIA's definition of an act of terrorism requires an act "to have been committed by an individual or individuals as part of an effort to coerce the civilian population of the United States or to influence the policy or affect the conduct of the United States government by coercion." However, according to industry stakeholders and industry analysts, the nature of a cyberattack means that tracing and attributing the event to an individual is difficult. Additionally, the Secretary of the Treasury generally may not certify an act if it is committed as part of a war declared by Congress. The Advisory Committee on Risk-Sharing Mechanisms, which provides recommendations to the Federal Insurance Office about risk sharing for terrorism losses, has been researching issues related to cyberterrorism insurance. According to the advisory committee, it will provide Treasury with recommendations regarding this and other issues in spring 2020. Conclusions Since shortly after the attacks of September 11, 2001, the Terrorism Risk Insurance Program has helped to ensure stability in the market for terrorism risk insurance, with the coverage generally available and affordable. However, insurers and policyholders are not aware of whether, and through what process, Treasury considers certifying an event as a terrorism event.
Without public communication about when it is considering certification, Treasury risks contributing to market uncertainty rather than stability after an attack. The purpose of Treasury's required consultation with DHS and DOJ in certifying an event is to provide Treasury the necessary law enforcement, intelligence, and homeland security information within the two agencies' authorities and jurisdictions. However, DHS's understanding of its role in the internal review phase of the certification process appears to differ from this stated purpose. Treasury has established and maintained informal connections with both agencies, but it has not documented these roles. By documenting agreements between Treasury and the two consulting agencies, Treasury can obtain quality information to help ensure a smooth and timely certification process. Recommendations for Executive Action We are making the following three recommendations to Treasury: The Director of the Federal Insurance Office should publicly communicate information about when it is considering certifying an event as an act of terrorism under TRIA. (Recommendation 1) The Director of the Federal Insurance Office should document an agreement with DHS about DHS's role, and how the agencies share information, during the process of certifying an event as an act of terrorism under TRIA. (Recommendation 2) The Director of the Federal Insurance Office should document an agreement with DOJ about DOJ's role, and how the agencies share information, during the process of certifying an event as an act of terrorism under TRIA. (Recommendation 3) Agency Comments We provided a draft of this report to Treasury, DOJ, DHS, and NAIC for review and comment. DOJ and NAIC did not have any comments. Treasury provided written comments through the Federal Insurance Office, which are reproduced in appendix II and discussed below. Treasury and DHS provided technical comments, which we incorporated as appropriate and discuss below. We also solicited and received technical comments from the UK Treasury and incorporated them as appropriate. In its written comments, Treasury agreed with our three recommendations and described how it would address them. In response to our first recommendation, Treasury stated that it will consider potential changes to the certification process in conjunction with the results of the Advisory Committee on Risk-Sharing Mechanisms' review of certification procedures (due in spring 2020). In response to our second and third recommendations, Treasury said that it will further coordinate with DOJ and DHS on their respective roles and evaluate any additional steps to clarify their roles in investigating potential events. In technical comments, DHS questioned our characterization of its role during the certification process. DHS reiterated that it would provide Treasury with information on how DHS handles an incident in relation to the DHS SAFETY Act process, and not information regarding any possible investigation of a terrorist event. DHS stated that this is the information Treasury requested from the office for potential events in the past. However, we found that Treasury has not documented the type of information it expects from each agency during its internal review phase and maintain that information related to the DHS SAFETY Act process is inconsistent with Treasury's purpose for consultation—to obtain law enforcement and intelligence information.
We maintain that documenting the information Treasury expects from each agency would ensure that Treasury obtains the information it needs to make a certification decision. We are sending copies of this report to the Secretary of the Treasury, the Acting Secretary of Homeland Security, the Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology In this report, we use "TRIA" to refer to the Terrorism Risk Insurance Act of 2002 and its subsequent reauthorizations. The objectives of our report were to examine (1) the current market for terrorism risk insurance and TRIA's role in the market; and (2) the Department of the Treasury's (Treasury) certification and claims processes, and industry stakeholders' views on these processes, including guidance on cyber risk coverage. To address these objectives, we reviewed the Terrorism Risk Insurance Act of 2002; the Terrorism Risk Insurance Extension Act of 2005; the Terrorism Risk Insurance Program Reauthorization Acts of 2007, 2015, and 2019; implementing regulations; and congressional records. We also reviewed prior GAO work on this topic. We interviewed officials from Treasury, the National Association of Insurance Commissioners (NAIC), and the Congressional Research Service and reviewed relevant reports from these entities. To obtain information for all our objectives, we also interviewed and reviewed reports from an academic researcher and several industry participants, including insurers, representatives from insurance trade associations (representing insurers, reinsurers, mutual insurers, and captive insurers), risk modeling firms, and a rating agency. Specifically, we obtained information from five insurers. In all interviews, we asked participants about the potential effects of TRIA's expiration on terrorism risk coverage, the effect of changes to the program from 2015 to 2020, and their views on Treasury's certification and claims processes and its guidance on coverage for cyberterrorism. We initially contacted eight insurers—four from among the largest U.S. commercial property and casualty insurers in TRIA-eligible lines of business (according to SNL Financial) and four smaller insurers previously recommended by insurance brokers and trade associations during prior GAO work. Five of these eight insurers, all of whom provided terrorism coverage to businesses, responded to our request and agreed to meet with us. Among these five insurers, two were large, two were small, and one was a captive insurer; two provided workers' compensation and one provided cyber risk coverage. We determined that the information we obtained from these five insurers was sufficient for the purposes of obtaining a range of views of the market, but it is not generalizable to the practices of other insurers not included. To describe the current status of the market for terrorism risk insurance and how the market might be affected if TRIA were to expire, we reviewed annual Treasury reports on the program from 2017, 2018, and 2019, as well as reports from Marsh, an insurance risk-management firm, and other industry stakeholders.
We reviewed these reports for information on affordability and availability of terrorism risk insurance, including data on take-up rates, premiums, geographic coverage, and trends over time. We also reviewed language in insurance policies that excluded some terrorism coverage in the event that TRIA was not reauthorized. To assess Treasury's certification and claims processes, we reviewed documentation on the certification process, including Treasury's internal policies and websites. We interviewed agency officials and the contractor responsible for operating the claims process after a certified terrorist attack, and we reviewed Treasury's contract with this operator and the contractor's internal policies. We also interviewed officials from the Departments of Homeland Security and Justice regarding their role in consulting with the Secretary of the Treasury on certification decisions. We reviewed relevant documents from the Organisation for Economic Co-operation and Development and relevant industry reports from four foreign countries with terrorism risk insurance programs: Australia, Belgium, Israel, and the United Kingdom (UK). We selected these countries because their terrorism risk insurance programs require certification by a government entity to pay claims. We interviewed the terrorism risk insurance pool operator and the certification entity for the UK because this program includes a short (21-day) timeline for certifying terrorist events. Additionally, we interviewed and reviewed documentation from a U.S. company that provides loss estimates, primarily to the insurance-linked securities market, which investors use to determine whether a catastrophe bond has been triggered by an event. We compared Treasury's certification and consultation process against criteria in federal internal control standards on management communication. To determine how cyberterrorism is covered under TRIA and in commercial policies, we reviewed Treasury guidance. We also met with Treasury officials and representatives of the Insurance Services Office, a property/casualty insurance industry association that develops standardized policy language, and reviewed its standard policies for cyber insurance. We also reviewed Treasury reports on cyberterrorism coverage, including data on take-up rates and direct earned premiums for cyberterrorism risks. We conducted this performance audit from April 2019 to April 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of the Treasury Appendix III: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments: In addition to the contact named above, Jill Naamane (Assistant Director), Nathan Gottfried (Analyst in Charge), Anna Blasco, William R. Chatlos, Giselle Cubillos-Moraga, Kaitlan Doying, Karen Jarzynka-Hernandez, May Lee, Barbara Roesmann, Jessica Sandler, Jena Sinkfield, and Rachel Whitaker made significant contributions to this report. Related GAO Products Terrorism Risk Insurance: Market Challenges May Exist for Current Structure and Alternative Approaches. GAO-17-62. Washington, D.C.: January 12, 2017.
Terrorism Risk Insurance: Comparison of Selected Programs in the United States and Foreign Countries. GAO-16-316. Washington, D.C.: April 12, 2016. Terrorism Insurance: Treasury Needs to Collect and Analyze Data to Better Understand Fiscal Exposure and Clarify Guidance. GAO-14-445. Washington, D.C.: May 22, 2014. Terrorism Insurance: Status of Coverage Availability for Attacks Involving Nuclear, Biological, Chemical, or Radiological Weapons. GAO-09-39. Washington, D.C.: December 12, 2008. Terrorism Insurance: Status of Efforts by Policyholders to Obtain Coverage. GAO-08-1057. Washington, D.C.: September 15, 2008. Terrorism Insurance: Implementation of the Terrorism Risk Insurance Act of 2002. GAO-04-307. Washington, D.C.: April 23, 2004.
Why GAO Did This Study TRIA created a federal program to help ensure the availability and affordability of terrorism risk insurance. Insurers must make terrorism risk coverage available to commercial policyholders. The federal government and insurers share losses on such policies resulting from a certified act of terrorism causing at least $5 million of insurance losses. Annual coverage for losses by insurers (who have met their insurer deductible) and the government is limited to $100 billion. The program is set to expire December 31, 2027. GAO was asked to review TRIA. This report examines (1) the current market for terrorism risk insurance and the program's role in the market, and (2) Treasury's processes to certify acts of terrorism and fulfill claims. GAO reviewed Treasury reports and related industry studies, Treasury's guidance and procedures for the program, and insurance policy language. GAO also interviewed Treasury officials and industry stakeholders, including a nongeneralizable sample of insurers of different sizes providing various types of insurance. What GAO Found With the support of a program established under the Terrorism Risk Insurance Act (TRIA) in which the federal government and insurers would share losses in the event of a certified act of terrorism, terrorism risk insurance is generally available and affordable in the United States. For example, the majority of commercial policyholders generally purchased terrorism risk insurance in recent years, according to Department of the Treasury (Treasury) data. The insurance market would be significantly disrupted without a loss-sharing program such as that established under TRIA. Specifically, insurers generally would not have to offer terrorism risk coverage and likely would charge higher premiums in the absence of a loss-sharing arrangement and cap on losses, according to GAO's review of policies and interviews with industry stakeholders, including insurers and insurer associations. Without access to affordable coverage, new building ventures could be delayed and employers could struggle to find affordable workers' compensation coverage. Treasury has processes for certifying terrorist events and fulfilling claims under the program, but a lack of communication about aspects of Treasury's certification process could pose challenges for insurers. Some industry stakeholders, such as insurers and representatives of insurer associations, raised issues about Treasury communications on certification. They cited confusion over why the 2013 Boston Marathon bombing was not certified when they clearly viewed it as a terrorist attack. These industry stakeholders also expressed concern that Treasury never communicated whether it was reviewing the event for certification or its reasons for not certifying it. Most insurers GAO interviewed said such lack of communication by Treasury again could lead to uncertainty about whether to pay claims, putting them at risk of violating state laws and their policyholder agreements. TRIA regulations on certifying acts of terrorism include some public notification requirements but do not require Treasury to communicate when it is considering reviewing an event for certification. One purpose of TRIA is to stabilize the insurance market after a terrorist attack. Public communication of when Treasury is considering an event for certification would reduce uncertainty about which claims insurers should pay and lessen potential disruptions to the market after an attack. 
One step in determining when to certify an event is Treasury's consultation with offices in the Department of Homeland Security (DHS) and Department of Justice (DOJ) to obtain law enforcement, intelligence, and homeland security information. However, GAO found that DHS had a different understanding of its role in this staff consultation process, and Treasury had not documented agreements with either agency. By documenting agreements between Treasury and the two consulting agencies, Treasury can better ensure a smooth and timely certification process. Once an event is certified as an act of terrorism, Treasury has a process for fulfilling claims that uses a web-based system developed and operated by a contractor. As of February 2020, the system had not yet been used because Treasury had not certified any acts of terrorism or paid claims under the program. What GAO Recommends GAO is making three recommendations, including that Treasury publicly communicate when it is considering reviewing an event for TRIA certification and document agreements with both DHS and DOJ on the agencies' roles in the process. Treasury agreed with the recommendations.
Background Navy shipbuilding is a costly and complex endeavor that requires billions of dollars to develop, design, and construct ships. However, the acquisition phase of a ship's life cycle only accounts for approximately 30 percent of a ship program's total life-cycle cost. Notionally, the remaining 70 percent of the life-cycle cost of a ship program is incurred after the Navy delivers new ships to the fleet during the phase known as O&S. DOD guidance states that these long-term sustainment costs are determined in large part by decisions made early in the acquisition process. Approximately 80 percent of a program's O&S costs are fixed at the time the shipbuilding program's requirements are set and the ship is designed. Additionally, we have found that once these decisions are made, it can be very difficult and costly to make changes if sustainment improvements are needed. According to DOD, operational support is a function of several related factors—reliability, availability, maintainability, and cost—that are determined in large part by decisions made before the start of construction. Reliability is the probability that an item, such as a system, can perform a required function under stated conditions for a specified period of time. Availability is a measure of the degree to which an item is in an operable state and can be called upon to work at the start of a mission and at an unknown (random) point in time. In other words, it is the degree to which a system is operable and available for mission tasking when needed. Maintainability is the ability of an item, such as a system, to be retained in or restored to a specified condition when maintenance is performed by skilled personnel using prescribed procedures and resources, at each prescribed level of maintenance and repair. Cost refers to the O&S costs associated with sustaining the ship. When planning for and executing ship sustainment, DOD guidance states that the program manager's goal is generally to find a solution that maximizes reliability, availability, and maintainability within cost constraints. As the Navy acquires its ships, it makes a series of decisions that have implications for how a ship class can be affordably sustained, including decisions about engineering, ship design, equipment selection, and planned maintenance approaches. As such, DOD guidance advises acquisition programs, including Navy shipbuilding programs, to plan for and design reliability, availability, and maintainability into the weapon system early in the acquisition effort. For the purposes of this review, we define early in the acquisition process as the time period between the beginning of the program and the start of construction on the lead ship. Giving attention to these sustainment issues early in the acquisition process is intended to help programs ensure that their ships will be sustainable and affordable over their entire life cycle. Conversely, if reliability, availability, and maintainability are not adequately designed into the ship, there is a risk the ship will cost more to own and operate than expected and will not be available for use when needed by the fleet. Since Navy ships are comprised of numerous systems that need to work together, planning for sustainment and designing reliability, availability, and maintainability into a ship is a complicated task.
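As a minimal sketch of why this is so complicated, consider the standard series-system reliability formula (a textbook formula, not Navy data): if a mission requires every one of several systems to work, mission reliability is the product of the individual system reliabilities, so even individually reliable systems compound into noticeably lower ship-level reliability. All figures below are hypothetical.

```python
# A minimal sketch using the standard series-system reliability
# formula, not Navy data: if a mission requires all systems to work,
# mission reliability is the product of the individual reliabilities.

def mission_reliability(system_reliabilities):
    """Probability that all required systems work (series assumption)."""
    product = 1.0
    for reliability in system_reliabilities:
        product *= reliability
    return product

# Ten hypothetical systems, each 98 percent reliable over a mission,
# yield only about 82 percent mission-level reliability.
print(round(mission_reliability([0.98] * 10), 3))  # 0.817
```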
Most Navy ships can accomplish several different missions, and accomplishing these missions usually requires a set of mechanical, electrical, combat, information technology, and other systems to work together. Each of these systems individually needs to be reliable, available, and maintainable in order for the ship as a whole to be sustainable. As such, addressing sustainment during the acquisition process is an effort that requires coordination and input from a variety of officials associated with the program, including the program manager, requirements officials, ship design managers, engineers, PSMs, and others. DOD and Navy Policies for Acquisition Programs DOD acquires new weapons, including ships, for its warfighters through a process described in two key acquisition policies: Department of Defense Directive 5000.01, which establishes the overarching framework for the Defense Acquisition System, and Department of Defense Instruction 5000.02, which implements the Defense Acquisition System. These policies provide management principles and detailed procedures, respectively, for the execution and management of defense acquisition programs. Specifically, these policies establish the phases of the acquisition process, key milestone decision points, required acquisition documentation, and roles and responsibilities for acquisition officials, among other things. Under this framework, shipbuilding programs move through several acquisition phases, including requirements setting, material solution analysis, technology development, ship design, ship construction, deployment, and sustainment. In order to proceed through the acquisition process, shipbuilding programs must be reviewed periodically at key decision points, called milestones, at which a Milestone Decision Authority assesses the program's progress in achieving its goals. These milestones typically coincide with significant increases in the resources devoted to the program. To ensure senior leadership is well-informed at these decision points, shipbuilding programs are generally required to create or update key acquisition documents for milestone reviews that contain information on the program's requirements, costs, and schedule, among other things. The Navy has also established its own acquisition policy and processes to supplement the DOD-wide acquisition policies and to oversee acquisitions managed internally to the Navy. The Navy's acquisition policy, Secretary of the Navy Instruction 5000.2, provides instructions for implementing the Defense Acquisition System within the Navy, as well as additional Navy-specific acquisition procedures. In particular, Navy acquisition policy establishes a series of seven Navy decision points throughout the acquisition process, called Gate reviews, which complement the DOD milestones. These Gate reviews are split into two phases that the Navy calls passes: the first is led by the CNO and focuses on requirements setting and the second is led by the Assistant Secretary of the Navy for Research, Development, and Acquisition (ASN (RD&A)) and focuses on acquisition. As programs move through the acquisition process, Navy leadership—comprised of officials from the acquisition, requirements, resources, and warfighting communities—convenes Gate reviews to conduct oversight and ensure programs are on track to achieve their acquisition and sustainment goals. Each Gate review has a different objective and list of topics that need to be included in the Gate briefing.
Lastly, DOD and Navy policy both allow for the Milestone Decision Authority to tailor the acquisition process outlined in these policies. Figure 1 depicts the acquisition process for Navy shipbuilding programs, as established by DOD and Navy acquisition policies. The Navy’s acquisition policy also details the acquisition responsibilities of key Navy officials, including the ASN (RD&A), the CNO, and program managers. The CNO and the ASN (RD&A) are key Navy leaders who chair the Gate review process and approve acquisition documents. For most shipbuilding programs, the ASN (RD&A) also serves as the decision authority to approve the advancement of these programs through the acquisition process at the appropriate milestones. Further, the policy enclosures delineate various elements of acquisition programs, such as systems engineering, testing, and sustainment planning. DOD and Navy acquisition policies both include requirements for shipbuilding programs to consider sustainment throughout the acquisition process. For instance, prior to Milestone A, DOD policy states that sustainment planning and considerations should inform the development of program requirements and early ship design decisions. As programs move into the design and construction phases, programs are to develop a comprehensive product support package and evaluate it through engineering reviews and other tests to ensure it is sufficient to meet sustainment requirements and affordability targets. The planning documents that comprise these support packages, such as life-cycle sustainment plans (LCSPs), are intended to set the foundation for how the fleet will sustain a class of ships. Statutory Changes That Have Increased Attention on Sustainment during the Acquisition Process In addition to the requirements set in DOD and Navy policies, Congress has passed laws related to increasing DOD and Navy attention on sustainment throughout the acquisition process. Chief among these is the creation of the role of the PSM. A PSM should develop and implement a comprehensive product support strategy for weapon systems, among other things. More recently, Congress has directed organizational changes related to DOD and Navy acquisition leaderships’ attention to sustainment. In response, the Navy has added a sustainment function to the ASN (RD&A)’s portfolio. The Navy implemented this direction in fiscal year 2020 with the appointment of a Deputy Assistant Secretary for Sustainment that reports to the ASN (RD&A). Congress has also established several requirements related to DOD and Navy management of acquisition programs’ O&S costs, sustainment planning, and sustainment reporting. For example, statute requires weapon system programs to consider, where appropriate, sustainment in key acquisition documents, such as acquisition strategies, designs, contracting, and cost estimates. Additionally, statute requires DOD to provide Congress with annual Selected Acquisition Reports that have sustainment and life-cycle cost information. Key Documents That Support Sustainment Planning during the Acquisition Process Shipbuilding programs are required to develop a suite of acquisition documents that provide information about the goals of the program and how the program office is developing and executing to these goals, pursuant to DOD and Navy acquisition policies. Many of these key acquisition documents contain information about the program’s sustainment requirements and plans, as discussed below. 
The Capability Development Document should define the program's operational requirements, including the program's key performance parameters. Key performance parameters are the most critical requirements a system must demonstrate to deliver an effective military capability. In 2007, DOD updated its requirements setting policy, called the Joint Capabilities Integration and Development System, to require all programs to establish key performance parameters for sustainment in response to concerns that acquisition programs were not adequately planning for sustainment. This requirement helps ensure that acquisition programs provide a weapon system to the warfighter with optimal availability and reliability at an affordable price. The sustainment key performance parameter is comprised of two measures—operational availability and materiel availability—which address the availability of the ship while in operations and under maintenance, respectively: Operational availability measures the probability that a system will be ready for use when expected. This requirement helps programs determine how reliable, maintainable, and supportable a system needs to be. Operational availability is also understood as the percentage of time a ship can perform its primary mission. Materiel availability measures the percentage of the total inventory of a system that is operationally capable based on materiel condition, which, for ship platforms, is the percentage of a ship class available for deployment. This metric helps programs determine how many ships to buy in order to meet planned deployment schedules. This requirement should inform decisions that could increase or decrease planned maintenance time for a shipbuilding program. According to DOD guidance, the operational and materiel availability requirements should be considered in tandem to produce ships that work as expected and are available when needed, as shown in figure 2 below. During the acquisition process, the operational availability requirement should inform decisions about how to best increase reliability for systems needed to meet the key performance parameter. To do this, engineers can, among other things: (1) design systems that require less frequent maintenance, (2) add redundancy to key systems, or (3) ensure that systems can be fixed quickly and cheaply. At the same time, the materiel availability requirement should inform how many ships are purchased based on the quantity needed to accomplish missions at any one time. It also informs acquisition decisions that could affect the length of maintenance availabilities, such as maintenance time needed to repair or replace key components.
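A minimal sketch of how the two availability measures described above might be computed follows. The formulas paraphrase the standard definitions given above (the percentage of time a ship can perform its primary mission, and the percentage of a ship class that is operationally capable); all values are hypothetical rather than drawn from any shipbuilding program.

```python
# Minimal sketch of the two availability measures described above,
# using standard DOD-style formulas; all values are hypothetical.

def operational_availability(uptime_hours, downtime_hours):
    """Share of time a ship can perform its primary mission."""
    return uptime_hours / (uptime_hours + downtime_hours)

def materiel_availability(operationally_capable_ships, total_ships):
    """Share of a ship class that is operationally capable."""
    return operationally_capable_ships / total_ships

# A ship that is mission capable for 7,000 of 8,760 hours in a year:
print(f"{operational_availability(7000, 1760):.2f}")  # 0.80
# A class of 10 ships with 3 undergoing depot maintenance:
print(f"{materiel_availability(7, 10):.2f}")  # 0.70
```

Considered in tandem, as DOD guidance intends, the first measure drives reliability and maintainability decisions for individual ships, while the second informs how many ships to buy and how much planned maintenance time a class can absorb.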
The Life-Cycle Cost Estimate should provide information on the total estimated cost to develop, build, deploy, sustain, and dispose of a ship class over its life cycle, regardless of funding source. The life-cycle cost estimate is based on program objectives and operational requirements for the ship class. It should reflect a realistic appraisal of the program's risks and the level of cost most likely to be realized. The life-cycle cost estimate includes O&S costs, which provide information on the estimated costs for crewing, operations, maintenance, sustaining support, continuing system improvements, and indirect support. The Acquisition Program Baseline (APB) is an overarching acquisition document that describes the shipbuilding program and presents the program's approved cost, schedule, and performance goals. The APB is a formal, binding agreement between the Milestone Decision Authority, Program Manager, and their acquisition chain of command to be used for tracking and reporting on the program. The Life-Cycle Sustainment Plan (LCSP) should document the program's product support strategy and governs planning for sustainment during the acquisition process, as well as the execution of sustainment activities after ships are delivered to the fleet. The LCSP describes the efforts necessary to develop and integrate sustainment requirements into the ship's development, design, procurement, testing, fielding, and operation. It also lists the activities necessary for the shipbuilding program to develop, implement, and deliver a product support package that maintains affordable operational effectiveness over the ship's life cycle. For example, the LCSP should contain information on sustainment engineering, O&S cost estimates and affordability constraints, reliability analysis, and sustainment contracts, among other things. The Independent Logistics Assessment (ILA) should be an impartial analysis of a program's sustainment planning and execution to determine its ability to meet established performance and sustainment requirements. The ILA is intended to assess the adequacy of the program's product support strategy, product support risks that are likely to drive future O&S costs, changes to system design that could reduce O&S costs, and effective strategies for managing O&S costs. According to DOD guidance, programs should use the results of the ILA to improve sustainment outcomes. Acquisition and Sustainment Stakeholders for Shipbuilding Programs There are a large number of Navy stakeholders involved in the effort to design, build, and support a ship class over its life cycle. In general, the acquisition community is led by the ASN (RD&A), while the operations and sustainment community is led by the CNO. Naval Sea Systems Command (NAVSEA) provides support to both the acquisition and sustainment communities and is comprised of experts across multiple disciplines. Figure 3 provides more information on the various acquisition and sustainment stakeholders that support the Navy's ship classes. The ASN (RD&A) acts as the Navy Service Acquisition Executive and oversees the Navy's shipbuilding program offices. Program Executive Offices are responsible for the life cycle management of their assigned programs. The program executive office is led by a program executive officer who, according to DOD's updated acquisition policy, should balance the risk, cost, schedule, performance, interoperability, sustainability, and affordability of a portfolio of acquisition programs and deliver an integrated suite of mission capability to users. For ships, there is a shipbuilding program office that is responsible for acquiring ships and an in-service program office that supports ships in sustainment. In some cases, these program offices are located within the same program executive office while, in other cases, these offices are split between different Navy organizations (typically the program executive office and NAVSEA). As such, the Navy's shipbuilding programs and some program executive offices do not have responsibility for ship programs throughout their life cycle. The shipbuilding program offices manage their assigned shipbuilding programs through program initiation, technology development, ship design, construction, testing, and delivery.
Acquisition program managers lead shipbuilding program offices and are responsible for the management of a program. Acquisition policies delineate a number of sustainment-related responsibilities for acquisition program managers, such as: developing and implementing an LCSP to inform acquisition and sustainment phases of the program; developing strategies for managing intellectual property; using systems engineering to identify tradeoffs between life-cycle costs and performance requirements during design and construction; implementing a comprehensive reliability and maintainability engineering program; developing an obsolescence management plan; monitoring the program's performance against its sustainment requirements and developing strategies to improve operational availability, O&S affordability, maintainability, and reliability, as necessary; and working with a PSM, among other things. Product Support Managers (PSMs) generally work with the acquisition program manager and are tasked with ensuring that a comprehensive product support strategy is developed and implemented for their assigned programs. The CNO is the senior military officer of the Department of the Navy, overseeing the Navy's fleet and NAVSEA, among other organizations. The CNO also has acquisition responsibilities, such as approving a shipbuilding program's requirements and determining whether to accept delivery of ships from the shipbuilders. The Office of the Chief of Naval Operations (OPNAV) is a collection of offices under the purview of the CNO responsible for various functions necessary for the operation of the Navy. For example, there are divisions within OPNAV that manage the Navy's budget, logistics, and requirements setting process, among other things. The operational fleet forces (fleet) of the Navy, including operational units and type commands, assume full financial responsibility for operating and maintaining ships. Naval Supply Systems Command provides supply and services support to the Navy by managing supply chains and inventory for Navy aircraft, surface ships, submarines, associated weapons systems, and non-nuclear ordnance stockpiles. NAVSEA is responsible for providing expertise in designing, engineering, building, buying, and maintaining ships, submarines, and combat systems to meet the fleet's operational requirements. NAVSEA is comprised of directorates and warfare centers that specialize in these areas of expertise. NAVSEA reports to the CNO, but also supports the shipbuilding program offices, and is organized by the following specialties, among others: Naval Sea Systems Command Engineering Directorate (NAVSEA 05) is an engineering command that is comprised of cost estimators, ship designers, systems engineers, and other technical experts. Among other things, this office is responsible for the development of life-cycle cost estimates and systems engineering for ships. Naval Sea Systems Command Acquisition and Commonality Directorate (NAVSEA 06) is a command that brings together personnel dedicated to bridging communication gaps between government and industry, in order to enable cost reductions and commonality throughout the acquisition life cycle. Among other things, this office leads the Navy's ILA process. Naval Warfare Centers are a group of centers that offer services on a fee-for-service basis, including: obsolescence mitigation, in-service engineering, and data analysis, among many other tasks.
Navy Spends Billions to Fix New Ships That Are More Difficult and Costly to Sustain than Shipbuilding Programs Initially Planned

Shipbuilding program officials did not identify or mitigate sustainment risks during the acquisition process that subsequently resulted in significant and costly problems for the fleet. During the course of our review, the fleet identified 150 problems that affected multiple ships in a class. These problems resulted in more effort and cost for the fleet in sustainment than expected. In particular, we estimated that the Navy's fleet has spent or is planning to spend at least $4.2 billion to mitigate and correct approximately 30 percent of these problems beyond what was planned for during the acquisition process. We could not quantify the cost impact of the remaining 70 percent of problems because the Navy was unable to provide data on the cost to correct them. Examples from the SSN 774, LPD 17 Flight I, and LHD 8/LHA 6 ship classes illustrate how shipbuilding program officials did not identify and mitigate sustainment risks during the acquisition process, which resulted in significant and costly maintenance paid for by the fleet once realized.

The Fleet Identified over a Hundred Problems with New Ships That Required More Maintenance Effort than Planned for During Acquisition

The fleet identified 150 sustainment problems affecting multiple ships in a class that required more sustainment effort than planned for during acquisition, which we verified through Navy data and documentation. Officials in the fleet, such as operators, maintainers, and engineers, reported these problems to us as major class-wide problems requiring more sustainment effort than planned. These problems manifested after ships were delivered, and most of these problems have yet to be resolved. Where data were available on the cost to correct the problems, we estimated that the fleet paid at least $4.2 billion and had to perform more onerous maintenance than planned. These problems stemmed from shipbuilding program officials not identifying or mitigating sustainment risks in sustainment planning during the acquisition process, before ships were delivered to the fleet. Figure 4 summarizes the number of problems among multiple ships in the same class that required more sustainment effort than the shipbuilding programs had planned. It also reflects the costs associated with fixing these problems for the 30 percent of the problems where we could identify these costs based on available data. According to fleet leadership, these problems contribute to the fleet's inability to maintain ships at planned cost and schedule, which we have previously found is a significant Navy-wide issue. In part to accommodate this extra effort, the Navy has experienced maintenance delays and has had to defer planned maintenance for ships in operation that the fleet determined was not as urgent as other maintenance needs. For instance, these problems have contributed to: nearly 5,300 total days of delays to planned maintenance availabilities since 2012 on ships built during the last 10 years, new ships deferring planned maintenance, and insufficient funding to meet maintenance needs. To generate the list of 150 problems, we interviewed operators and maintainers for the shipbuilding programs in our review and asked them to discuss problems that occurred across multiple ships in the same class. We then verified these problems with available Navy data on system reliability and equipment failures.
The list of problems only includes those that stemmed from risks that were not identified, evaluated, or mitigated by the shipbuilding program offices in their sustainment planning during the acquisition process. The list does not include problems that can be attributed to normal wear and tear or problems caused by sailor error. The estimate of $4.2 billion in additional costs to address these problems includes the fleet's cost to correct or mitigate problems, but excludes costs associated with day-to-day maintenance that the fleet must perform. If the Navy's fleet chooses to correct a problem, it typically requires the Navy to replace systems on ships that have already been delivered to the fleet or are under contract, which can be a costly undertaking. According to fleet maintenance officials, if a permanent correction is not implemented, the Navy's operators and maintainers typically have to incorporate a more onerous maintenance approach than expected. The costs of this more onerous day-to-day maintenance are hard to quantify using available Navy data. For example, the Navy used a brand new toilet and sewage system on the CVN 77 and 78, similar to what is on a commercial aircraft, but increased in scale for a crew of over 4,000 people. To address unexpected and frequent clogging of the system, the Navy has determined that it needs to acid flush the CVN 77 and 78's sewage system on a regular basis, which is an unplanned maintenance action for the entire service life of the ship. According to fleet maintenance officials, while each acid flush costs about $400,000, the Navy has yet to determine how often and for how many ships this action will need to be repeated, making the full cost impact difficult to quantify. We generally did not include these types of ongoing costs in our calculation. In our cost calculation, we also excluded costs associated with adding sailors to ships to address maintenance gaps because sailors have been added for many reasons, making it difficult to isolate the money spent on sailors to address equipment problems. For instance, we omitted the $225 million that the fleet plans to spend to add sailors to LCS class ships, even though the Navy is taking this action, in part, to ensure that the ship's crew can perform necessary maintenance.

The Fleet Experienced Problems as a Result of Risks That Were Not Identified, Evaluated, or Mitigated in Shipbuilding Programs' Sustainment Planning

We determined that the 150 problems identified by the fleet generally fall into three categories: (1) problems maintaining commercial equipment on ships, (2) ship design that did not effectively consider maintainability, and (3) untested sustainment assumptions that turned out to be incorrect after ships were delivered to the fleet. We found that nearly all Navy shipbuilding programs we reviewed experienced problems in each of these three categories, as shown in figure 5. The following examples illustrate each of the three categories of problems:

Problems maintaining commercial equipment on ships. Dozens of primarily commercial systems on multiple SSN 774 class submarines are experiencing unexpected failures. During the acquisition process, the Navy based sustainment planning decisions on the assumption that these parts would last for the life of the submarine without the need for any maintenance.
According to officials, the Navy did not verify these assumptions, and now at least 16 of these systems require scheduled maintenance and several more systems need periodic updates that were not previously planned. As a result, as we have previously found, operators and maintainers have had difficulty obtaining the spare parts and accomplishing this maintenance within resource constraints.

Ship designs that did not effectively consider maintainability. The Navy used a new design for CVN 77's stores elevators, which are used to move provisions between decks. However, among other issues, the elevators are too small to fit a standard-sized pallet jack. Thus, provisions cannot be loaded or unloaded with a pallet jack or a forklift and must be manually unpacked and stacked by hand onto the elevator. Unloading is further complicated, according to the ship's crew, because the elevator doors are so small that the average sailor cannot stand up as they enter and exit the elevator. The fleet has mitigated a few of these problems, but a redesign of the elevator would be necessary to fit standard pallets and fully resolve the other problems.

Untested sustainment assumptions that turned out to be incorrect after ships were delivered to the fleet. The Navy had originally planned to use a contractor to conduct the majority of LCS maintenance. However, the fleet determined that a heavy reliance on contracted support is inefficient for maintaining and sustaining the LCS and is in the process of establishing maintenance teams comprised of Navy personnel. Since the program planned to use contractor support, LCS shipbuilding program officials stated that they did not purchase the technical documentation necessary to maintain the commercial equipment used on the ship. As a result, fleet engineers told us that they are now attempting to buy and develop the necessary maintenance data, which adds cost and complexity to the maintenance process. The following section highlights four of the 150 problems identified by the fleet. Other examples from among these 150 issues are discussed throughout the report when appropriate to illustrate how the acquisition process contributed to sustainment problems. A full listing of the 150 problems is in a version of this report that is for official use only.

Example One—LPD 17 Flight I Titanium Piping

In an effort to improve sustainment of the LPD 17 class ships, the Navy decided to install titanium piping to carry seawater for firefighting and to cool machinery instead of copper-nickel piping because of its lighter weight and increased durability. However, instead of saving effort in sustainment, these pipes required more maintenance effort than planned and, in many cases, eventually had to be replaced. Early in the acquisition process, the Navy studied this decision and discovered that unlike copper-nickel piping, titanium piping carrying seawater is susceptible to "biofouling"—meaning sea life such as shellfish grow inside the pipes—as shown in figure 6. To prevent biofouling, Navy engineers determined that a chlorination system—which adds chlorine to seawater entering the ship in order to kill biological material in the water—and a dechlorination system—which removes the chlorine before the seawater is dumped from the ship—would be needed, and included specifications for the shipbuilder to install these systems.
Then, according to shipbuilding program officials, after the ship was on contract, the shipbuilder reported to the Navy that it could not find suitable chlorination and dechlorination systems. The program office decided to proceed with ship construction absent these systems and evaluate the extent of the biofouling problem after ship delivery. We reviewed the LPD 17 program's sustainment planning documents and found that a discussion of this sustainment risk was not included in any of the maintenance planning documents, and, according to the fleet, this risk was not communicated to the Navy's maintenance organizations. In July 2009, about one year after the lead ship was provided to the fleet, Navy operators and maintainers began to notice biofouling in the piping system. Biofouling degraded the functionality of a number of other systems on the ship that depend on the water delivered by the piping system, resulting in overheating of main and ship service engines and loss of electric power generation, among other problems. To address these and related issues across the LPD 17 class, the Navy's fleet spent at least $250 million to: (1) buy and install new copper-nickel piping that is now costlier, heavier, and not as durable as titanium; (2) install chlorination systems that were later found to be unreliable, requiring significant maintenance; and (3) conduct unplanned maintenance and replace systems that broke due to shellfish contamination, among other interventions.

Example Two—SSN 774 Special Hull Treatment

The Navy's attack submarines utilize a special covering on the hull. However, as shown in figure 7, portions of the hull covering have debonded from the hull, resulting in additional maintenance requirements during scheduled availabilities. Shipbuilding program officials told us that, during the acquisition process, they did not analyze how long the special hull treatment would last even though it is a critical technology. According to the program office, they have now identified the root cause and have continuously conducted engineering analysis to monitor and improve the material and construction processes. Due to the 5- to 6-year process of building a submarine, the time from identification to proven success can be 8 to 10 years, which is a long time to wait to know if a potential solution works in operations. However, in the meantime, the shipbuilding program has continued to deliver submarines to the fleet without knowing how long the special hull treatment will adhere to the vessel. As a result, maintainers cannot effectively plan for special hull treatment replacements in advance and, instead, are replacing material as needed. Performing timely and necessary maintenance is further complicated because it takes up to two years to receive this material after the Navy orders it from the manufacturer. Currently, Navy maintainers are budgeting $735 million to address the missing hull treatment on 11 of the 14 submarines constructed prior to implementing the potential solution.

Example Three—LHD 8 and LHA 6 Machinery Control System

To enable reduced crew sizes and sustainment costs, the Navy chose to use an automated machinery control system on LHD 8 and LHA 6. Sailors describe the machinery control system as a vital software-based system that controls the operation of 92 percent of shipboard systems. The Navy initially sought to purchase a highly-automated commercial system that would perform tasks previously completed by the ship's crew.
However, according to the shipbuilding program, during the acquisition process, it did not verify reliability testing conducted by the manufacturer of the system. At the end of the shipbuilding process, the Navy discovered that this system required more maintenance and sustainment effort than planned. Specifically, the Navy's Board of Inspection and Survey—the organization that inspects ships prior to delivery—discovered problems with this system on LHD 8 in March 2009. The Board of Inspection and Survey identified false alarms and a lack of technical documentation as serious defects. Specifically, the test report found that the system's spurious and numerous alarms created an environment whereby the ship's sailors would be conditioned to ignore alarms and that more sailors would be needed to monitor the ship's systems. Nevertheless, the CNO decided to take delivery of the ship, and the shipbuilding program did not correct these problems prior to providing the ship to the fleet. Additional problems emerged on LHD 8's first deployment, such as overheating that led to failure of the electrical distribution system, resulting in loss of power on multiple occasions. However, the technical data provided by the manufacturer, according to Navy engineers, were insufficient for the sailors to operate, troubleshoot, and repair the system. Further, according to ship engineers and the shipbuilding program, 9 of 28 critical components within the machinery control system hardware were obsolete when LHA 6 was delivered to the Navy. As a result, fleet officials told us that it has been difficult to obtain replacement parts. The Navy has spent over $90 million to repair the software and replace key components of the system on LHD 8, LHA 6, and LHA 7.

Example Four—LPD 17 Flight I Knuckleboom Crane

The LPD 17 Flight I knuckleboom crane carries boats and cargo (such as ammunition) from the ship to the water and back again, and is pictured below in figure 8. However, according to Navy reliability data, this system only works 30 percent of the time it is supposed to and has been difficult for sailors to use and maintain since the lead ship was delivered in 2005, nearly 15 years ago. There are a number of challenges in sustaining this crane that the Navy did not identify or sufficiently mitigate during the acquisition process. For example, the fleet does not have the necessary technical data to operate and fix the system, spare parts can be difficult to find or take many months to obtain, and pieces of the system are obsolete. According to fleet officials who use the data, the shipbuilding program office did not acquire sufficient technical data nor conduct sustainment planning for this large and complicated crane primarily because they planned to contract for the maintenance of the entire ship, including this system. The Navy subsequently discovered that contracting for the maintenance of the whole ship was cost prohibitive, and maintenance responsibility was transferred back to the Navy. However, because there had not been adequate sustainment planning, the fleet did not have necessary resources, such as technical data, to effectively maintain the system. Additionally, as the fleet has been developing the capacity to maintain this crane, the shipbuilding program office continues to accept cranes with unmitigated risks, leading to unplanned fleet effort.
For example, across the eleven LPD 17 Flight I ships that have been delivered, there are four different versions of the crane, which further complicates maintainability because it increases the types of spare parts needed and the knowledge required of the sailors to fix the system. Specifically, officials stated that sailors who learned to maintain a crane on one ship cannot transfer all of their knowledge to other ships in the class. Due to the numerous sustainment challenges the fleet has experienced with this crane on LPD 17 Flight I, LPD 17 program officials told us that the Navy has since revised its new construction crane requirements for LPD 17 Flight II. According to the shipbuilding program office, these requirements allow the shipbuilder to use a more standard crane, which will be easier to sustain. While we could not calculate the full added costs of maintaining this crane, we found that the Navy has spent over $10 million on the following actions: (1) contracting with the original equipment manufacturer for repairs, (2) replacing key components of the system, and (3) making changes to improve the system.

DOD Policy for Shipbuilding Sustainment Requirements Results in Inadequate Information for Acquisition Decisions and Reporting That Is Misleading

DOD policy that the Navy uses to set sustainment requirements does not capture factors that affect whether ships are reliable and maintainable. This results in shipbuilding programs having ineffective sustainment requirements that do not support sound acquisition decisions. When sustainment requirements are used to inform acquisition decisions, they can help ensure that shipbuilding programs design and build reliable ships that can be effectively sustained within planned costs. The effectiveness of a shipbuilding program's sustainment requirements depends on how the requirements are set, used, and reported.

Setting the sustainment requirements. We found that weaknesses with specific portions of DOD's requirements policy resulted in the Navy setting sustainment requirements that are poorly defined and not representative of the availability of the ship during operations and sustainment.

Using the sustainment requirements. To achieve the requirements, shipbuilding programs need to incorporate the requirements into decisions made throughout the acquisition process, such as developing the ship design. Due to problems setting the requirements, shipbuilding programs cannot incorporate the sustainment requirements into acquisition decisions.

Reporting on the sustainment requirements. Statute requires that programs report on the status of these requirements on a regular basis. However, the Navy's reporting on these requirements is misleading because it is based on the Navy's deficient sustainment requirements and it does not reflect the fleet's experience.

Navy Ship Sustainment Requirements Reflect Weaknesses with How DOD Policy Defines Requirements for Ships

The Navy sets sustainment requirements based on definitions for ships established by DOD policy, called the Joint Capabilities Integration and Development System (JCIDS), but the shipbuilding programs' requirements are not robust even when they follow DOD policy. This is because the definitions for ship sustainment requirements in DOD requirements setting policy do not capture all factors that reduce the ability of ships to achieve their missions.
For example, the definitions of operational and materiel availability in this policy exclude key factors and failures that reduce ship availability, such as catastrophic failures of mission-critical systems and unplanned maintenance. DOD policy states that the purpose of sustainment requirements is to ensure ships work when expected and are available when needed. But because the definitions of these requirements for ships do not capture all factors that can influence operational or materiel availability, the specific definitions for setting sustainment requirements for ships do not support the achievement of this goal. DOD's requirements setting policy has designated these metrics as key performance parameters since 2007, which means that they are among a small number of mandated critical requirements that a weapon system must demonstrate. Without a definition for ship sustainment requirements in DOD policy that accounts for all factors that make Navy ships unavailable for operations, Navy shipbuilding programs cannot reasonably ensure that they are setting sustainment requirements that will result in reliable, maintainable, and available ships. In 2015, DOD added guidance to its policy that instructed shipbuilding programs to establish operational and materiel availability requirements based on the extent to which ships are expected to experience major failures, referred to as category 4 casualty reports. The fleet writes casualty reports when there are significant equipment failures that contribute to the ship's inability to perform its missions. There are three categories of casualty reports (2, 3, and 4), with category 4 being the most severe. According to Navy guidance, category 3 and 4 casualty reports indicate degradation to critical mission capability that needs immediate repair, while category 2 reports contain failures that are important to the fleet but do not affect the ship's core missions. In particular, DOD policy was updated to define operational and materiel availability for ships as follows:

Operational availability (work when expected) is the percentage of time an operationally deployed ship is not in a category 4 casualty report state over a given operating period. The Navy typically sets this requirement at approximately 80 percent for shipbuilding programs.

Materiel availability (ready when needed) is the portion of a ship class available for tasking. Ships are typically not available for tasking when they are in a planned maintenance availability or have an open category 4 casualty report.

The Navy followed DOD requirements setting policy by establishing these key performance parameters for the four shipbuilding programs we reviewed that established requirements since fiscal year 2015—SSBN 826, FFG(X), DDG 51 Flight III, and LPD 17 Flight II. Prior to 2015, there were no ship-specific definitions in DOD requirements setting policy. Shipbuilding programs that set requirements prior to 2015 have generally adapted the definitions in JCIDS for calculating and reporting operational and materiel availability, which is why we include examples from these programs as appropriate. The following two sections discuss shortfalls with DOD's policy for setting sustainment requirements for Navy shipbuilding programs.
Setting Operational Availability Requirements for Shipbuilding Programs

DOD's definition of operational availability for ships in its policy is problematic because it defines operational availability: (1) using category 4 casualty reports and (2) for the entire ship with a single metric. As a result, the operational availability requirement does not capture all critical failures that reduce a ship's ability to perform mission-critical tasks.

Category 4 casualty reports. DOD's operational availability definition for ships counts only the most severe casualty reports—category 4. The definition excludes category 3 casualty reports, which also represent a severe degradation to the Navy's primary missions. According to several fleet officials, category 4 casualty reports are typically used only in rare instances when the entire ship is out of commission. Fleet officials added that category 3 casualty reports can also represent severe mission-critical casualties that affect the ability of the ship to perform primary missions. In addition, the Navy's categorization of casualty reports tends to be subjective or based on factors other than the severity of the defect, such as, according to maintenance officials, communicating a maintenance priority. In other words, there are additional deficiencies that could be mission-critical that may not be captured by category 3 or 4 casualty reports. Of the 11 ship classes in our review, six have delivered ships and have casualty report data available. We reviewed Navy casualty report data for 18 ships from these six ship classes and found that all of these ships had near-perfect operational availability when using only category 4 casualty reports. However, when we calculated operational availability using category 3 casualty reports, we found that 14 of these 18 ships fell short of their operational availability targets. Table 2 summarizes the category 3 and 4 casualty reports during two LCS missions as an example of how major failures are captured as category 3, and not category 4, equipment casualties. Therefore, by using category 4 casualty reports to define operational availability, the Navy is developing a requirement that does not accurately account for all ship failures that affect whether or not a ship works as expected.

Setting operational availability at a whole ship level. DOD requirements setting policy specifies that shipbuilding programs should establish a single metric for the entire ship. However, when set at the ship level, the operational availability requirement is not effective at capturing the probability of whether or not a ship and its systems will work as expected. This is because ships are comprised of hundreds of systems that are of varying importance to achieving missions. For example, a ship may have an air-defense mission that requires a select group of systems—such as an air-search radar and a missile system—to work together to achieve the mission. However, a ship-level requirement is set using a single metric for the entire ship, which does not account for the fact that some systems are critical to achieving a ship's primary missions while some systems are not as critical. Further, a ship-level requirement is difficult to calculate. According to a Naval Sea Systems Command operational availability manual, it is improbable that the operational availability of hundreds of complex systems within a ship can be accurately calculated and represented in a ship-level requirement.
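To illustrate how much the casualty-report threshold matters, consider a simplified operational availability calculation. The numbers below are hypothetical and are included only to show the mechanics of the metric, not actual ship data; operational availability is the share of the operating period during which the ship is not in a reportable casualty state.

```latex
% Operational availability over an operating period:
%   A_o = (operating time - casualty downtime) / operating time
% Hypothetical 180-day deployment: 2 days of category 4 downtime
% and 40 days of category 3 downtime.
\[
  A_o^{\mathrm{cat\ 4\ only}} = \frac{180 - 2}{180} \approx 98.9\%,
  \qquad
  A_o^{\mathrm{cat\ 3\ and\ 4}} = \frac{180 - (2 + 40)}{180} \approx 76.7\%
\]
```

Under the first calculation, the hypothetical ship comfortably exceeds a typical 80 percent target; under the second, the same ship falls well short of it, consistent with the pattern we observed for 14 of the 18 ships.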
Figure 9 below illustrates how setting requirements pursuant to DOD requirements setting policy resulted in an operational availability requirement for the FFG(X) program that the fleet considers unacceptable. According to Navy handbooks and manuals on using operational availability during ship design, the operational availability requirement is a more effective input for acquisition decisions when it is set at the mission level. Since ships have multiple missions, this would result in multiple operational availability requirements instead of a single ship-level requirement. The Navy's operational availability handbooks and manuals endorse this approach because a mission-level requirement is focused on a smaller group of systems that support the mission and, therefore, allows the Navy to prioritize availability for these key systems. Setting operational availability requirements by mission area would provide shipbuilding programs with information about how to identify and prioritize key systems for additional reliability analysis or sustainment planning to ensure that they will be sufficiently available to meet mission needs. Also, even though this would likely result in several operational availability requirements for each ship class, it would simplify the calculation of these requirements, which could make them more helpful inputs for acquisition decisions.

Setting Materiel Availability Requirements for Shipbuilding Programs

We found that DOD's definition of materiel availability for Navy ship classes in its requirements setting policy does not ensure that ships will be ready when needed—the purpose of the materiel availability requirement. This is because DOD requirements setting policy for ships does not specifically account for other factors that affect materiel availability—such as unplanned maintenance, unplanned losses, and training—during which times ships may not be available for operations.

Unplanned maintenance. Unplanned maintenance can occur when planned ship maintenance lasts longer than expected or a mission-critical failure occurs during deployment that needs immediate attention. As our prior work has found, Navy ships experience significant levels of unplanned maintenance. For example, from fiscal year 2012 through fiscal year 2018, the Navy reported over 3,900 days of unplanned maintenance across the ships we reviewed.

Unplanned losses. Unplanned losses are instances when a ship is out of commission for an extended length of time due to severe damage or was not prioritized for maintenance. For example, we have previously reported that due to heavy shipyard workload, some submarines are waiting significantly longer than planned—in some cases several months or years—to enter maintenance periods.

Training. The Navy also conducts several training periods, and the DOD requirements setting policy does not address whether or not a ship is considered available or unavailable during these training periods.

Six of the 11 shipbuilding programs we reviewed developed their program requirements since DOD made sustainment requirements mandatory in 2007. One of these six programs—LHA 6—did not establish a materiel availability requirement as required by DOD requirements setting policy. LHA program officials told us that materiel availability does not apply to ships, a view that is not reflected in DOD requirements setting policy.
Four shipbuilding programs—DDG 51 Flight III, LPD 17 Flight II, FFG(X), and LCS—developed materiel availability requirements that generally align with DOD's requirements setting policy and, as such, do not specifically account for unplanned maintenance, unplanned losses, and training. The remaining shipbuilding program—SSBN 826—went above and beyond DOD requirements setting policy by incorporating these additional areas that could affect materiel availability. Program officials stated that sustainment requirements are more critical to achieving the SSBN 826's missions than they are for other shipbuilding programs. However, DOD and Navy guidance clearly state that materiel availability is a mandatory critical requirement for all programs. Since DOD's definition for materiel availability does not include all factors that could result in a ship being unavailable for operations, shipbuilding programs cannot ensure that ships will be ready when needed.

Sustainment Requirements Are Inadequate to Support Well-Informed Decisions during the Acquisition Process

Because of how DOD policy defines sustainment requirements for ships, these requirements do not provide the information needed to support acquisition decisions. In particular, the Navy's sustainment requirements developed according to DOD policy rarely provide adequate information about how reliable, available, and maintainable ships need to be, which is necessary to support well-informed decisions pertaining to ship concept development, design, and construction. For example, during the acquisition process, shipbuilding program offices make decisions that transform top-level requirements—like operational and materiel availability—into detailed, low-level requirements that can be achieved with available resources. We found that ongoing and new shipbuilding programs continue to make acquisition decisions that influence sustainment without the information that could be provided by better-defined sustainment requirements. Since shipbuilding programs cannot use these requirements to inform acquisition decisions, they cannot ensure that ships will be sufficiently reliable and available. The following two sections discuss the Navy's issues with using sustainment requirements when making acquisition decisions for its shipbuilding programs.

Using Operational Availability Requirements in Acquisition Decisions

The Navy's operational availability requirements for ships—which follow the DOD policy discussed above—do not provide adequate information to support acquisition decisions that affect whether or not ships are reliable enough to meet their missions. For example, in January 2020, we found that engineers can use a variety of activities when designing weapon systems to increase reliability to meet requirements, such as conducting failure analysis and adding redundant systems. In order for these engineering decisions to be successful, the requirements that inform the process must be firm, well-defined, feasible, and affordable. However, when the operational availability requirements do not adequately describe the needed reliability and maintainability for key systems—as is the case for most of the shipbuilding programs we reviewed—Navy engineers cannot ensure that the ship's design supports the program's top-level operational availability requirement. Further, they cannot identify aspects of the design that could put the requirement at risk.
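The engineering stakes of well-defined availability requirements can be expressed with standard reliability relationships. As a simplified sketch, with hypothetical availability values rather than Navy figures, a mission that requires several systems to work in series is only as available as the product of those systems' availabilities, while adding a redundant unit sharply raises availability:

```latex
% Mission-level availability when n systems are required in series:
%   A_mission = A_1 * A_2 * ... * A_n
% e.g., three systems, each available 95 percent of the time:
\[
  A_{\mathrm{mission}} = 0.95^{3} \approx 0.857
\]
% Adding a redundant (parallel) unit to one 95-percent-available system:
\[
  A_{\mathrm{pair}} = 1 - (1 - 0.95)^{2} = 0.9975
\]
```

Relationships like these are one reason mission-level requirements give engineers actionable targets: they identify which small group of systems must be made more reliable, or given redundancy, for the mission-level availability to be met.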
Instead of using the operational availability requirement to inform decisions across all key ship systems, Navy ship engineers told us that they interpret the requirement to only apply to catastrophic failures that put the entire ship out of commission. Therefore, in practice, shipbuilding program officials told us that they only apply this requirement to systems that the ship needs to get underway, such as the main engines and propellers. As such, shipbuilding programs are making engineering decisions during the acquisition process for many mission-critical systems, such as radars, weapons, and systems necessary for launching and recovering aircraft, without understanding how often these systems need to work to achieve key missions. This means the operational availability requirement only applies to the bare minimum of ship systems needed to get underway rather than the full complement of systems needed to meet the ship's missions. For instance, LPD 17 Flight I ships can often sail away and are considered operationally available even as key systems—such as the knuckleboom crane, davit, air conditioning, and potable water systems, among others—work less than 75 percent of the time the ship is at sea, according to fleet databases that track system failures. By interpreting the requirement to only focus on systems needed to move the ship and not accounting for other mission-critical systems, shipbuilding programs cannot ensure that all critical systems needed to meet missions will work as expected. In addition, since shipbuilding programs have a ship-level operational availability requirement and interpret this requirement to focus on systems needed to get ships underway, they have not consistently leveraged available data on various ship systems when making engineering and ship design decisions. Navy sustainment experts told us that shipbuilding programs rarely use data on the actual availability of ship systems. If the requirement were set at the mission level and focused on key systems, the data could show whether or not planned systems, already operating in the fleet, are available enough to meet requirements. Then, if the data show that these systems are not sufficiently available, shipbuilding programs could make investments in improving availability, such as improving supply support, making the system more reliable, or adding redundancy. Since shipbuilding programs cannot use operational availability requirements to make informed acquisition decisions, they are at risk of continuing to deliver ships to the fleet that are not as reliable and sustainable as needed.

Using Materiel Availability Requirements in Acquisition Decisions

Of the five shipbuilding programs we reviewed that had established materiel availability requirements, we found that only one program has a requirement that provides adequate information for acquisition decisions. In particular, the SSBN 826 program's materiel availability requirement has been a key input in establishing the submarine class' planned maintenance schedules and procedures. Shipbuilding program officials told us they are using the maintenance period length determined by the materiel availability requirement to inform acquisition decisions—such as adjusting the submarine's design to facilitate timely maintenance. For instance, the SSBN 826 shipbuilding program assessed the potential effect of new technology on the amount of maintenance that the submarine is planned to undergo.
In doing so, the shipbuilding program officials believe that, if the new technology works as planned, the SSBN 826 class will meet the same presence requirement as its antecedent class with two fewer submarines. While this concept is a good example of how materiel availability can be used during the acquisition process, it is too early to know if the Navy's plan will work for this class of submarines, and fleet officials told us that they have doubts that the Navy can achieve this goal as planned. Officials from other shipbuilding program offices told us that they are not using the materiel availability requirement to inform maintenance decisions. Further, according to these shipbuilding program offices, the materiel availability requirements do not connect with the ship class' planned maintenance schedules and, therefore, they do not make decisions to ensure that planned maintenance can be achieved within specific time frames. Program officials from several of these programs stated that the materiel availability requirement is not critical to performance goals and, as such, achieving this requirement is not a priority. Without improving how the Navy defines and uses materiel availability requirements, shipbuilding programs are missing opportunities to make informed acquisition decisions about how ships are maintained and, therefore, cannot ensure that ships are available for operations when needed.

Reported Sustainment Requirements Do Not Reflect the Fleet's Experience

The Navy's reports to Congress are misleading because they do not reflect all of the failures and factors that reduce ship operational and materiel availability once ships are in the fleet. Shipbuilding programs report all key requirements in their Selected Acquisition Reports to Congress, including operational and materiel availability. According to DOD guidance for executing Selected Acquisition Reports, DOD program offices should provide accurate information to Congress to aid in determining if the program is meeting its key requirements. We reviewed the December 2018 Selected Acquisition Reports for the five shipbuilding programs that reported one or both of these sustainment requirements to Congress. We found that the Navy reported that these shipbuilding programs were meeting or surpassing their sustainment requirements. However, based on our analysis of data on mission-critical failures after ships were delivered, we found failures that would prevent these ships from conducting critical missions. Hence, the Navy's reports to Congress do not reflect the actual availability of ships in the fleet. As a result, Congress does not have full insight into whether shipbuilding programs are on track to meet their operational and materiel availability requirements. The following two sections further discuss the Navy's issues with reporting sustainment requirements for its shipbuilding programs.

Reporting on Operational Availability Requirements to Congress

We found that three out of seven shipbuilding programs report on operational availability in their Selected Acquisition Reports. These three programs all stated that they were meeting or exceeding their requirements, but these reports often did not match the fleet's experience. For example:

For one vessel class, the Navy reported that it was exceeding its operational availability goal by over 10 percent. At the same time, however, several mission-critical systems are unreliable. Officials from the fleet stated that critical ship equipment is consistently failing.
The Navy is reporting that another ship class—which has yet to finish construction—is exceeding its operational availability target by 5 percent. This ship class has already experienced several catastrophic failures that limit its ability to conduct primary missions during its limited at-sea periods.

These examples demonstrate how reporting based on a ship-level operational availability requirement does not provide insight into reliability and maintainability problems that the fleet is experiencing and that prevent ships from meeting missions. Consequently, Congress is not receiving accurate information on the results of its investments and the sustainment problems the fleet is experiencing.

Reporting on Materiel Availability Requirements to Congress

We found that two of the Navy's shipbuilding programs we reviewed currently report materiel availability in Selected Acquisition Reports to Congress. One other shipbuilding program that has materiel availability as a key requirement in its approved baseline does not report this requirement, contrary to DOD guidance. For example, the LCS shipbuilding program indicates that it is meeting the requirement despite evidence of issues with materiel availability. The Navy's Selected Acquisition Report for the LCS states that the program is meeting its materiel availability requirement even though internal DOD reports state that the LCS' materiel availability is significantly below its requirement. Further, fleet officials stated they are worried the maintenance workload required for the LCS class ships may result in additional unplanned maintenance delays that further reduce materiel availability. The Navy has also chosen to take steps that will reduce the materiel availability of the ship class throughout its service life, such as assigning the first four ships as test ships, making one of every four LCS a training ship on a rotating basis, and increasing planned maintenance days, among other things. Since several of the Navy's shipbuilding programs do not report information to Congress on this critical requirement, Congress does not have insight into whether or not ships are as available as intended.

Shipbuilding Programs Do Not Consistently Identify and Evaluate Sustainment Costs and Risks in Acquisition Documents

The shipbuilding programs included in our review did not consistently conduct effective sustainment planning when developing three key acquisition documents: life-cycle cost estimates, life-cycle sustainment plans (LCSPs), and independent logistics assessments (ILAs). According to DOD and Navy acquisition policy, these documents, along with other documents, help programs ensure the ships they are acquiring can be sustained affordably and adequately over their life cycle. However, for the shipbuilding programs in our review, we found that these documents did not provide a thorough assessment of the sustainment implications and risks for many of the programs' acquisition decisions. Specifically, we found that: 1) O&S costs in shipbuilding programs' life-cycle cost estimates did not account for major sustainment risks and grew significantly; 2) LCSPs rarely included information needed to demonstrate ships could reliably meet sustainment requirements at an affordable cost; and 3) ILAs did not consistently identify major sustainment risks that were subsequently realized by the fleet.
Because shipbuilding programs are not effectively using these acquisition documents to plan for sustainment, they are passing unmitigated sustainment risks on to the fleet.

O&S Cost Estimates Have Significantly Increased, Largely Because Life-Cycle Cost Estimates Did Not Account for Sustainment Risks

We found that shipbuilding programs' current estimates of O&S cost are significantly higher than initial estimates. This is largely because Navy cost estimators based their initial estimates for the shipbuilding programs in our review on unproven sustainment assumptions without assessing the potential cost risk of the assumptions. According to shipbuilding program officials, O&S cost estimates grew after shipbuilding programs revised their sustainment assumptions, such as by increasing the number of crew required to operate and maintain the ships or by changing the level of maintenance needed for various ship systems. We compared programs' initial life-cycle cost estimates for the six shipbuilding programs in our scope that had available estimates to current cost estimates that were updated after programs delivered ships to the fleet. As shown in table 3, we found that the shipbuilding programs' estimates of O&S costs increased by over $130 billion from the initial estimate to the most recent estimate. Navy cost estimators stated that up to 20 percent ($26 billion) of the cost estimate growth could be accounted for by process changes that resulted in including more indirect costs, such as health and child care for sailors, into O&S estimates. Further, we adjusted our analysis to account for any program quantity changes over time. Even accounting for these changes, the Navy still experienced over $100 billion in O&S cost growth. The O&S cost growth for these six shipbuilding programs is likely higher than the $130 billion that we calculated in table 3. This is because the Navy has not updated these estimates to reflect actual O&S costs for several of the ship classes. For example, the LCS program, in its initial O&S cost estimate, projected $7.1 million (in fiscal year 2019 dollars) per year per hull for maintenance. However, thus far, the average LCS seaframe has cost $21 million (in fiscal year 2019 dollars) per hull per year to maintain—an increase of over $13 billion if these higher-than-planned maintenance costs continue over the life of the ship class. We found that the shipbuilding programs we reviewed underestimated initial O&S costs, largely because cost estimators used unproven O&S assumptions without assessing the sensitivity of those assumptions on potential cost growth, as discussed below.

Unproven O&S Assumptions

The O&S cost estimates we reviewed had grown primarily because initial unproven assumptions turned out to be optimistic. O&S cost estimates for four of the six shipbuilding programs we reviewed were based on a Navy-wide effort that began in the early 2000s to reduce crew sizes on Navy ships and lower O&S costs by, among other things, replacing some sailors with automated systems. We found that cost estimators used the shipbuilding program offices' unverified assumptions regarding crew size to develop the initial O&S estimate for four of these six programs. Over time, the Navy found that the automated systems were not as reliable as planned and, therefore, reduced crewing levels were not realistic. To address this and other issues, the Navy added sailors back on to ships—resulting in increases in O&S cost estimates.
For example, cost estimators for the CVN 78 class initially estimated a 15 to 23 percent decrease in crewing levels compared to the previous class of carriers in order to create O&S savings. However, the Navy is now in the process of adding crew back on to the ship, even before its initial deployment, thereby contributing to increased O&S cost estimates, as shown in table 4. Similarly, DDG 1000, LCS, and LPD 17 program officials also reported that increasing crew sizes was a major contributor to higher sustainment costs for these programs. Further, the shipbuilding programs we reviewed made assumptions based on unproven initiatives, in conjunction with reducing crew sizes, that ended up having a greater effect on the cost of maintaining ships than initially estimated. For example, for four ship classes—SSN 774, DDG 1000, LPD 17 Flight I, and LCS—the Navy originally planned to use a maintenance initiative called performance-based logistics, which called for the use of contractors to conduct maintenance instead of sailors on board the ships. In 2001, DOD policy recommended that all weapon systems use performance-based logistics, and Navy shipbuilding programs subsequently anticipated that this strategy would reduce maintenance costs. Based on our review of shipbuilding program cost estimates, we found that Navy cost estimators included cost savings from these new and unproven approaches—assuming that they would work as expected. Shipbuilding program officials stated that the Navy has now largely abandoned this approach after attempting to contract for performance-based logistics and discovering that it was much more costly than planned. Another initiative that began in the early 2000s involved the Navy using more shipbuilder-provided, commercially bought systems on ships rather than systems the Navy developed and provided to the ship. However, maintaining commercial systems has been more expensive than anticipated for a variety of reasons, such as systems becoming obsolete and challenges acquiring manufacturer support. For example, the SSN 774 shipbuilding program made an effort to use commercial equipment that it assumed would never need repair or replacement—meaning that these parts would last the life of the submarine—without evaluating whether these parts would actually require no repairs. Further, SSN 774 program officials told us that the program office did not plan for the Navy to support many of the submarine's commercial components because they initially planned to contract for logistics support. In all, the SSN 774 program asserted that over 4,000 parts on the submarine class would not need maintenance for the duration of the submarine's life. However, since the submarines have been operating, many of these parts are failing, which has created unanticipated expenses. For example, Navy maintenance officials stated that they are planning to pay $360 million over the next 12 years to maintain a part of the propulsion system that the Navy wrongly assumed would not need any maintenance at the time O&S costs were established.

No Risk and Sensitivity Analyses for Key Assumptions

A key reason that shipbuilding programs underestimated O&S costs is that the Navy's cost estimators did not test the sensitivity of key O&S cost assumptions to quantify risks. According to DOD and Navy guidance and GAO-identified cost estimating best practices, cost estimates should include risk and sensitivity analyses to understand how changing program assumptions can affect cost—including O&S costs.
However, for the six cost estimates that we reviewed, the Navy did not conduct risk and sensitivity analysis on key sustainment assumptions, such as unproven crewing and maintenance assumptions. The Navy's cost estimators told us that they typically only conduct sensitivity analysis on the acquisition portion of a life-cycle cost estimate and not the O&S portion of the estimate. Instead, cost estimators told us that they use shipbuilding program office assumptions about the crew and how the ship class will operate as defined requirements that will not change. However, as discussed throughout this report, we found numerous instances in which incorrect maintenance assumptions resulted in billions of dollars of O&S cost growth. As a result, the Navy's cost estimators reduced estimated O&S costs to reflect the programs' presumed sustainment efficiencies without accounting for and quantifying the corresponding risk inherent in these assumptions. As such, in several cases, shipbuilding programs had optimistic estimates of O&S cost that later grew when unproven assumptions did not pan out as anticipated. According to shipbuilding program officials, their programs experienced significant O&S cost growth because the initial cost estimate did not sufficiently account for the risk of major changes to the program, such as revisions to the shipbuilding program's assumptions about sustainment, that were realized once ships were provided to the fleet. For example, on the shipbuilding programs that adopted reduced crewing initiatives, Navy cost estimators reduced O&S costs due to fewer planned sailors on board, but did not determine how the O&S costs would be affected if automation did not achieve its intended efficiencies and the Navy had to add additional sailors to the crew. If the Navy's cost estimators had conducted risk and sensitivity analyses of the O&S costs early in the acquisition process, shipbuilding programs could have had better insight into how much their O&S costs might increase if the key sustainment assumptions were not correct. Such insight into the potential sustainment cost impact could help shipbuilding programs identify the assumptions most likely to drive O&S cost growth. In turn, this information could help shipbuilding programs justify allocating additional resources during the acquisition process to ensure these sustainment assumptions are achieved, such as investing in additional testing to ensure the reliability of automated systems needed to reduce crewing levels. See figure 10 for an example of how unproven assumptions that were not evaluated using risk and sensitivity analyses led to optimistic O&S cost estimates for the DDG 1000 program. Navy officials told us that they are considering several pilot programs to improve cost estimators' ability to conduct sensitivity analyses of maintenance costs, but have yet to provide details on these programs or the time frame for implementing them. While it is not possible for shipbuilding programs to predict future O&S costs with complete certainty, risk and sensitivity analyses could help shipbuilding programs better identify potential drivers of cost growth. In the absence of this cost analysis, shipbuilding programs will lack a clear assessment of the range of O&S costs their ships may require after they are delivered to the fleet.
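For illustration only, the following minimal sketch shows the kind of one-variable sensitivity analysis described above. All parameter values (crew size, cost rates, class size, and service life) are hypothetical and are not drawn from Navy estimates; the point is the mechanics of quantifying how a single unproven assumption, in this case crew size, drives the O&S portion of a life-cycle cost estimate.

```python
# Hypothetical sensitivity analysis of an O&S cost estimate to a
# crew-size assumption. All values are illustrative, not Navy data.

SERVICE_LIFE_YEARS = 30          # assumed ship service life
SHIPS_IN_CLASS = 10              # assumed class size
COST_PER_SAILOR_YEAR = 0.2       # $ millions per sailor-year (assumed)
OTHER_OS_COST_SHIP_YEAR = 25.0   # $ millions non-crew O&S per ship-year (assumed)

def lifecycle_os_cost(crew_size: int) -> float:
    """Class-wide life-cycle O&S cost in $ millions for a given crew size."""
    per_ship_year = OTHER_OS_COST_SHIP_YEAR + crew_size * COST_PER_SAILOR_YEAR
    return per_ship_year * SERVICE_LIFE_YEARS * SHIPS_IN_CLASS

# Baseline estimate assumes automation permits a reduced crew of 40 sailors;
# the excursion shows how the estimate moves if sailors must be added back.
baseline = lifecycle_os_cost(40)
for crew in (40, 60, 80, 100):
    cost = lifecycle_os_cost(crew)
    print(f"crew={crew:3d}: O&S=${cost:,.0f}M (+${cost - baseline:,.0f}M vs. baseline)")
```

Even a simple excursion like this makes the cost exposure of a crewing assumption explicit; a fuller risk analysis would assign probability distributions to each key assumption and report a range of O&S outcomes rather than a single point estimate.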
Additionally, without this O&S cost information, shipbuilding programs cannot provide Navy leadership with full insight into the range of resources that will potentially be required to sustain new ship classes over their lifetime or support recommendations for additional resources during acquisition to achieve sustainment assumptions.

Shipbuilding Programs' LCSPs Rarely Include Information Described in Policy and Guidance and Are Not Used to Inform Acquisition Decisions

Five of the eleven shipbuilding programs we reviewed do not have LCSPs, and we found that the six programs that have LCSPs do not use them to inform acquisition decisions that could help ensure ships are sustainable at an affordable cost. Under a September 2011 policy memorandum, DOD guidance requires every acquisition program we reviewed to have an LCSP. Shipbuilding programs, according to DOD acquisition policy, should develop and maintain LCSPs beginning at Milestone A, which is early in the acquisition process. According to DOD guidance, these plans should be the basis for all of the programs' sustainment efforts. In particular, shipbuilding programs' LCSPs should include information that demonstrates how a ship class can be affordably operated and maintained while meeting its sustainment requirements. To do so, DOD guidance states that shipbuilding programs should use LCSPs to establish connections between life-cycle costs, reliability requirements, and crew size estimates, and identify and address sustainment issues, among other things. With nearly half of its shipbuilding programs not having completed LCSPs, the Navy is making acquisition decisions without the context of a comprehensive sustainment planning document to help identify and mitigate the sustainment effect of its decisions. Figure 11 provides an example of a sustainment issue with the CVN 78 advanced arresting gear, which was identified during testing but not addressed in an LCSP. Officials from two of the five shipbuilding programs that do not have LCSPs stated that they had drafts of the plan, in some cases for several years, which leadership has yet to approve. In another case, shipbuilding program officials stated that they were not required to complete an LCSP even though DOD's 2011 guidance directed them to create these plans immediately. For the six shipbuilding programs that had LCSPs, we found several challenges with how the programs develop and use these documents. Specifically, we found that the LCSPs: (1) rarely included a business case analysis, as required, that analyzed the relationship between life-cycle costs, reliability requirements, and crew size estimates; and (2) rarely identified and addressed sustainment issues in line with guidance.

No Business Case Analyses

We found that none of the six LCSPs we reviewed contained business case analyses as required by DOD acquisition policy and guidance. According to DOD's acquisition policy, an acquisition program's LCSP should include a business case analysis annex, which should contain the relevant assumptions, constraints, and analyses used to develop the product support strategy documented in the LCSP. According to DOD's guidance for PSMs, who are responsible for developing and maintaining LCSPs, acquisition programs should use a product support business case analysis to help establish a product support package that balances sustainment costs against required sustainment outcomes.
As such, the LCSP’s business case analysis is a tool to help programs assess the costs, benefits, and risks of key acquisition decisions from a sustainment perspective. Additionally, the LCSP should contain information on the activities needed to achieve the sustainment key performance parameters and a discussion of how much funding is required for those efforts. For example, Navy leadership approved the LCSP for FFG(X) in March 2019 even though the plan lacked the required sustainment business case analysis. Instead, the FFG(X) LCSP contains ship-level sustainment requirements and O&S cost information from the program’s life-cycle cost estimate, but no accompanying business case analysis demonstrating how the desired sustainment requirements (operational and materiel availability) can be achieved within these costs. As another example, several ship classes were designed with highly automated systems to enable reduced crew sizes and lower O&S costs, such as the LHD 8/LHA 6 machinery control system discussed earlier in this report. However, the LCSPs for these programs did not analyze the extent to which meeting O&S estimates and sustainment requirements were reliant on the reliability of these automated systems and the risks associated with using automation. Without connecting life-cycle costs to key sustainment factors such as reliability and crew size estimates, the Navy will not know if its sustainment planning is achievable within cost constraints until ships are provided to the fleet and have been operated for a significant period of time. We have previously found that it is often too expensive or time- consuming to make meaningful changes to the ship at this point in the shipbuilding process. Limited Identification and Mitigation of Sustainment Issues LCSPs we reviewed rarely identified and proposed a plan to address programs’ sustainment issues, as described by guidance. According to DOD’s LCSP guidance, acquisition programs should assess their progress, challenges, and corrective actions when developing a plan to sustain a ship class. Two shipbuilding programs identified some sustainment risks and only one of the six LCSPs included plans for mitigating or correcting these risks. In the absence of proactively identifying and mitigating sustainment risks in the LCSP during the acquisition process, as described by guidance, we found that the Navy discovered and mitigated many of its sustainment challenges only after ships were delivered to the fleet. Without creating LCSPs that identify sustainment risks and proposing a plan to mitigate these risks, the Navy cannot ensure that it is making acquisition decisions that support ship sustainment. Two examples of significant sustainment risks that were experienced by nearly all of the programs we reviewed, but not identified or mitigated in LCSPs are: (1) insufficient technical data and (2) the use of performance- based logistics. Technical data. The LCSPs we reviewed that included an intellectual property strategy, as required by DOD acquisition policy during the operations and support phase, did not consistently address the full spectrum of potential intellectual property related issues, such as attaining intellectual property needed to repair and replace ship systems. 
According to DOD’s acquisition policy, shipbuilding programs should document the intellectual property strategy initially in the acquisition strategy and later in the LCSP to assess technical data needs and determine what intellectual property deliverables and license rights the program needs to acquire from contractors. Nearly all of the LCSPs we reviewed stated, in general terms, that the Navy would obtain the technical data to which it had rights. However, in these LCSPs, the Navy did not address how this strategy met the Navy’s needs for competitive and affordable acquisition and sustainment over the life cycle of a ship class, such as to ensure maintenance could be carried out as planned by a ship’s crew. Without ship programs fully planning for acquiring needed intellectual property to maintain ship systems in the LCSP, we found that the fleet was often not aware that certain ship systems were considered proprietary and only discovered what intellectual property was unavailable after ship systems were broken and Navy maintainers could not repair them. At this point, fleet maintainers stated that it is often too late to implement proactive strategies, such as working on an agreement with the manufacturer. Instead, after ships are delivered, fleet maintainers told us that they have several options, all of which are expensive and time- consuming. Fleet maintainers can (1) purchase these data on an expensive sole-source basis from the original equipment manufacturer; (2) spend significant time and effort reverse-engineering the system to be able to repair it; or (3) pay the manufacturer to conduct maintenance. Performance-based logistics. For three shipbuilding programs that planned to use performance-based logistics, the shipbuilding programs assumed it would work as expected and did not identify the risks associated with this maintenance approach or develop any mitigation plans. For example, as stated earlier in figure 10, the DDG 1000 program adopted a performance-based logistics approach during the acquisition process in an attempt to reduce sustainment costs. As such, the program’s LCSP stated that a contractor would be responsible for maintaining the ships in the class, including a number of new and unique systems installed on the ships. However, the LCSP also noted that the DDG 1000 program had not been able to determine how much the performance-based logistics approach was likely to cost or what sustainment outcomes the Navy could expect from this approach, in large part due to the number of new systems installed on the ships. After the shipbuilding program delivered the first ship in the class from the shipyard, DDG 1000 program officials determined that the fleet and other Navy maintenance organizations would instead be responsible for the maintenance that the shipbuilding program previously planned to execute by hiring a contractor. According to fleet officials, since taking over maintenance responsibility, the Navy has also determined that these systems are difficult to sustain, citing lack of commonality, missing technical data, and other challenges. In some cases, the fleet is now replacing DDG 1000’s unique systems after delivery with systems common to other Navy ships in an effort to mitigate sustainment cost growth and readiness effects. Despite these critical changes in the sustainment approach, the DDG 1000 program has not updated its LCSP since 2009. 
ILAs Do Not Consistently Evaluate Shipbuilding Programs' Sustainment Planning

While the Navy has conducted an ILA on nearly every shipbuilding program we reviewed, we found that many of these assessments did not identify key sustainment issues or make recommendations to mitigate them. ILAs are conducted by assessment teams composed of officials from across the Navy. The Navy ILA teams often validated program office sustainment assumptions contained in the LCSPs and other sustainment planning documents without evaluating those assumptions and identifying key areas of risk—even when programs introduced new sustainment concepts. DOD acquisition policy establishes that ILAs should provide an independent assessment of the shipbuilding program's sustainment planning, including the identification and evaluation of issues that are likely to drive future O&S costs, design changes that could reduce O&S costs, and the adequacy of the product support strategy, among other things. ILAs are also supposed to make recommendations for mitigating the issues identified in the report, according to DOD and Navy guidance. Statutory requirements similarly emphasize the role of ILAs in identifying and mitigating sustainment risks that could increase O&S costs, and require DOD to establish guidance that requires the Navy to conduct ILAs prior to key acquisition decision points, including milestone decision events. ILAs for the shipbuilding programs included in our review did not sufficiently identify and evaluate the program offices' sustainment assumptions and risks during the acquisition process. This was the case even when Navy testers had identified sustainment risks in early assessments conducted prior to the development of the LCSPs and ILAs. The following examples discuss instances in which Navy testers or maintainers identified sustainment risks before the ILA was conducted that have since caused sustainment challenges for the fleet, but the ILA team did not identify these problems or make recommendations to address them.

The SSN 774 shipbuilding program. As early as 2014, supply officials identified delays in over 1,000 supply orders for spare parts—many of these orders were in excess of 5 months old. However, in 2016, the ILA team rated this area as low risk and found that the supply support planning and execution was "outstanding." Since supply support was rated as "low risk," the ILA team did not make any recommendations to improve this planning. Subsequently, the SSN 774 class has experienced significant supply support issues. For example, in 2017 and 2018, the Navy's maintainers routinely cannibalized hundreds of parts from SSN 774 class submarines to prepare other submarines for deployment, at an estimated rework cost of $2-3 million per year.

The CVN 78 shipbuilding program. In 2013, testers stated that the number of berthing spaces on CVN 78 class carriers might not be sufficient to accommodate the planned crew size, particularly over the life of the carrier. When conducting its ILA in 2016, the ILA team rated crewing as low risk, and the assessment noted that extensive analysis had been conducted to validate the platform crewing profile. However, the ILA team did not document validation of the assumptions underpinning this analysis, such as whether or not the automated systems needed to reduce crew levels would work as intended. The crewing concerns identified in 2013, but for which the ILA team did not make recommendations, are now a problem for the Navy's fleet.
For example, the Navy has already increased the size of the planned crew to the maximum allowed by the ship's design. Nonetheless, additional crewing concerns persist for key systems—including the weapons elevators, the advanced arresting gear, and the machinery control system, among others—that are not yet well understood and may require additional sailor support to operate and maintain.

The LCS shipbuilding program. In 2005 and 2006, Navy testers expressed significant concerns about the validity of the assumptions necessary to execute the program's logistics support plan, specifically that the design of the new logistics system failed to include features needed to enable this logistics approach. In 2012, the Navy ILA team rated this area as low risk, specifically noting that the LCS program had developed a wide range of well-written, informative, and comprehensive logistics planning documents. However, in part because the ILA team did not recognize that the underlying issues previously identified by the testers had not been mitigated, the program provided ships to the fleet that had logistics issues. Specifically, the CNO conducted a study in 2016 that found the shipbuilding program's logistics approach to be unstable and overly complex. As a result, the Navy is undertaking an overhaul of the LCS logistics approach, taking actions such as creating Navy-led maintenance teams.

The DDG 1000 shipbuilding program. The Navy requires significant volumes of technical data to manage the systems on the DDG 1000. In 2005, Navy testers noted that many details were absent from the technical data management plan, including multiple sections that were left blank. In 2011, the Navy's ILA team found that technical data management was low risk and stated that the requirements for technical data were well-written and clearly identified. According to fleet engineers and maintainers, as of September 2019—more than 3 years after lead ship delivery—all of the manuals remain in draft and are accurate enough for sailors to acquaint themselves with systems, but not sufficient for supporting these systems. For example, fleet maintenance officials stated that several key documents for operating and maintaining critical ship systems, which the ILA identified as sufficiently complete, are not suitable for crew use.

Several Navy officials across NAVSEA and shipbuilding program offices told us that ILAs are largely a document compliance check and vary significantly depending on the competency of the lead assessor. Therefore, in practice, according to Navy officials responsible for conducting these assessments, ILAs are not a thorough assessment of a ship class's logistics planning. This falls short of the purpose of ILAs, stated in Navy guidance, which is to provide acquisition programs with an effective measure of the program's product support planning and execution. Officials from the NAVSEA organization responsible for ILA guidance also told us that they are in the process of improving how the Navy conducts ILAs for ships, such as by developing a new handbook and refocusing ILAs to better assess the quality of sustainment planning.
Specifically, these officials discussed the following five improvements: (1) starting ILAs as early as preliminary design; (2) tying the ILAs more closely to programs' systems engineering efforts; (3) increasing focus on analytics, modeling, and simulation; (4) giving the Navy's fleet and maintainers approval authority over the assessment; and (5) making investments to ensure that assessments are always led by officials with appropriate skills and expertise. If the Navy makes changes such as these or others, it would be a positive step toward making ILAs a more thorough and effective assessment of shipbuilding programs' sustainment planning early in the acquisition process. However, these officials also stated that there is pushback from Navy program offices regarding these improvements because a more robust ILA requires more time and money from shipbuilding programs. Navy officials also noted that implementing the planned improvements is predicated on finding evaluators with appropriate skill sets to conduct ILAs, which has been a challenge. Until the Navy evaluates and implements these proposed changes or other changes to improve the ILA process, the Navy will continue to be at risk of not identifying and resolving shipbuilding programs' sustainment challenges during the acquisition process, before ships are provided to the fleet.

Navy Leadership Does Not Ensure Shipbuilding Programs Effectively Consider Sustainment and Congress Does Not Have Full Insight into Sustainment Cost Growth

We found that the senior leaders responsible for shipbuilding program oversight—the ASN (RD&A) and the CNO—have generally prioritized acquisition outcomes during Gate reviews, without considering how acquisition decisions affect sustainment outcomes. Navy acquisition policy states, however, that programs should be managed from a life-cycle perspective, with attention to both acquisition and sustainment outcomes. In an effort to increase senior leaders' and shipbuilding programs' attention to sustainment outcomes and to be responsive to Congressional efforts to improve weapon system sustainment, the Navy recently began pursuing two new initiatives—a Gate 7 for sustainment and the sustainment program baseline. These are promising steps that could help increase leadership insight into shipbuilding programs' sustainment outcomes once ships are delivered to the fleet. However, we found that some of these efforts will likely not address the underlying need for Navy leadership to improve its consideration of shipbuilding programs' sustainment goals early in the acquisition process, when programs are making the decisions that have a long-term effect on ship sustainment. In addition, Congressional decision makers do not have full insight into sustainment cost growth.

Navy Leadership Has Not Consistently Considered Sustainment in Gate Reviews, and Some Recent Changes Will Not Address Existing Shortfalls

Navy leadership has not consistently reviewed shipbuilding programs' sustainment planning at acquisition Gate reviews. According to senior Navy policy officials, in an effort to increase leadership attention to program sustainment, the Navy recently updated its acquisition policy to add a Gate for sustainment, called Gate 7. However, this recent change will not address the need for leadership to more consistently assess sustainment during earlier Gates.
In addition, the Navy established a Deputy Assistant Secretary for Sustainment within the ASN (RD&A)'s office, who will be responsible for managing the Navy's sustainment funding and life-cycle management policies. However, it is too soon to assess the role that this official may have in the acquisition process. The Navy's acquisition policy states that participants in Gate reviews should review program health and discuss and resolve areas of concern. Additionally, shipbuilding programs should be overseen and executed from a life-cycle perspective—in other words, with attention paid to balancing near-term acquisition outcomes and long-term sustainability. In support of this goal, the policy establishes required sustainment-related briefing content or actions for each Gate. While Gate 7 will function as the dedicated Gate for sustainment, all of the earlier Gates have sustainment-related requirements as well, as shown in table 5 below. These Gate reviews offer Navy leadership opportunities to conduct oversight of shipbuilding programs' sustainment planning during early phases of the acquisition process, when key program decisions about requirements, design, and contracts are being made.

Navy Leadership Has Not Consistently Used the Gate Process to Review Shipbuilding Programs' Sustainment Planning and Outcomes

Navy acquisition policy establishes that leadership should be briefed on a number of sustainment factors at Gate reviews, with a program's life-cycle sustainment strategy/plan and O&S cost drivers being the minimum sustainment information required for nearly all Gate reviews, as presented in table 5. We analyzed briefings and meeting minutes prepared for the 22 Gate reviews held for the shipbuilding programs in our review between fiscal years 2014 and 2018. We found that Navy leadership had not assessed shipbuilding programs' life-cycle sustainment strategies/plans in approximately 86 percent of Gate reviews and had not assessed O&S cost drivers in approximately 64 percent of Gate reviews, as shown in figure 12. According to Navy acquisition policy, this sustainment information should have been evaluated during all 22 of the Gate reviews held between fiscal years 2014 and 2018 for the shipbuilding programs included in our review. Instead, we found that the Gate reviews most often discussed acquisition updates. While a focus on acquisition updates during Gate reviews is appropriate, by infrequently devoting attention to how acquisition decisions affect sustainment, Navy leadership is missing an opportunity to assess the comprehensiveness and validity of shipbuilding programs' sustainment plans and cost estimates, among other sustainment factors. As we previously discussed, shipbuilding programs' LCSPs and O&S cost estimates were incomplete or insufficient and, therefore, did not provide a thorough assessment of the programs' sustainment risks. Additionally, Navy leadership is not consistently using Gate reviews to communicate to shipbuilding programs that achieving sustainment goals is a high priority. For pre-construction Gate reviews (Gates 1-5), Navy leadership evaluated three of the programs included in our report—SSBN 826, FFG(X), and DDG 51—in the 5-year period between fiscal years 2014 and 2018. These Gate review briefings included some discussion of program sustainment but did not meet all of the objectives and goals described by Navy acquisition policy for sustainment briefing content, as presented in table 5.
As such, the Gate reviews did not provide a complete assessment of whether the programs' acquisition decisions about sustainment would support the delivery of ships that could meet sustainment requirements at an affordable cost. Officials from the majority of programs included in our review told us that these early phases are critical because decisions made at this point can have a long-term effect on ship sustainment and are difficult to change after they are made. For example, when Navy leadership reviewed the SSBN 826 program at a Gate 4 review in November 2015 and a Gate 5 review in September 2016, the briefings discussed the SSBN 826 program's sustainment costs in detail, including O&S cost goals, cost drivers, and contract incentives for O&S affordability. However, among other things, the Gate 4 briefing did not include a review of the program's life-cycle sustainment strategy, and the Gate 5 briefing did not verify that all critical technical data and intellectual property issues had been addressed, which fleet and engineering officials stated are known sustainment issues for the Virginia class of submarines. Officials from the SSBN 826 program stated that some sustainment information that was not discussed in the Gate reviews was addressed in other forums. For example, leadership approved the program's LCSP in August 2016, between the Gate 4 and Gate 5 reviews. In another example, when Navy leadership reviewed Flight III of the DDG 51 program at a combined Gates 4 and 5 review in March 2014, none of the required sustainment topics were included in the briefing. By not thoroughly assessing and resolving the sustainment effects of early acquisition decisions during its Gate reviews, Navy leadership is missing opportunities to ensure that shipbuilding programs are adequately considering sustainment goals and is at risk of allowing programs to proceed through the acquisition process without verifying that there is adequate planning for sustainment. For Gate 6 reviews held between fiscal years 2014 and 2018, we similarly found that Navy leadership did not consistently discuss sustainment, even as programs began constructing ships and delivering them to the fleet. Our analysis of Gate 6 documentation showed that the primary focus of most Gate 6 briefings and meeting minutes was acquisition outcomes, such as construction progress or follow-on ship contract awards. In particular, we found that 16 of the 18 Gate 6 reviews we assessed for eight shipbuilding programs did not include information about both the program's life-cycle sustainment plan and O&S cost drivers, which are part of the required briefing content for every Gate 6 review. Officials from most of the programs in our review confirmed that leadership placed greater emphasis on acquisition updates than on sustainment during Gate 6 reviews. For example, the SSN 774 program is pursuing a reduction in total ownership costs initiative for its Block IV submarines, but the program's recent Gate 6 briefings included only limited details on design changes that the program was pursuing to improve sustainment and no information on the anticipated O&S cost savings from the effort. Officials from this program confirmed that leadership has historically focused only on acquisition issues during the Gate 6 reviews.
Additionally, we found that Navy leadership issued sustainment-related action items to only three of the eight programs in the Gate 6 reviews we assessed, even though all of these programs had ongoing sustainment challenges, as discussed earlier in this report. Although nearly 90 percent of the Gate 6 reviews we assessed did not include briefing content on the program's life-cycle sustainment plan and O&S costs, as required, nearly all of the Gate 6 reviews included a discussion of at least one ongoing sustainment challenge affecting the ship class. In these cases, the discussion centered on mitigating realized sustainment issues already being experienced by the fleet after ship delivery. For example, all of the LPD 17 Gate 6 reviews over the past 5 years included updates on the activities of the LPD 17 Strike Team and its progress in resolving class-wide design and construction issues that negatively affected the ships' operational availability and reliability after they began fleet operations. While Gate 6 can be used as a venue to discuss sustainment issues that are already being experienced by the fleet, until Navy leadership more consistently reviews programs' sustainment planning and expected outcomes during earlier Gates, programs will continue to be at risk of delivering ships to the fleet that have unmitigated sustainment risks or are unaffordable.

Recent Gate Process Changes Enhance Sustainment Focus, but Do Not Address the Need to Consider Sustainment Issues Earlier in the Acquisition Process

The Navy recently updated its acquisition policy to expand the scope of its Gate process and add a new Gate 7 for sustainment. Effective March 2019, Gate 7 reviews will begin 5 years after shipbuilding programs achieve initial operational capability and recur every 5 years thereafter. As such, Navy officials told us that the scope of the Gate 7 review will be oversight of programs that are well into production and delivering ships to the fleet. According to the Navy's acquisition policy, Gate 7 will evaluate the effectiveness of a program's product support strategy, compare actual sustainment costs to estimates, discuss fleet-identified sustainment issues, and assess sustainment risks and mitigation measures, among other things. Senior officials told us that the Navy developed a Gate 7 for sustainment for two reasons. First, similar to our findings, officials stated that the Navy recognized sustainment was generally not being discussed during existing Gate reviews, particularly during Gate 6 reviews as ships were starting to be delivered to the fleet, even though this was required briefing content for Gate 6 in the Navy's acquisition policy. Second, in the National Defense Authorization Act for Fiscal Year 2017, Congress directed the military services to conduct sustainment reviews of major weapon systems—such as the shipbuilding programs included in our review—within 5 years of a weapon system achieving initial operational capability and then periodically throughout their life cycles. Such sustainment reviews are to assess the weapon system's product support strategy, performance, and O&S costs. Based on our analysis of the Navy's revised acquisition policy, the new Gate 7 appears responsive to the Congressional requirement for sustainment reviews and, if implemented as planned, will provide an oversight forum for addressing realized sustainment challenges.
However, we found that adding a new Gate at the end of the acquisition process comes too late to drive meaningful improvements to sustainment outcomes and is not sufficient to address current shortfalls in how the Navy's acquisition process addresses sustainment concerns. Senior Navy officials we spoke to who had knowledge of this change expressed doubt that a Gate 7 for sustainment would be an effective means of holding programs accountable for addressing acquisition-related sustainment issues, since it occurs late in the acquisition process. Whereas the Gate 7 for sustainment will occur at the end of the acquisition process, the decisions that influence sustainment outcomes, such as decisions about ship design and the planned sustainment strategy, are made much earlier in the process, normally between Gates 1 and 5. Thus, while Gate 7 will provide leadership with insight into the execution of ship sustainment and any challenges being experienced by the fleet, it does not address the need for Navy leadership to evaluate shipbuilding programs' efforts to design and plan for sustainable ships during earlier Gates, when key long-term decisions are being made. According to a senior Naval Sea Systems Command official, Gate 7 is timed well for being able to "sit back and admire the problem" as opposed to preventing the issue. Until Navy leadership brings attention to sustainment during earlier Gate reviews, it will continue to miss opportunities to proactively ensure shipbuilding programs are acquiring sustainable ships before they are provided to the fleet.

Acquisition Program Baselines Currently Include Few Sustainment Goals and Ongoing Improvements Lack an Accountability Mechanism

We found that acquisition program baselines (APB)—which are intended to be binding agreements between leadership and the program manager and document the program's goals—currently include limited information about sustainment. While the Navy is developing a new initiative to create a dedicated baseline for sustainment, it does not have a mechanism for holding shipbuilding programs accountable for sustainment goals during the acquisition process. Like all major weapon systems, shipbuilding programs have APBs that summarize the programs' cost, schedule, and performance goals and set the baseline from which programs must, as appropriate, obtain approval from agency leadership to deviate and must report certain changes to Congressional defense committees. Statute requires that baselines contain information on the program's cost estimate, schedule, performance, and supportability, among other factors. In practice, for the shipbuilding programs in our review, we found that the program goals established in the APB are largely focused on acquisition cost, acquisition schedule, and performance requirements, with limited information provided on sustainment. In particular, the sustainment information provided is generally limited to a high-level O&S cost estimate and the sustainment key performance parameters, if the program has them. A Congressionally established panel, called the Section 809 panel, charged with making recommendations to improve the efficiency and effectiveness of DOD's acquisition process, among other things, recently studied challenges with the sustainment of major weapon systems. It similarly found that the APB does not provide sufficient governance of the sustainment phase of an acquisition program because it is focused on acquisition cost, schedule, and performance goals.
The panel further noted that program success has been measured against the achievement of the APB's acquisition goals, so program managers have generally prioritized the achievement of acquisition outcomes and deemphasized sustainment. As a result, the panel recommended the creation of an additional program baseline, called the sustainment program baseline (SPB), to help ensure programs are held accountable for sustainment-related outcomes and to establish balance between acquisition and sustainment priorities. In March 2019, the Navy initiated an effort to begin developing an SPB framework. Senior officials stated that the Navy intends to pilot the SPB with a few aviation programs in fiscal year 2020 before expanding the initiative to ship classes that are already in sustainment, and then finally to programs that are still in the acquisition process. According to Navy officials involved with this initiative, the SPB is intended to complement the APB, and Navy leadership will use the two program baselines to review and approve the acquisition and sustainment aspects of a program throughout the acquisition process. The shipbuilding program should draft the initial SPB early in the acquisition process to support Milestone A and update it as the program matures. Officials in the office of the ASN (RD&A) told us that the Navy plans for the SPB to be grounded in a program's sustainment key performance parameters for operational and materiel availability and to include targets for various other sustainment metrics, such as sparing, equipment failure rates, mission capable time, and logistics time. The SPB should also provide detailed information about all of the costs and funding sources that will support sustainment. Navy officials identified a number of potential improvements the SPB could offer for how shipbuilding programs consider sustainment, such as devoting additional time and resources to the development of sustainment metrics early in a shipbuilding program, assessing the sustainment effect of acquisition decisions, creating a common understanding of a program's sustainment goals across disparate stakeholders, and providing a more accurate accounting of sustainment funding. If the Navy implements the SPB as described, it will likely be a positive step toward ensuring shipbuilding programs increase their focus on sustainment planning during the acquisition process. While the SPB could potentially provide increased attention to program sustainment, we found that developing this new baseline may not fully address the underlying challenge: shipbuilding programs manage to the APB's acquisition goals and do not adequately consider sustainment in acquisition decision-making. This is because, according to current proposals, programs will continue to be measured against the APB during the acquisition process, with the SPB not serving as the governing baseline until later in the program life cycle, during the sustainment phase. Instead, during the acquisition process, the Navy's efforts related to the SPB will be limited to initially developing the SPB and updating it as the program matures. While updates to the SPB during the acquisition process could provide the program and leadership with more transparency into the sustainment effects of various acquisition decisions, this approach primarily documents the sustainment effect of a decision.
Because the APB will remain the governing baseline during the acquisition process and the program will not be measured against the SPB until the sustainment phase, shipbuilding programs will continue to have an incentive to prioritize acquisition outcomes over sustainment when making acquisition decisions.

Congress Does Not Have Insight into Shipbuilding Programs' O&S Cost Growth during the Acquisition Process

DOD does not provide Congress with detailed information on the extent and causes of shipbuilding programs' O&S cost growth during the acquisition process. For example, a mechanism for Congressional oversight of major defense acquisition programs' unit cost growth, called the Nunn-McCurdy statute, is focused on acquisition costs and not sustainment cost estimates. A Nunn-McCurdy breach is triggered by increases in a program's unit cost estimates against the acquisition unit cost goals established in the program's APB. The Nunn-McCurdy statute provides Congress greater visibility into major defense acquisition programs' estimated acquisition cost growth and encourages DOD to manage costs by requiring programs in a breach to include acquisition cost estimates in Selected Acquisition Reports and notify Congress of the breach. While the APB also includes O&S cost estimates, the Nunn-McCurdy statute does not require reporting of O&S cost growth to Congress. The Nunn-McCurdy statute also requires DOD to take a series of actions whenever a program experiences critical acquisition cost growth, which is growth in the program acquisition unit cost estimate of at least 25 percent over the current baseline estimate documented in the APB or of at least 50 percent over the original baseline estimate. Among other things, these actions include (1) conducting a root cause analysis of the cost growth, (2) reassessing program costs, and (3) terminating the program or taking other steps that include restructuring the program. If DOD decides not to terminate a program that has critical cost growth, the Secretary of Defense must restructure the program in a manner that addresses the root cause of the cost growth, rescind the program's most recent Milestone decision, and review the program regularly, among other tasks. As stated earlier, we found that leadership oversight during Gate reviews and program execution is primarily focused on acquisition outcomes. Additionally, as the Section 809 panel noted, the Nunn-McCurdy breach provided a strong incentive for major defense acquisition programs to control acquisition costs, but there was not an equivalent incentive for controlling sustainment costs. As such, shipbuilding programs' acquisition decisions and Congress' oversight mechanisms have focused on acquisition cost outcomes, without a comparable focus on sustainment cost outcomes during the acquisition process. For example, when the DDG 1000 program experienced a critical acquisition cost growth breach, the Nunn-McCurdy statute required DOD to reassess and certify to Congress the need for the program at the increased cost levels. DOD was also required to identify and address the cause of the acquisition cost growth when reassessing the program and to conduct additional program oversight, among other things. According to DDG 1000 program officials, DOD and the Navy recognized that the acquisition decisions leading up to and following the breach would have a sustainment effect.
For example, the decision to reduce the number of ships in the class to manage acquisition cost growth has contributed to higher per ship O&S costs, as the investment needed to sustain this new class is now spread across fewer ships than initially planned. However, the focus of the restructuring efforts was on addressing the acquisition cost growth. By contrast, there was not a similar effort to manage growth in the program's O&S cost estimates, which have increased by more than 50 percent on a per ship per year basis. For the six shipbuilding programs with O&S cost estimates we were able to assess, we found that four experienced cost growth greater than 50 percent in their average annual O&S cost per hull, as compared to the programs' original estimates. Table 6 shows the extent of these shipbuilding programs' O&S cost estimate growth over time. This level of growth in acquisition costs would have constituted a Nunn-McCurdy breach. While the Selected Acquisition Reports for these programs include some information on shipbuilding programs' O&S costs, this reporting does not provide Congress with detailed information about the causes of the cost growth and potential program changes to address it and, therefore, does not facilitate the same level of oversight as is given to acquisition unit cost growth. In particular, DOD was not required to notify Congress that the programs had experienced O&S cost growth above a certain threshold, and DOD was not required to identify the root cause of the O&S cost growth and restructure the programs to address it. As a case in point, for the programs we reviewed, Navy leadership directed only one of the programs—LPD 17—to identify opportunities to reduce O&S costs following a Gate 6 review. Other programs that had extensive O&S cost growth were not required to take additional steps during the shipbuilding process to manage these costs and mitigate the long-term sustainment effects of their acquisition decisions. The LCS program, for example, has seen the highest rate of per ship O&S cost growth among the shipbuilding programs included in our O&S cost analysis, but Congress and agency leadership have not required the shipbuilding program to take action to address these issues. Instead, the shipbuilding program continues to deliver ships to the fleet that are significantly more expensive to maintain than initially planned and that have significant maintenance and logistics challenges, according to sustainment officials. The fleet is now undertaking its own efforts to improve sustainment outcomes for LCS, such as changing its crewing and maintenance approaches, which are further adding to the O&S cost growth for the program. According to DOD and Navy acquisition policy, program managers should be the single point of accountability for the full life cycle of ship programs. However, without a mechanism to provide Congress with more detailed information about shipbuilding programs' O&S cost growth and the drivers of such cost growth, Congress cannot know if shipbuilding programs are accounting for the full life-cycle implications of their acquisition decisions. In particular, without such a mechanism, Congress will continue to lack full insight into the extent to which shipbuilding programs' O&S cost estimates have grown over time and what steps DOD and the Navy could take to better control O&S cost growth during the acquisition process.
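To make the comparison concrete, the sketch below applies the Nunn-McCurdy thresholds for critical acquisition cost growth (at least 25 percent over the current baseline or at least 50 percent over the original baseline) to a hypothetical O&S estimate. The dollar figures are illustrative assumptions, not actual Navy estimates; the point is that estimate growth of the magnitude shown in table 6 would trigger statutory reporting if it occurred in acquisition unit costs.

# Illustrative comparison of O&S estimate growth against the Nunn-McCurdy
# thresholds for critical acquisition cost growth. Dollar figures are
# hypothetical, not actual Navy estimates.
def growth_pct(baseline, latest):
    return (latest - baseline) / baseline * 100.0

original_estimate = 40.0  # $ millions per hull per year, original baseline (assumed)
current_baseline = 45.0   # $ millions per hull per year, current baseline (assumed)
latest_estimate = 62.0    # $ millions per hull per year, latest estimate (assumed)

over_original = growth_pct(original_estimate, latest_estimate)
over_current = growth_pct(current_baseline, latest_estimate)
print(f"Growth over original baseline: {over_original:.0f} percent")
print(f"Growth over current baseline:  {over_current:.0f} percent")

# Critical acquisition cost growth occurs at >=50 percent over the original
# baseline or >=25 percent over the current baseline; no equivalent statutory
# trigger exists for O&S cost estimates.
if over_original >= 50 or over_current >= 25:
    print("Comparable acquisition cost growth would be a critical breach.")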
Product Support Managers Have Limited Influence in the Acquisition Process

Congress directed DOD to establish PSMs as key sustainment managers for weapon systems, such as shipbuilding programs. However, we found that PSMs in the shipbuilding program offices have limited influence on decisions made during the acquisition process that affect ship sustainment. In 2009, Congress passed legislation that required DOD to appoint PSMs to support each major weapon system. According to DOD guidance, PSMs are senior sustainment officials in program offices who are tasked with ensuring that DOD weapon systems, including Navy ships, are reliable and can be maintained effectively at an affordable cost. The guidance states that PSMs should be involved in the acquisition decision-making process to ensure the weapon system—in this case a ship—can be supported throughout its life cycle. All but one of the shipbuilding programs included in our review has a dedicated PSM. However, we found that these sustainment experts have generally had limited involvement in key acquisition decisions, such as developing sustainment requirements and estimating O&S costs, because: (1) Navy acquisition policy does not ensure that PSMs are involved early in the acquisition process when key decisions that affect sustainment are made, and (2) their responsibilities to support sustainment outcomes during the acquisition process are often at odds with the program office's overarching focus on acquisition cost and schedule outcomes.

Navy Policy Does Not Require PSMs to Be Involved Early in the Acquisition Process

Navy acquisition policy does not ensure that PSMs are appointed early enough to inform key acquisition documentation and initiate sustainment planning early in the acquisition process. Until recently, Navy acquisition policy did not specify when PSMs should be involved in the acquisition process. However, a March 2019 update to the Navy's acquisition policy established that Navy leadership should assign PSMs by program initiation (normally Milestone B). We found that this timing is too late in the acquisition process, as critical acquisition decisions that have significant repercussions for sustainment are made before Milestone B, such as developing the program's requirements and initial sustainment strategy. For example, according to DOD's PSM guidance, PSMs need to be involved prior to program initiation. Among other things, the PSM should provide a sustainment perspective on key decisions, such as developing the acquisition strategy and setting requirements. This guidance also states that the PSM is responsible for authoring or providing input on key program documents, such as the LCSP, which are required by Milestone A. The Navy policy, therefore, does not facilitate the early contributions of PSMs to key documents as described by DOD guidance, and it does not help ensure PSMs are appointed to shipbuilding program offices early enough to influence key decisions about the program's sustainment. For its two most recent shipbuilding programs, which began after the enactment of the PSM legislation in 2009, Navy acquisition policy has not ensured that PSMs are involved early enough in the acquisition process to influence decisions that affect sustainment. As a result, the programs appointed PSMs at different points in the acquisition process, and their ability to influence key decisions has varied, with the PSM appointed earlier able to affect more decisions related to sustainment.
For example, the SSBN 826 program's PSM was appointed before the program reached Milestone A. This is in line with DOD guidance, but it is earlier than Navy acquisition policy requires the PSM to be appointed. As a result, the SSBN 826 PSM stated that he was involved in setting the program's sustainment requirements and has subsequently used those requirements to ensure sustainment is being considered in the acquisition process, including during the development of the submarine's design. By contrast, the FFG(X) program, which began in 2017, does not yet have a dedicated PSM as the program approaches the Milestone B review. While this is permitted by the Navy's acquisition policy, the program has now made critical sustainment decisions, such as developing the sustainment strategy, the maintenance and training schedule, and the sustainment key performance parameters, without a PSM. For the nine shipbuilding programs in our review that started prior to 2009, key acquisition decisions were made without the input of a senior sustainment official who has the responsibility and authority of a PSM. Nearly all of the PSMs for these nine programs stated that they were not involved in or did not have insight into key acquisition decisions that took place early in the acquisition process, such as ship design. Instead, PSMs told us that their job has been to implement decisions that were already made. For example, one PSM said that "the die has been cast" once major decisions about automation, crew size, and service life are made, and after that all the PSM can do is "try to undo the sustainment harm that has been caused." Given these results, officials from nearly all of the shipbuilding programs we spoke with stated that shipbuilding programs should assign PSMs at the very beginning of the program, when key decisions are being made about how and what to acquire. In particular, program officials stated that the PSM should be appointed at the start of the program to ensure early decisions consider sustainment. Such decisions include establishing the sustainment requirements, developing the acquisition strategy, and designing the ship. We previously found that Navy PSMs considered early appointment of the PSM critical to ensuring they can influence their programs' sustainment considerations. If shipbuilding programs do not appoint PSMs early in the acquisition process, the programs will continue to make critical decisions that affect sustainment without the input of the programs' senior sustainment official. Without revising its acquisition policy to establish that PSMs should be appointed to shipbuilding programs at the beginning of the acquisition process, the Navy cannot ensure PSMs are involved early enough to influence key decisions that affect sustainment, such as requirements setting and the drafting of the LCSP.

PSM Responsibilities Can Be at Odds with Shipbuilding Program Cost and Schedule Objectives

Because PSMs focus on sustainment and shipbuilding programs focus on managing acquisition outcomes, the PSMs' roles and responsibilities are at times at odds with the goals and priorities of the program office in which they work. A Navy working group recently found that the effectiveness of PSMs is limited because the PSM's goals do not always align with the shipbuilding program's acquisition cost and schedule goals. The Navy issued a strategic plan for fiscal years 2018 to 2023 focused on strengthening the life-cycle logistics workforce that supports acquisition programs, including PSMs.
The strategic plan established a working group on product support authority, which found that program manager and PSM roles and responsibilities are often in conflict and misaligned, reducing the authority and effectiveness of PSMs. As a result, the working group is assessing possible changes to improve the effectiveness of PSMs, such as revising Navy policy to better reflect the PSMs' statutory authority or increasing PSMs' independence by creating an additional reporting chain of command outside of their acquisition program. We similarly found that the ability of PSMs to influence key acquisition decisions may be limited because their focus on improving sustainment outcomes can be at odds with the shipbuilding programs' emphasis on achieving acquisition goals, such as acquisition cost and schedule. As discussed above, Navy leadership has generally focused only on shipbuilding programs' acquisition outcomes during the Gate process, without considering how acquisition decisions affect sustainment. In turn, program officials from all of the shipbuilding programs we reviewed reported that Navy leadership had directed them to prioritize the achievement of acquisition outcomes, such as acquisition cost goals, during the execution of their programs, and none had been directed to devote additional attention to sustainment. Additionally, officials in many of the shipbuilding programs we reviewed told us that a key ASN (RD&A) memorandum on managing acquisition costs framed their decision-making, including decisions about program changes to improve sustainment. This focus on managing acquisition costs can run counter to PSMs' efforts to improve sustainment outcomes, such as increasing system reliability or providing adequate technical documentation, as these efforts frequently require investment of additional shipbuilding funds. Rather than investing acquisition funds to improve sustainment outcomes, shipbuilding programs instead have an incentive to delay sustainment improvements until after ships are delivered to the fleet, when funding sources other than those managed by the shipbuilding program can be used for these purposes. According to officials from 16 different acquisition, engineering, and sustainment offices, this incentive exists because shipbuilding programs are responsible for ships only until they are provided to the fleet, after which other parts of the Navy take over responsibility for funding sustainment improvements. As one fleet official explained, shipbuilding programs are not incentivized to address sustainment issues because the shipbuilding programs are held responsible only for the achievement of acquisition cost goals and not for sustainment cost goals. Some Navy officials characterized this dynamic as throwing sustainment concerns "over the fence" once ships are provided to the fleet. Further, we found that Navy leadership made decisions, in some cases, even though PSMs expressed concerns about the feasibility of implementing the decision from a sustainment perspective. Figure 13 provides an example in which LCS sustainment officials in the shipbuilding program expressed concern about the feasibility of the LCS crew size. While it is important for shipbuilding programs to manage acquisition cost and schedule, focusing only on these acquisition outcomes reduces the effectiveness of the PSM and increases the risk that ships will have long-term sustainment challenges.
Conclusions

The quantity and breadth of issues identified in this report—resulting in billions of dollars in unexpected costs, maintenance delays, and unreliable ships—suggest that existing policies and guidance have not ensured that new ships are reliable and can be sustained as planned. Recently, due to some of these problems, DOD and the Navy have recognized the importance of considering the requirements and costs of sustainment during the acquisition process, and Congress has passed legislation related to sustainment planning. This report, along with the other DOD initiatives discussed in this review, demonstrates that the Navy needs to take many steps to infuse its acquisition decision-making with a greater focus on sustainment outcomes. Systemic changes are needed to improve shipbuilding programs' sustainment outcomes, including setting clear sustainment requirements that are useful for acquisition decision-making and reporting the results to Congress; improving O&S cost estimates, sustainment planning, and logistics assessments; and involving the PSM early in the acquisition process. However, these changes will only be successful if Navy leadership commits more time, attention, and resources to ensuring that sustainment is thoroughly considered throughout the acquisition process. Until the Navy resolves these issues, its shipbuilding programs will continue to pass costly sustainment risk to the fleet, resulting in ships and submarines that experience major sustainment problems.

Matter for Congressional Consideration

Congress should consider developing an oversight mechanism for evaluating shipbuilding programs' sustainment cost estimate growth during the acquisition process, with requirements for the Navy to: (1) report sustainment cost estimate growth information to Congress and (2) reassess shipbuilding programs that are experiencing a high level of sustainment cost estimate growth.

Recommendations for Executive Action

We are making the following 11 recommendations to DOD:

The Secretary of Defense should change DOD's definition for setting operational availability for ships in its Joint Capabilities Integration and Development System policy by adding information that defines the operational availability requirement by mission area in addition to the ship level and includes all equipment failures that affect the ability of a ship to perform primary missions. (Recommendation 1)

The Secretary of Defense should change DOD's definition for setting materiel availability for ships in its Joint Capabilities Integration and Development System requirements policy to include all factors that could result in a ship being unavailable for operations, such as unplanned maintenance, unplanned losses, and training. (Recommendation 2)

The Secretary of the Navy should direct the ASN (RD&A) and the CNO, once DOD requirements-setting policy is revised, to update existing operational availability requirements for ongoing shipbuilding programs. When revising these requirements, the Navy should set operational availability requirements that: (1) are based on failures that affect the ability of a ship to perform primary missions and (2) are set at the mission level instead of the ship level. (Recommendation 3)

The Secretary of the Navy should direct the ASN (RD&A) and the CNO, once DOD requirements-setting policy is revised, to update the materiel availability requirements for ongoing shipbuilding programs.
When developing or revising these requirements, the Navy should set materiel availability requirements that fully capture all factors that could preclude a ship from being ready when needed. (Recommendation 4)

The Secretary of the Navy should direct the ASN (RD&A) and the CNO, once the Navy revises its sustainment requirements, to ensure that shipbuilding programs report operational availability and materiel availability requirements in Selected Acquisition Reports, and alternatives to the Selected Acquisition Reports, for Congress. (Recommendation 5)

The Secretary of the Navy should direct the Commander of Naval Sea Systems Command to ensure that cost estimators follow current guidance and GAO-identified best practices and conduct sensitivity analyses and other analyses to improve their assessment of cost risk in the O&S portion of shipbuilding programs' life-cycle cost estimates. (Recommendation 6)

The Secretary of the Navy should direct the ASN (RD&A) to ensure all shipbuilding programs develop and update LCSPs, in accordance with DOD policy, that demonstrate how a ship class can be affordably operated and maintained while meeting sustainment requirements, including associated business case analyses and identification of sustainment risks. (Recommendation 7)

The Secretary of the Navy should direct the Commander of Naval Sea Systems Command to evaluate and implement changes to the ILA process in order to position ILAs to effectively identify key sustainment risks and make recommendations for risk mitigation, which may include existing Navy proposals to change the ILA process. (Recommendation 8)

The Secretary of the Navy should direct the ASN (RD&A) and the CNO to ensure sustainment-related briefing topics prescribed by the Navy's acquisition policy are consistently discussed at Gate reviews. (Recommendation 9)

The Secretary of the Navy should direct the ASN (RD&A) and the CNO to implement the sustainment program baseline initiative for shipbuilding programs and, in so doing, develop a mechanism that ensures that sustainment outcomes are a factor in shipbuilding programs' decision-making during the acquisition process. (Recommendation 10)

The Secretary of the Navy should revise SECNAVINST 5000.2 and other associated guidance to ensure PSMs are assigned to shipbuilding program offices in time to inform early acquisition decisions, including development of the program's sustainment requirements and LCSPs. (Recommendation 11)

Agency Comments and Our Evaluation

We provided a draft of our report to DOD for comment. DOD's written comments are reprinted in appendix III of this report. DOD concurred with eight recommendations and partially concurred with three. However, for at least five of the recommendations, including both those with which DOD concurred and those with which it partially concurred, DOD did not describe the specific actions it is planning to take to address our recommendations. These are discussed below. In response to our first and second recommendations on operational and materiel availability requirements, DOD stated that the Navy and Joint Staff would revisit requirements definitions for shipbuilding programs to better ensure that they are traceable to a ship's mission and can be used across ship development and fielding. DOD also agreed that it will align the sustainment definitions with how the Navy defines critical failures for ship programs. While these are important steps, they do not fully address our recommendations.
Specifically, DOD officials told us that the department plans to continue defining operational availability with a single metric for an entire ship or ship class. While this approach is appropriate for materiel availability, as we state in the report, it is misaligned with Navy guidance for operational availability, which states that such an approach is not mathematically feasible for ships. Until DOD ensures that its sustainment requirements for ships are well-defined and usable during acquisition and sustainment, shipbuilding programs will continue to implement requirements that do not result in reliable and available ships.

In response to our third and fourth recommendations, DOD agreed to incorporate changes to its requirements-setting policy into new shipbuilding programs. However, DOD and the Navy may miss key opportunities to improve the Navy's sustainment requirements for existing programs, including at least four ship classes that have plans for a new flight, block, and/or major modification. This approach also excludes existing programs that have established requirements but have yet to start design or construction. Changing these requirements, in line with our recommendation, would help ensure that more rigorous sustainment requirements inform Navy ship designs. For example, as we discuss in the report, the current FFG(X) operational availability requirement would allow the ships to be out of service for extraordinary lengths of time. Until the FFG(X) requirement and those for other existing ships (such as DDG 51 Flight III) are remedied, the sustainment requirements will continue to be poorly defined and unable to influence design decisions in a manner that results in more reliable ships.

In response to our fifth recommendation, DOD concurred with the recommendation because it stated that it already reports the status of both sustainment requirements in its Selected Acquisition Reports. However, as we state in our report, implementing this recommendation is dependent on the Navy changing the definition of its sustainment requirements to improve the accuracy of its reporting to Congress. Since DOD only agreed to modify materiel availability requirements for existing ship programs as it deemed appropriate, its Selected Acquisition Reports could continue to be misleading for many of its ship programs because they may not reflect all of the failures and factors that reduce operational and materiel availability once ships are in the fleet.

In addition to DOD's response, the Navy's ASN (RD&A) also submitted a letter stating that he generally agreed with the recommendations and indicated that his office has made some changes over the last 10 years to improve consideration of sustainment while acquiring ships. The Navy also sought to add context to some of our report findings. We respond to the ASN (RD&A)'s letter in appendix III. DOD and the Navy also provided technical comments that we incorporated as appropriate.

We are sending copies of this report to the Secretary of Defense, Secretary of the Navy, interested congressional committees, and other interested parties. This report will also be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology

This review assesses the extent to which: (1) the Navy's shipbuilding programs deliver ships to the fleet that can be sustained as planned; (2) the Navy develops and uses effective key sustainment requirements during the acquisition process; (3) shipbuilding programs effectively identify and evaluate sustainment costs and risks in key acquisition planning documents; (4) Navy and Congressional leadership have insight into and effectively consider programs' sustainment planning and outcomes; and (5) the shipbuilding programs leverage Product Support Managers (PSMs) during the acquisition process.

The scope of our review included all shipbuilding programs for warships that had ships under construction or in development in the last 10 years, from fiscal years 2009 through 2019. We defined a shipbuilding program as under construction if any ship in the class was under construction in the last 10 years. We defined a shipbuilding program as in development if the Navy had awarded a development or design contract for the class in the last 10 years. We excluded military sealift command vessels and other Navy vessels with logistics missions from this review to help ensure that our resources matched the scope of our review. We assessed LHD 8 and CVN 77 as their own classes for the purposes of our review because the Navy considers them to be transitional designs between antecedent classes. These parameters resulted in 11 ship classes for inclusion in our review. We also selected several ships within these classes to serve as case studies for additional analysis. To select these ship-specific case studies, we reviewed all warships delivered from fiscal years 2007-2018 and selected up to four of the most recent hulls within this time frame from each class as case studies. We selected these ships for additional analysis because they are still relatively new, but the fleet has had experience maintaining them and could discuss sustainment challenges for those ships, if any. All ship classes and case study hulls in scope are listed in table 7.

Over the course of this audit, we interviewed officials from over 100 Navy organizations involved in designing, building, inspecting, testing, sustaining, and operating Navy ships to gain an understanding of the extent to which they are involved in the acquisition process and how they consider and manage sustainment risk during the acquisition process. These interviews also provided information on the nature and magnitude of sustainment issues being experienced by the fleet on recently delivered ships. These included approximately 30 interviews with organizations reporting to the ASN (RD&A), 31 interviews with organizations reporting to the Chief of Naval Operations, 29 interviews with organizations within Naval Sea Systems Command (NAVSEA), interviews with shipbuilders that have been awarded multiple Navy shipbuilding contracts, and three interviews with other Department of Defense (DOD) entities. We conducted these interviews in several locations throughout the United States, including Washington, D.C.; San Diego, CA; Norfolk, VA; Philadelphia, PA; and Mechanicsburg, PA. During visits to naval bases, we toured DDG 111, DDG 1000, LHD 8, LPD 22, LCS 3, LCS 4, CVN 77, and CVN 78.
To identify the extent to which ships can be sustained as planned, we interviewed shipbuilding program officials, in-service program officials, engineers, and fleet organizations, as well as analyzed ship and system performance data from many Navy organizations. Through this assessment, we identified and analyzed 150 significant class-wide issues across the shipbuilding programs in our scope that required more sustainment resources than planned. Such issues include systems or parts that exhibited poor design, construction, reliability, or planning; systems that were obsolete before or soon after ship delivery; and systems that could not be maintained by the fleet due to vendor or manufacturer proprietary information. We counted only issues that were class-wide, meaning they were related to ship design, equipment used across the class, or construction procedures, rather than hull-specific issues that could be caused by a unique accident or sailor error. We also did not assess issues related to fleet preference. For example, one ship's crew told us they did not prefer the location in which consoles for operating a certain system were installed, as they are typically installed in a different location on other ship classes. However, because the consoles were installed in the location specified in the design, we eliminated this issue from our analysis. We also eliminated issues if maintenance and other work on the affected system were accounted for during the acquisition process in the program's initial Operating and Support (O&S) cost estimate, rather than being an unexpected expense. For example, program offices can address expected obsolescence by budgeting for future system modernizations or purchasing quantities of spare parts that will last for the ship's entire life cycle.

To identify the costs associated with fixing problems that are the result of not being able to sustain ships as planned, we reviewed documentation from Navy organizations, budget justifications, and estimates provided by Navy officials. We were able to collect cost information for 30 percent of the problems reported to us by the fleet.

To assess the extent to which maintenance schedules are executed as planned, we analyzed Navy data on regularly scheduled, depot-level maintenance periods for surface ships—including those maintained at overseas homeports and in the United States. NAVSEA collects and manages data on these maintenance periods—known as Chief of Naval Operations maintenance availabilities—for surface ships, submarines, and aircraft carriers. We obtained the data on surface ship depot-level maintenance periods used by NAVSEA's Surface Maintenance Engineering Planning Program and the Commander, Navy Regional Maintenance Center. We reviewed the data we obtained for inconsistencies and errors and, when possible, obtained multiple documents that discussed the same problem for validation. We then discussed these problems with multiple officials across the Navy, including officials involved in ship maintenance and operation. From these efforts, we determined that these data are sufficiently reliable for the purposes of this report.

To assess the extent to which shipbuilding programs develop and use effective sustainment requirements during the acquisition process, we reviewed DOD requirements-setting policy and determined the extent to which shipbuilding programs set requirements in accordance with this policy.
In doing so, we assessed the extent to which DOD policy aligned with fleet experience and captured all factors that influence ship availability and analyzed any discrepancies. We then assessed the extent to which the Navy set sustainment requirements that contributed to well-informed decision-making throughout the acquisition process and in accordance with DOD policy and Navy guidance. To assess how accurately the Navy measures operational availability and materiel availability outcomes, we reviewed the Navy's operational availability measurements as reported in Selected Acquisition Reports to Congress, and compared these values to fleet reliability data and casualty reports, as well as information about the ships' performance obtained in interviews with Chief of Naval Operations and NAVSEA officials.

To assess the extent to which shipbuilding programs effectively identify and evaluate sustainment costs and risks in key acquisition planning documents, we evaluated the Navy's development and use of life-cycle cost estimates, Life-Cycle Sustainment Plans, and Independent Logistics Assessments. To evaluate the Navy's development of O&S cost estimates, we reviewed the life-cycle cost estimates created when programs were in development and compared them to updated estimates of O&S costs reported in Selected Acquisition Reports and Navy-provided data. We adjusted program estimates for quantity to more accurately capture cost growth between initial and current O&S estimates. Further, we adjusted the estimates for inflation to compare the O&S estimates as accurately as possible. For programs that experienced O&S cost growth, we interviewed program officials and Navy cost estimators to determine the process that the Navy's cost estimators used to build O&S cost estimates for shipbuilding programs and to discuss the reasons for cost growth. We also reviewed DOD cost estimation guidance to determine whether the cost estimators and programs complied with its requirements. While we have previously found issues with the reliability of the Navy's cost estimates, we believe that the cost estimates we reviewed are sufficiently reliable for the purposes of this report.

To evaluate the Navy's use of key sustainment planning documents, we reviewed LCSPs and ILAs for programs in our scope. We interviewed program, NAVSEA, fleet, and maintenance officials to determine the extent to which the LCSPs and ILAs for those programs were used to plan for sustainment, including whether these documents identified and mitigated sustainment risks. We compared the results of the ILAs to realized ship sustainment problems that we identified through interviews with shipbuilding program officials, in-service program officials, engineers, and fleet organizations, as well as to analyses of ship and system performance data from many Navy organizations.

To evaluate the extent to which Navy and Congressional leadership have insight into and consider sustainment planning and outcomes, we examined the Navy's Gate review process and Congress' Nunn-McCurdy breach process. To assess the Navy's use of the Gate review process, we reviewed Navy acquisition policy governing the reviews, as well as the briefings and meeting minutes from reviews for programs in our scope from fiscal years 2014 through 2018.
We compared the content of the briefings and meeting minutes to the acquisition policy to determine the extent to which required sustainment topics were briefed and discussed at each review and identified other mentions of sustainment issues that were outside the scope of the policy requirements. We also reviewed a recent revision to Navy acquisition policy that creates a Gate 7 review for sustainment and interviewed senior Navy officials to obtain their perspectives on how Gate 7 will affect ship sustainment. To assess Navy leadership's effectiveness in holding shipbuilding programs accountable for achieving sustainment outcomes using Acquisition Program Baselines (APBs), we reviewed the statute that established the APB as well as the findings of the Section 809 Panel, which recommended the creation of the sustainment program baseline (SPB) to supplement the APB. We also interviewed Navy officials involved in developing the SPB framework in accordance with the Panel's recommendations to obtain information on their work.

To determine what information Navy shipbuilding programs are required to provide to Congress about sustainment cost issues during the acquisition process, we reviewed the statutory requirements found in Nunn-McCurdy, a key Congressional oversight tool requiring information about baselines and cost estimate growth. We also assessed how the Nunn-McCurdy breach process influenced programs' management of acquisition and sustainment costs by interviewing Navy officials in the shipbuilding program offices, Office of the Chief of Naval Operations, and ASN (RD&A) offices, among others. Additionally, we reviewed O&S cost growth for programs in our scope and compared the percent increase to the 50 percent cost growth threshold used for Nunn-McCurdy acquisition cost breaches to determine if the sustainment cost growth was of a magnitude that Congress considers critical for acquisition costs.

To assess how shipbuilding programs leverage PSMs during the acquisition process, we reviewed DOD and Navy acquisition guidance governing the roles and responsibilities of program offices, program managers, and PSMs. We interviewed officials from shipbuilding programs in our scope about their priorities and responsibilities throughout the life cycle of a ship class. Further, we reviewed legislation creating the PSM role, DOD and Navy acquisition guidance regarding PSMs, and prior GAO reporting on PSMs, and interviewed PSMs from programs in our scope. We compared the key acquisition activities that legislation requires PSMs to participate in with the activities the PSMs reported they had participated in. We also compared DOD and Navy guidance on assigning PSMs to a program office to when program officials told us the PSMs needed to be assigned to be effective. We also reviewed findings that NAVSEA logistics officials reached about the authority and effectiveness of PSMs.

We conducted this performance audit from April 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: List of Fleet-Identified Ship Class Problems That Required More Sustainment Resources than Planned

GAO Response to ASN (RD&A)'s Letter

In addition to responding to our recommendations, the Assistant Secretary of the Navy for Research, Development, and Acquisition (ASN (RD&A)) provided observations on a number of issues related to the findings in our report. In his letter, the ASN (RD&A) agreed with our recommendations but sought to add context to our report's conclusion that the Navy can save billions by improving its consideration of sustainment throughout the acquisition process. Our response to the ASN (RD&A)'s letter is as follows.

In his letter, the ASN (RD&A) highlighted a number of changes that the Navy has instituted over the last 10 years to improve sustainment planning, including policies pertaining to life-cycle sustainment plans and independent logistics assessments, strengthening the role of the Product Support Managers, and establishing a new Gate 7 review focused on sustainment. We agree that the Navy's framework for including sustainment planning in the acquisition process offers promise, and we discuss these policies and processes in depth in this report. However, we found considerable weaknesses in the Navy's application of its own policies. Specifically, we found that the Navy did not provide a thorough assessment of the sustainment implications and risks in its LCSPs and ILAs, and Product Support Managers often are not assigned until well into a shipbuilding program, thereby limiting their influence on early acquisition decisions. While adding a Gate 7 offers benefits, it is not a substitute for discussions about sustainment concepts during earlier Gates, when key long-term decisions are being made. Our findings and recommendations demonstrate that DOD and the Navy should better use the policies and processes they currently have, including the Gate reviews, as well as Product Support Managers, LCSPs, and ILAs, to improve their understanding of how their acquisition decisions will affect sustainment.

In his letter, the ASN (RD&A) stated that many of the Navy's ship programs were designed with sustainment initiatives early in the acquisition process and, further, acknowledged that these initiatives did not achieve efficiencies as initially planned. We agree with both of these points, as we discuss in depth in this report. Whereas the ASN (RD&A) indicated in his letter that leadership, philosophical, and technology changes can lead to outcomes that were not originally envisioned, we found that these initiatives largely failed because, early in the acquisition process, the Navy did not sufficiently assess the costs or evaluate the risks associated with pursuing these initiatives. Absent such analysis, the Navy did not mitigate the risks that threatened their success.

The ASN (RD&A) highlighted several examples of sustainment initiatives considered early in the acquisition process for several ships. We believe that these examples (many of which we discuss in our report) serve to further highlight our findings. For example: The ASN (RD&A) discussed the use of a "full service contractor," meaning performance-based logistics, for LPD 17 class ships. According to the ASN (RD&A), while this approach had been successfully used for aircraft, the Navy had never applied it to ships.
As we state in our report, in attempting to use performance-based logistics for several shipbuilding programs, including LPD 17 class ships, the Navy did not consider the challenges in implementing this radical departure from traditional ship maintenance and did not consult the fleet on this change until after ships were delivered. The Navy's life-cycle sustainment plans and cost estimates for several shipbuilding programs did not articulate how much the performance-based logistics approach was likely to cost or what sustainment outcomes the Navy expected. For instance, for three out of the four programs that pursued performance-based logistics, the Navy learned that this approach was cost-prohibitive once it began seeking contractors to sustain its ships.

The ASN (RD&A) stated that the Navy's focus on Ford class sustainment has reduced sustainment costs and labor by an estimated $4 billion across the Ford class carriers compared to the previous class of carriers. However, it is too early to tell how much the Navy will save compared to the cost of its previous class of carriers because the Navy's fleet has yet to operate the new carrier. Further, while the O&S estimate for the Ford class may currently be lower than that of the previous carrier class, our report notes that the O&S costs for the Ford class carrier program are nearly $46 billion more than initially estimated.

Finally, in his letter, the ASN (RD&A) stated that the Navy plans to correct the vast majority of CVN 78 sustainment problems (including those we identified in this report) with ship construction funding—and these costs will not be passed on to the fleet. The $4.2 billion to address the 150 problems that we identified in this report already excludes all ship construction funding and also excludes corrections on CVN 78. Our calculation of $4.2 billion only includes the costs to correct the problems that are not funded using ship construction funding.

We agree with the ASN (RD&A)'s assertion that external factors can take place over the lengthy time needed to design and build a new ship that can lead to changes that were not initially envisioned. While the Navy cannot prepare for all of the unknowns, it can critically evaluate the sustainment assumptions that form the basis of its shipbuilding programs early in the acquisition process. Such analysis could significantly improve the Navy's ability to respond to changes over time and increase the likelihood of success. Further, critical analysis could also help decision makers determine when an initiative is too risky before implementing it on an entire shipbuilding program.

In his letter, the ASN (RD&A) also states that a careful reading of the early program documentation demonstrates that sustainment stakeholders were involved in the acquisition process. We reviewed available acquisition documents for 11 shipbuilding programs in the last 20 years and found that sustainment leadership, specifically the CNO and others in OPNAV, attended meetings and approved sustainment planning documents. However, we found that sustainment was rarely discussed during early acquisition meetings—even when the planned shipbuilding programs sought new sustainment initiatives.
Further, we reviewed thousands of Navy documents and met with over 100 Navy organizations and found that sustainment organizations across the Navy that are responsible for ship sustainment have a limited role in the acquisition process, even when having such a role could have likely prevented many of the problems we discuss in the report. As we state in our report, the quantity and breadth of the 150 problems we found—resulting in billions of dollars in unexpected costs, maintenance delays, and unreliable ships—suggest that existing policies and guidance have not ensured that new ships are reliable and can be sustained as planned. We are concerned that the ASN (RD&A)'s letter is an indication that the Navy's shipbuilding program offices will not take the necessary action to improve sustainment planning during the acquisition process. The ASN (RD&A)'s letter did not mention the recent establishment of a new Deputy Assistant Secretary for Sustainment that we discuss in our report. We believe that this office has the opportunity to contribute to improved outcomes by providing leadership to ensure that sustainment considerations are critically evaluated during the acquisition process. Absent such leadership, the Navy is at risk of continuing to provide ships to the fleet that are incomplete, unreliable, and cost more than expected to maintain.

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, the following staff members made key contributions to this report: Diana Moldafsky, Assistant Director; Laurier Fish, Analyst-in-Charge; Jillian Schofield; Sarah Evans; Lori Fields; Ann Halbert-Brooks; Joshua Garties; Laura Greifner; Tara Kumar; Shivarthn Maniam; Alexis Olson; Kya Palomaki; Anne Louise Taylor; and Tonya Woodbury. Carl Barden; Brian Bothwell; Anna Irvine; and Jean McSween also made contributions to this report.

Related GAO Products

Defense Acquisitions: Senior Leaders Should Emphasize Key Practices to Improve Weapon System Reliability. GAO-20-151. Washington, D.C.: January 14, 2020.

Guided Missile Frigate: Navy Has Taken Steps to Reduce Acquisition Risk, but Opportunities Exist to Improve Knowledge for Decision Makers. GAO-19-512. Washington, D.C.: August 9, 2019.

DOD Acquisition Reform: Leadership Attention Needed to Effectively Implement Changes to Acquisition Oversight. GAO-19-439. Washington, D.C.: June 5, 2019.

Columbia Class Submarine: Overly Optimistic Cost Estimate Will Likely Lead to Budget Increases. GAO-19-497. Washington, D.C.: April 8, 2019.

DOD Depot Workforce: Services Need to Assess the Effectiveness of Their Initiatives to Maintain Critical Skills. GAO-19-51. Washington, D.C.: December 14, 2018.

Navy and Marine Corps: Rebuilding Ship, Submarine, and Aviation Readiness Will Require Time and Sustained Management Attention. GAO-19-225T. Washington, D.C.: December 12, 2018.

Navy Readiness: Actions Needed to Address Costly Maintenance Delays Facing the Attack Submarine Fleet. GAO-19-229. Washington, D.C.: November 19, 2018.

Navy Readiness: Actions Needed to Address Costly Maintenance Delays Affecting the Attack Submarine Fleet. GAO-19-192C. Washington, D.C.: October 31, 2018.

Navy Shipbuilding: Past Performance Provides Valuable Lessons for Future Investments. GAO-18-238SP. Washington, D.C.: June 6, 2018.

Columbia Class Submarine: Immature Technologies Present Risks to Achieving Cost, Schedule, and Performance Goals. GAO-18-158. Washington, D.C.: December 21, 2017.
Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Affecting the Fleet. GAO-17-809T. Washington, D.C.: September 19, 2017.

Naval Shipyards: Actions Needed to Improve Poor Conditions That Affect Operations. GAO-17-548. Washington, D.C.: September 12, 2017.

Weapon Systems Management: Product Support Managers' Perspectives on Factors Critical to Influencing Sustainment-Related Decisions. GAO-17-744R. Washington, D.C.: September 12, 2017.

Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Facing the Fleet. GAO-17-798T. Washington, D.C.: September 7, 2017.

Navy Shipbuilding: Policy Changes Needed to Improve the Post-Delivery Process and Ship Quality. GAO-17-418. Washington, D.C.: July 13, 2017.

Navy Force Structure: Actions Needed to Ensure Proper Size and Composition of Ship Crews. GAO-17-413. Washington, D.C.: May 18, 2017.

Navy Ship Maintenance: Action Needed to Maximize New Contracting Strategy's Potential Benefits. GAO-17-54. Washington, D.C.: November 21, 2016.

Littoral Combat Ship: Need to Address Fundamental Weaknesses in LCS and Frigate Acquisition Strategies. GAO-16-356. Washington, D.C.: June 9, 2016.

Defense Inventory: Further Analysis and Enhanced Metrics Could Improve Service Supply and Depot Operations. GAO-16-450. Washington, D.C.: June 9, 2016.

Military Readiness: Progress and Challenges in Implementing the Navy's Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016.

Navy and Coast Guard Shipbuilding: Navy Should Reconsider Approach to Warranties for Correcting Construction Defects. GAO-16-71. Washington, D.C.: March 3, 2016.

Acquisition Reform: DOD Should Streamline Its Decision-Making Process for Weapon Systems to Reduce Inefficiencies. GAO-15-192. Washington, D.C.: February 24, 2015.

Ford-Class Aircraft Carrier: Congress Should Consider Revising Cost Cap Legislation to Include All Construction Costs. GAO-15-22. Washington, D.C.: November 20, 2014.

Weapon Systems Management: DOD Has Taken Steps to Implement Product Support Managers but Needs to Evaluate Their Effects. GAO-14-326. Washington, D.C.: April 29, 2014.

Navy Shipbuilding: Opportunities Exist to Improve Practices Affecting Quality. GAO-14-122. Washington, D.C.: November 19, 2013.

Trends in Nunn-McCurdy Cost Breaches for Major Defense Acquisition Programs. GAO-11-295R. Washington, D.C.: March 9, 2011.

Defense Management: DOD Needs Better Information and Guidance to More Effectively Manage and Reduce Operating and Support Costs of Major Weapons Systems. GAO-10-717. Washington, D.C.: July 20, 2010.

Best Practices: High Levels of Knowledge at Key Points Differentiate Commercial Shipbuilding from Navy Shipbuilding. GAO-09-322. Washington, D.C.: May 13, 2009.

Defense Logistics: Improved Analysis and Cost Data Needed to Evaluate the Cost-effectiveness of Performance Based Logistics. GAO-09-41. Washington, D.C.: December 19, 2008.

Defense Acquisitions: Cost to Deliver Zumwalt-Class Destroyers Likely to Exceed Budget. GAO-08-804. Washington, D.C.: July 31, 2008.

Defense Acquisitions: Realistic Business Cases Needed to Execute Navy Shipbuilding Programs. GAO-07-943T. Washington, D.C.: July 24, 2007.
Why GAO Did This Study

The U.S. Navy requested over $40 billion in each of the last 3 years to build, operate, and sustain its fleet. Acquisition decisions made as ships are developed and built can have a long-term effect on sustainment costs and ship quality. GAO was asked to assess the extent to which DOD considers and plans for sustainment when acquiring weapons. Among other objectives, this report assesses the extent to which: (1) Navy ship programs deliver ships to the fleet that can be sustained as planned; (2) the Navy develops and uses effective sustainment requirements during acquisition; (3) ship programs are effectively identifying and evaluating sustainment risks in planning documents; and (4) leadership considers programs' sustainment planning and outcomes. GAO reviewed DOD and Navy acquisition policy and guidance, evaluated acquisition plans, collected sustainment metrics, and conducted interviews with more than 100 organizations, including program office and fleet units. GAO assessed 11 classes of shipbuilding programs (all nine that delivered warships during the last 10 years, as well as two newer classes of ships).

What GAO Found

The Navy has delivered warships—such as aircraft carriers, destroyers, and submarines—to its fleet over the past 10 years that require more effort to sustain than initially planned. In assessing how these classes of ships are sustained, GAO found 150 examples of class-wide problems, such as unreliable ship systems. These problems stemmed from shipbuilding programs not identifying, evaluating, or mitigating sustainment risks during the acquisition process. GAO found that it would cost the Navy $4.2 billion to correct just the 30 percent of these problems for which the Navy had data on estimated repair costs.

GAO found that shipbuilding programs' requirements for sustainment reflect weaknesses with how Department of Defense (DOD) policy defines these requirements for ships. Sustainment requirements should influence acquisition decisions that determine the sustainability of a ship class, such as the ship's design. However, the Navy's sustainment requirements do not provide key information on how reliable and maintainable mission-critical systems should be and, therefore, cannot adequately inform acquisition decisions.

GAO also found that shipbuilding programs did not consistently address sustainment risks in acquisition planning documents. For example, the operating and support costs included in cost estimates did not capture all sustainment risks that could affect costs or evaluate sensitivity to changing sustainment assumptions, contrary to DOD and Navy cost estimating guidance. As a result, for six shipbuilding programs whose costs GAO could assess, the Navy had underestimated sustainment costs by $130 billion.

The Navy has begun making some changes to its acquisition oversight process, such as developing sustainment program baselines and adding a sustainment oversight review. While positive, these changes focus on considering sustainment after key decisions are made early in the acquisition process. GAO also found that DOD is not required to provide detailed information about shipbuilding programs' sustainment cost growth to Congress. As such, Congress does not have full insight into the extent of shipbuilding programs' cost growth and why such growth occurred.

What GAO Recommends

GAO is making one matter for Congressional consideration to enhance oversight and 11 recommendations to help DOD and the Navy improve ship sustainment.
DOD concurred with 8 and partially concurred with 3 recommendations but did not describe the specific actions it plans to take, which GAO believes are necessary to improve sustainment outcomes.
Background

Medicare Hospice Benefit Eligibility and Coverage

To be eligible for the Medicare hospice benefit, an individual must be eligible for Medicare Part A (which covers inpatient care) and be medically certified as having a terminal illness with a life expectancy of 6 months or less if the illness runs its normal course. For individuals to receive care from a Medicare-approved hospice program, they must elect the hospice benefit by signing a statement indicating they are waiving their rights to Medicare payment for services related to curative treatment of their terminal illness.

When enrolling in Medicare hospice care, beneficiaries can receive several different types of services in various settings. Most hospice beneficiaries receive hospice care in their own home, but they can also receive care in other settings, such as a nursing home, assisted living facility, hospice facility, or hospital. The Medicare hospice benefit covers a variety of services and supplies for the palliation and management of the terminal illness, including physician and nursing services; medical equipment and supplies, including drugs for pain and symptom management; hospice aide and homemaker services; physical and occupational therapy; and spiritual counseling and grief and loss counseling. A hospice interdisciplinary team (in collaboration with the beneficiary's primary care provider, if any) works with the beneficiary, family, and caregiver(s) to develop a plan of care that addresses the physical, psychosocial, spiritual, and emotional needs of the beneficiary, family members, and caregiver(s). The hospice provider must make all services under the Medicare hospice benefit available to beneficiaries as needed, 24 hours a day, 7 days a week. Although hospice care is designed for beneficiaries with a life expectancy of 6 months or less, beneficiaries can receive hospice care beyond 6 months if they continue to meet hospice eligibility requirements. In addition, beneficiaries can disenroll from the hospice benefit at any time and re-enroll in hospice care at a later time.

Medicare Hospice Payment

CMS pays hospices based on the level of hospice care provided to beneficiaries on a given day. There are four levels of hospice care, which are paid at either a daily rate or an hourly rate depending on the location and intensity of services provided. (See table 1.) Each care level has a payment rate that is adjusted for geographic differences in wages, and CMS updates these payment rates annually. The most common level of care is called routine home care (accounting for 98 percent of all Medicare hospice care in 2017), and hospices receive the routine home care payment daily rate regardless of whether beneficiaries receive any services on a given day. In addition, CMS imposes two payment limitations (referred to as caps) on Medicare payment for hospice services—one that limits a hospice's number of inpatient days and one that limits a hospice's total Medicare payments in a given year.

Hospice Quality Reporting Program

In response to requirements in the Patient Protection and Affordable Care Act, CMS established the Hospice Quality Reporting Program, which currently includes two sets of data to assess the quality of hospice providers' care; CMS publishes these data on its Hospice Compare website. Medicare hospice providers are required to submit these data to CMS for all patients regardless of payer source (e.g., Medicare, Medicaid, or private insurance).
The two data sets are the following:

Provider-reported quality measure data. This set of data (which CMS refers to as the Hospice Item Set) is used to calculate a hospice provider's performance on quality measures, which include seven measures that reflect the percentage of all hospice patients' stays where the provider completed various key care processes, such as screening patients for pain and shortness of breath. CMS also recently implemented an eighth measure, called the composite measure, which calculates the percentage of patients' hospice stays in which the hospice provider completed all seven care process quality measures.

Caregivers' experience survey data. This set of data (referred to as the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Hospice Survey) is a national survey that captures, from the caregiver's (family member or friend) perspective, the patient's experience with hospice care. The survey includes questions that are used to calculate eight quality measures based on survey responses. For example, one measure scores how well the hospice communicated with the patient's family.

CMS's Hospice Oversight

CMS oversees the quality of Medicare hospice care primarily through inspections—referred to as surveys—which are conducted by state survey agencies contracted by CMS or CMS-approved national private accrediting organizations. These surveys are used to determine whether the hospice is in compliance with federal health and safety requirements detailed in Medicare's hospice conditions of participation. A hospice must be in compliance with these conditions to participate in the Medicare program. Medicare's hospice conditions of participation include requirements related to patient care and organizational environment (e.g., the hospice must organize, manage, and administer its resources to provide necessary care). Each condition of participation is composed of standards associated with the condition, and a standard may have associated sub-components. For example, the "patient's rights" condition includes standards such as "notice of rights and responsibilities" and "rights of the patient." The "rights of the patient" standard includes sub-components, such as the patient's right to receive effective pain management and symptom control.

There are three main types of survey inspections: an initial certification survey when a provider first seeks to participate in Medicare; a re-certification survey to ensure ongoing compliance; and surveys to investigate complaints or incidents related to federal requirements. If a hospice is found to be out of compliance with hospice health and safety requirements during a survey, CMS cites the provider for non-compliance—referred to as a deficiency. These deficiencies are categorized at one of two levels:

Condition-level deficiencies. These deficiencies are the most serious. A condition-level deficiency is one in which the provider violates one or more standards and the deficiencies are of such character as to substantially limit the provider's capacity to furnish adequate care or which adversely affect the health and safety of patients. When a hospice provider is cited for a condition-level deficiency, CMS places the provider on a 90-day termination track (or 23 days if the situation is determined to pose "immediate jeopardy" to beneficiaries) within which the provider must correct the issue(s) and the correction must be confirmed via a follow-up survey visit.
If this does not happen within 90 days of the survey date, CMS terminates the hospice's Medicare provider agreement; termination is an enforcement remedy CMS uses to ensure compliance.

Standard-level deficiencies. These deficiencies are less serious. A hospice provider that has a standard-level deficiency can be certified or re-certified only if the provider has submitted an acceptable plan of correction for achieving compliance within a reasonable period of time. According to CMS officials, standard-level deficiencies must also have follow-up to ensure correction, although the type of follow-up depends on the nature of the deficiency. If a standard-level deficiency is very minor and does not place any beneficiaries at risk, the follow-up may be handled through email or telephone instead of a follow-up visit. According to CMS officials, if a provider fails to submit or implement an acceptable plan of correction within a reasonable period of time acceptable to CMS, the provider is placed on the 90-day termination track noted above.

Despite Treating a Similar Number of Beneficiaries as Non-profits, For-profit Providers Received Larger Share of Hospice Payments, Reflecting Differences in Lengths of Stay

For-profit and non-profit hospices served roughly the same percentage of the approximately 1.5 million Medicare hospice beneficiaries in 2017, even though for-profit hospices make up about two-thirds of all hospice providers. According to our analysis of CMS data, for-profit providers treated about 50 percent of those beneficiaries and non-profit providers treated about 48 percent in 2017. This distribution has been about the same in each year from 2014 through 2017. For example, for these years, the percentages of beneficiaries treated by for-profit providers ranged from 48.7 percent to 50.2 percent (see additional details in app. I, table 7).

When comparing the beneficiary populations treated by for-profit and non-profit hospice providers, we found that they generally had similar demographic characteristics. We identified two primary exceptions to this general finding: (1) non-profit hospices had slightly higher percentages of white beneficiaries, and (2) for-profit hospices had a greater proportion of patients enrolled in both Medicare and Medicaid. See table 2 (for more detailed data, see app. I, table 8).

While beneficiary demographic characteristics were generally similar, we found differences in beneficiary diagnoses between for-profit and non-profit hospices. Specifically, for-profit hospices had, on average, a greater percentage of patients with non-cancer diagnoses—77 percent of for-profit hospice beneficiaries compared to 69 percent of non-profit hospice beneficiaries in 2017.

Our analysis found that for-profit providers received a higher proportion of Medicare hospice payments than did non-profit providers. For 2017, about $10.4 billion (58 percent) of the $17.9 billion in Medicare payments were made to for-profit providers and $7.2 billion (40 percent) of payments were made to non-profit providers. Our analysis found this same pattern in each year from 2014 through 2017. One reason for-profit hospices received a higher portion of Medicare hospice payments for the period we reviewed is that (as previously noted) they had, on average, a greater percentage of beneficiaries with non-cancer diagnoses, and we found non-cancer beneficiaries, on average, had longer lengths of stay. (See table 3.)
Since hospices are typically paid a set amount per day of a hospice stay, longer stays generally result in higher payments. Beneficiaries with non-cancer diagnoses can often have longer lengths of stay compared to other beneficiaries because the progression of these diseases (such as dementia) can be harder to predict; this may result in beneficiaries being enrolled in hospice earlier than appropriate (meaning that their projected life expectancy may actually be longer than 6 months). For instance, one study noted that dementia beneficiaries' decline may include periods of stabilization where their health stays the same or even improves, which differs from a constant and predictable decline in most beneficiaries with terminal cancer.

There are likely other factors beyond a greater percentage of beneficiaries with non-cancer diagnoses that contributed to for-profit providers' higher portion of Medicare hospice payments. We found that for-profit providers had, on average, longer lengths of stay for both cancer and non-cancer beneficiaries compared to non-profit providers. (See table 3.) For example, non-cancer beneficiaries at for-profit providers had an average length of stay of 108 days, while non-cancer beneficiaries at non-profit providers had an average length of stay of 67 days. This suggests other factors besides beneficiary diagnosis contributed to the longer average lengths of stay for for-profit providers. (For more detailed beneficiary diagnosis data from 2014 to 2017, see app. I, table 9.)

For-profit and Non-profit Providers Scored Similarly on CMS's Quality Measures, though Performance Varied on Other Indicators of Quality

For-profit and non-profit hospice providers had similar scores on CMS's current quality measures (provider-reported measures and caregivers' experience measures assessed through a survey of the beneficiary's caregiver). CMS uses these measures to assess the quality of care provided by hospices. In addition to CMS's current quality measures, researchers we interviewed noted that there are other care indicators that can also be used to assess the quality of care provided by hospices. According to CMS documents, CMS is working to account for other care indicators by developing additional quality measures. We assessed hospice providers' performance on these indicators and found that performance varied between for-profit and non-profit hospices.

For-profit and Non-profit Hospices Had Similar Scores on CMS's Current Quality Measures, Though For-Profits Were More Often Among Subset with Lowest Scores on Certain Measures

Our review of CMS data found that for 2017, both for-profit and non-profit hospices, on average, had similar scores on the seven quality measures that are provider-reported and that CMS currently uses to assess the quality of hospice care. (See table 4.) For six of the seven measures, for-profit and non-profit hospices had average scores of 94.7 percent or better. We also found that for-profits and non-profits had similar scores (83.6 percent and 87.0 percent, respectively) on a new composite measure that CMS implemented in 2017. This composite measure was designed to provide a more comprehensive evaluation of the hospice's care by determining whether the hospice provider completed all of the applicable parts of hospice care that are measured by the seven quality measures.
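To make the composite measure concrete, the sketch below shows one way such an all-processes-completed score could be computed from stay-level records. This is a minimal Python illustration, not CMS's actual Hospice Item Set specification: the process names and record fields are hypothetical stand-ins, and the real measure applies detailed inclusion and exclusion rules.

```python
# Illustrative sketch only; not CMS's Hospice Item Set logic.
# Each "stay" is modeled as a dict of hypothetical completion flags,
# one per measured care process.

CARE_PROCESSES = [
    "treatment_preferences", "beliefs_values", "pain_screening",
    "pain_assessment", "dyspnea_screening", "dyspnea_treatment",
    "opioid_bowel_regimen",
]

def composite_score(stays):
    """Percentage of stays in which all seven care processes were completed."""
    if not stays:
        return None
    complete = sum(
        1 for stay in stays
        if all(stay.get(process, False) for process in CARE_PROCESSES)
    )
    return 100.0 * complete / len(stays)

# Example: a provider completing every process for 2 of 3 stays
stays = [
    {p: True for p in CARE_PROCESSES},
    {p: True for p in CARE_PROCESSES},
    {**{p: True for p in CARE_PROCESSES}, "pain_assessment": False},
]
print(composite_score(stays))  # about 66.7, below the 2017 for-profit average
```

Under this illustrative calculation, a provider with a score below 50 percent completed every measured care process for half or fewer of its stays, which is the threshold used in the subset analysis that follows.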
When looking at the subset of providers with the lowest scores on the composite quality measure, we found that for-profit hospices were more often in this subset, even when accounting for differences in the number of for-profit and non-profit providers:

For the composite measure, there were 329 providers (261 for-profits and 68 non-profits) in the 10th percentile of scores or lower, meaning that the providers had a composite measure score of 64.3 percent or lower. Among these providers, we found that for-profits were more likely to be within this grouping, with about 12 percent of all for-profit providers having scores in the 10th percentile or lower compared to 6 percent of all non-profit providers.

We also assessed the subset of these 329 providers that had composite measure scores below 50 percent, meaning that they only completed all of CMS's seven quality measures for half or fewer of the beneficiaries they treated. We found that 130 providers (112 for-profits and 18 non-profits) had scores below 50 percent on this measure. These providers treated over 24,000 beneficiaries.

In addition to the provider-reported quality measures, CMS also uses the caregivers' experience survey to assess quality of care. We analyzed CMS data on caregivers' experience surveys for 2016 to 2017 and found that caregivers' reported experience with hospice care was generally similar for both for-profits and non-profits. The survey assesses care in a number of areas, such as communication, training, and help with pain and symptoms. See table 5 (for more detailed data, see app. I, table 10).

Although for-profit and non-profit providers' average scores on the caregivers' experience survey were generally similar, we found that for-profit providers were more often among those providers with the lowest scores on certain caregivers' experience measures than were non-profit providers. For example, on the rating measure that asks caregivers to give an overall rating of the hospice, 290 providers (248 for-profit providers and 42 non-profits) had scores at the 10th percentile or lower, meaning that their score was 72 percent or lower. For this measure, lower scores mean that fewer caregivers provided a rating of 9 or 10 on a 10-point scale, with 10 being the highest possible rating. We found that 15 percent of for-profit providers were among providers with scores in the 10th percentile or lower compared to 4 percent of non-profit providers.

Performance Varied between For-profit and Non-profit Hospices for Other Indicators of Quality Identified by Researchers

We used Medicare claims data to calculate certain measures researchers told us could be indicators of quality of care in hospice settings. (As noted previously, CMS is working to account for other care indicators by developing additional quality measures.) These indicators fall into two categories: (1) the number of beneficiaries discharged prior to death (often referred to as the live discharge rate) and (2) provider visits to provide medical and emotional support to the beneficiary and caregivers near the end of a beneficiary's life. Researchers told us that such measures can fill gaps in assessing the quality of care provided by hospices, and show greater variability across hospices than CMS's current quality measures; as previously noted, our data analysis found that providers' quality measure scores were generally very high.
Live Discharges

According to researchers we interviewed and studies we reviewed, some discharges from hospice care prior to death should be expected because, for example, patients change their mind about receiving hospice care or their condition improves and they are no longer eligible for hospice care. However, a high live discharge rate could in some cases be an indicator of poor quality of care provided or of provider misuse of the benefit, in that providers may be enrolling beneficiaries who are not eligible for hospice. See text box.

Live Discharges
In some cases, a beneficiary may be discharged alive from hospice care prior to their death. This could be for reasons unrelated to the quality of care provided. For example, beneficiaries may reconsider their decision to start palliative treatment, and therefore leave hospice care to re-start curative treatments. In other instances, a live discharge may indicate quality of care issues. For example, a beneficiary may be unhappy with the quality of care she is receiving from her hospice provider and therefore she leaves that hospice provider to seek treatment from a different hospice provider. Given the various reasons for live discharges, we expect that hospices will have some live discharges, but interpret a high rate of live discharges as potentially suggestive of quality of care issues.

We found that for-profits had higher rates of live discharges than non-profits, with 22.1 percent of beneficiaries served by for-profits being discharged alive compared to 12.0 percent of beneficiaries served by non-profits in 2017. This disparity remained true after accounting for whether beneficiaries had a cancer or non-cancer diagnosis. (See table 6; for more detailed data from 2014 to 2017, see app. I, table 11.) We found that 472 hospice providers (462 for-profit and 10 non-profit providers) had live discharge rates of 50 percent or more in 2017, meaning that half or more of their beneficiaries were discharged from hospice care prior to death. These providers cared for about 6 percent of all beneficiaries discharged alive in 2017.

According to researchers we interviewed and one of the studies we reviewed, provider visits near the end of a hospice beneficiary's life are critical to providing quality care, including for emotional support and for training the beneficiary's family members or other caregivers on the signs and process of dying. Assessing the number of visits near the end of life may provide insight into the quality of a hospice provider's care; fewer visits in that time period could indicate poor quality of hospice care. CMS is currently developing a quality measure that assesses the frequency of provider visits at the beneficiary's end of life. When analyzing CMS claims data, we found that for-profit and non-profit hospices, on average, provided a similar number of provider visits (such as nurse, doctor, social worker, or hospice aide visits) within the last 7 days of a beneficiary's life. Specifically, in 2017, for-profits and non-profits both averaged about 6 provider visits within the last 7 days of life.
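For illustration, the sketch below shows how a provider-level live discharge rate of the kind described above could be computed from simplified discharge records. It is a minimal Python sketch under assumed field names (provider_id, status), not the claims-based methodology used for this analysis, which relies on detailed Medicare discharge status codes.

```python
# Illustrative sketch only, assuming simplified per-discharge records.
from collections import defaultdict

def live_discharge_rates(discharges):
    """Per-provider percentage of hospice discharges in which the
    beneficiary was discharged alive rather than deceased."""
    totals = defaultdict(lambda: [0, 0])  # provider -> [live, all]
    for d in discharges:
        counts = totals[d["provider_id"]]
        counts[1] += 1
        if d["status"] == "alive":
            counts[0] += 1
    return {p: 100.0 * live / total for p, (live, total) in totals.items()}

discharges = [
    {"provider_id": "A", "status": "alive"},
    {"provider_id": "A", "status": "deceased"},
    {"provider_id": "A", "status": "deceased"},
    {"provider_id": "B", "status": "alive"},
    {"provider_id": "B", "status": "alive"},
]
print(live_discharge_rates(discharges))
# {'A': 33.3..., 'B': 100.0} -- provider B would far exceed the
# 50 percent level flagged in the analysis above
```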
We also looked at the average percentage of hospice beneficiaries who received different types of provider visits either within the last 3 days of life or the last 7 days of life (consistent with CMS's new quality measure) and found performance varied among for-profit and non-profit providers:

77 percent of for-profit beneficiaries and 85 percent of non-profit beneficiaries received at least one visit from registered nurses, physicians, or nurse practitioners in the last 3 days of life.

68 percent of for-profit beneficiaries and 57 percent of non-profit beneficiaries received at least two visits from medical social workers, chaplains or spiritual counselors, licensed practical nurses, or hospice aides in the last 7 days of life.

We also found more for-profits than non-profits among a subset of hospices that did not provide any visits during the last 3 or 7 days of life in 2017. Specifically, our analysis shows that 83 hospice providers (80 for-profits and 3 non-profits) did not provide any visits in 2017 from registered nurses, physicians, or nurse practitioners in the beneficiaries' last 3 days of life. This means that all of the 800 hospice beneficiaries treated by these providers did not receive these types of provider visits at the end of life. In addition, we found that 58 providers (55 for-profits and 3 non-profits) did not provide any visits from medical social workers, chaplains or spiritual counselors, licensed practical nurses, or hospice aides in the last 7 days of life in 2017; all of the 613 beneficiaries treated by these providers did not receive these specific provider visits at the end of life.

Opportunities Exist to Strengthen CMS Oversight through Increased Use of Information in Survey Process and Expanded Statutory Authority for Enforcement

CMS Could Strengthen Oversight of Hospice Providers by Using Additional Information to Enhance the Survey Process

In our review of CMS's oversight of hospice providers, we found that CMS does not instruct surveyors to review, before surveying hospice providers, the providers' performance on CMS quality measures (those based on provider-reported quality data or caregivers' experience surveys) or other indicators of quality that could identify potential areas of concern. CMS issues guidance that surveyors use when conducting surveys to assess a hospice provider's compliance with federal health and safety requirements. According to this guidance, surveyors are to prepare for hospice surveys by reviewing documents of record including licensure records, previous survey findings and complaints, media reports, and other publicly available information about the provider. A representative for an association representing state surveyors confirmed that this is the type of information surveyors typically review prior to a hospice provider survey. However, according to CMS officials and the surveyor association, CMS does not instruct surveyors to review other information, such as providers' performance on CMS quality measures or other indicators of quality, that surveyors could use to identify potential areas of concern to focus on more closely during a survey. For example, it might be helpful for surveyors to know if a hospice provided no visits during beneficiaries' last days of life. According to CMS officials, CMS does not use such information to target hospices for additional survey review.
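As a purely hypothetical illustration of what using such information could look like, the sketch below flags providers for closer review using thresholds drawn from the findings above. The heuristic, field names, and cutoffs are invented for illustration; they are not CMS policy, guidance, or methodology.

```python
# Hypothetical risk-flagging heuristic; thresholds echo figures reported
# above but the rule itself is invented for illustration only.

def flag_for_closer_review(provider):
    """Return reasons a surveyor might examine a provider more closely."""
    reasons = []
    if provider["composite_score"] <= 64.3:        # 10th-percentile cutoff in 2017 data
        reasons.append("composite quality score in lowest decile")
    if provider["live_discharge_rate"] >= 50.0:    # half or more discharged alive
        reasons.append("very high live discharge rate")
    if provider["visits_last_3_days"] == 0:
        reasons.append("no skilled visits in beneficiaries' last 3 days of life")
    return reasons

provider = {
    "composite_score": 48.0,
    "live_discharge_rate": 55.0,
    "visits_last_3_days": 2,
}
print(flag_for_closer_review(provider))
# ['composite quality score in lowest decile', 'very high live discharge rate']
```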
Several studies we reviewed and researchers we interviewed noted that CMS could strengthen its survey process by incorporating additional information, such as information on how hospice providers perform on CMS quality measures or other potential indicators of quality. For example, one study suggested that hospices with poor reported beneficiary experiences based on caregivers' experience survey data could be identified for more frequent surveys and that such information could be used to identify care processes for closer review during surveys. Another study we reviewed concluded that claims-based measures could help guide surveyors to more closely review key processes of care to ensure Medicare beneficiaries receive high quality hospice care. In addition, a researcher we interviewed suggested that when claims data show no visits during the last 2 days of life, the survey team could interview the deceased patients' families to see if there was any harm done by the lack of visits at the end of life. Further, in July 2019, the Department of Health and Human Services' Office of Inspector General (HHS OIG) reiterated recommendations from prior HHS OIG work that CMS analyze claims and deficiency data to identify specific patterns that could indicate potential issues—such as hospices that infrequently provide physician services—and that CMS instruct surveyors to pay special attention to these areas during surveys.

In contrast to hospice surveys, home health agency surveyors utilize information in addition to survey findings and complaints to identify potential areas of concern. According to CMS officials and the surveyor association we interviewed, home health surveyors review certain CMS quality measures to focus the survey on specific areas of concern or to identify beneficiaries who experienced potential care issues for a more detailed survey review.

According to CMS officials, the agency is considering making changes to the survey process but has not yet made any decisions. CMS officials told us they last updated the survey process in 2010, and since then, they have implemented quality measures for hospice providers (provider-reported measures in 2014 and caregivers' experience survey measures in 2015). They also said that CMS is "currently monitoring the implementation of these programs and considering the potential benefit of incorporating review of the data into the survey process."

According to federal standards of internal control, agencies must identify, analyze, and respond to risks related to achieving objectives. By not utilizing additional information in the survey process that would allow it to identify providers and areas where the risk of noncompliance is greatest, CMS is missing an opportunity to strengthen its ability to identify and respond to such risks and ensure the quality of care that hospice beneficiaries receive.

CMS Has Limited Enforcement Remedies Due to Lack of Statutory Authority, Which Could Restrict Its Ability to Ensure Compliance

CMS is limited to one hospice enforcement remedy—termination of the Medicare provider agreement. By law, to qualify for payment under the Medicare program, hospice providers must meet the program's conditions of participation. If the agency finds a provider is not complying with the program's conditions of participation, CMS may terminate the provider's participation in the program.
In the Medicare program, termination of a provider is the most significant action CMS can take to address provider non-compliance. As a result, CMS generally only terminates a hospice provider on the basis of a deficiency when the provider fails to correct a condition-level deficiency (the most severe) within the required time frame. Our review of CMS hospice survey data found termination happens rarely. Specifically, 19 hospices were involuntarily terminated from 2014 through 2017. This is less than half of 1 percent of the total number of hospices operating during this time period.

In contrast to hospice care, where CMS's enforcement authority is limited to termination, Congress has given the agency authority to impose additional enforcement remedies for other provider types. Additional statutory and regulatory penalties for home health agencies and nursing homes include civil money penalties, denial of payment for all new Medicare and Medicaid admissions, and imposition of training requirements for situations where it is determined that education will likely lead to provider compliance (referred to as directed in-service training). Such remedies, if available, could enable the agency to more effectively address a broader range of hospice risks. For example, additional remedies could be used in situations that warrant a remedy other than termination or that could further incentivize providers to comply with health and safety requirements or improve their quality of care. According to federal standards of internal control, agencies must identify, analyze, and respond to risks related to achieving objectives. Because CMS lacks the authority to establish such additional remedies, the agency's ability to respond to risks and ensure quality of care for beneficiaries is limited.

The HHS OIG and one researcher we interviewed have recommended CMS seek statutory authority to establish additional enforcement remedies for hospices, explaining that less severe remedies could help address performance problems that may not merit termination and incentivize agencies to improve quality of care. CMS agreed with this recommendation in March 2016 and stated it would consider submitting a request that would seek legislative authority to establish additional enforcement remedies through the President's annual budget proposal to Congress. In a July 2018 HHS OIG report, the HHS OIG again recommended CMS seek this authority. CMS neither agreed nor disagreed with this recommendation and stated again that it would consider this recommendation when developing the agency's proposals for the President's annual budget. However, a request for such legislative authority was not included in the President's fiscal year 2017, 2018, or 2019 budget proposals. The HHS OIG reiterated this recommendation in two July 2019 reports.

Conclusions

Since 2000, the number of Medicare hospice beneficiaries has almost tripled to nearly 1.5 million in fiscal year 2017. In addition, the number of hospice providers has doubled. Given this growth, it is imperative that CMS's oversight of the quality of Medicare hospice care keeps pace with changes so that the agency can ensure the health and safety of these terminally ill beneficiaries. While recent steps have been taken to strengthen CMS's hospice quality oversight, including the requirement that hospices be re-certified every 3 years and CMS's ongoing development of new quality measures, we identified additional opportunities to strengthen CMS's oversight.
Specifically, our review found that CMS could strengthen oversight by using additional information—based on currently available data—to identify potential quality issues that could focus and enhance the survey process. We also found that CMS's lack of authority to establish additional enforcement remedies before termination, which CMS rarely uses, limits its ability to ensure hospice providers' compliance with health and safety requirements and quality of care for beneficiaries.

Matter for Congressional Consideration

Congress should consider giving CMS authority to establish additional enforcement remedies for hospices that do not meet federal health and safety requirements. (Matter for Consideration 1)

Recommendation for Executive Action

The Administrator of CMS should incorporate the use of additional information, such as quality measures or other information that could identify potential quality of care issues, into its survey process for overseeing hospice providers. (Recommendation 1)

Agency Comments

We provided a draft of this report to HHS for review and comment. HHS provided written comments, which are reprinted in appendix II. HHS concurred with our recommendation. HHS stated that it recognizes that meaningful quality measures can also serve as key indicators of provider quality and that it will look into ways to incorporate the use of these data into the hospice survey process. In its comment letter, HHS also noted the importance of monitoring patient safety and quality of care to HHS's hospice oversight efforts, and the agency provided an overview of the key efforts it has in place to perform such monitoring. For example, in addition to survey and quality measure requirements, HHS requires hospices to implement a data-driven quality assessment and performance improvement program, intended to have hospices take a proactive approach to improving their performance using objective data. HHS also provided technical comments, which we incorporated into the report as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the CMS Administrator, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Additional Data on Medicare Hospice Beneficiaries, Providers, and Payments

[Table: average scores of for-profit, non-profit, and government-owned hospice providers on CMS caregivers' experience survey measures: hospice team treated patient with respect; amount of emotional and religious support provided by the hospice team; the patient got the help they needed for pain and symptoms; caregiver received the training they needed; caregiver rating of the hospice agency on a 10-point scale with 10 being the best hospice care possible; and caregiver would recommend the hospice. Scores are presented for each survey measure within three categories (top scores, middle scores, and bottom scores); the underlying values are not recoverable from the source text.]

These data were not available for all hospice providers; our analysis of CMS caregivers' experience survey quality measure data was for the 2,832 hospice providers that had data for the caregivers' survey. In general, the top-box scores represent the percentage of caregivers that selected the response of "always" for the particular measure. For the rating measure, the top-box score represents caregivers that rated the hospice provider as a 9 or 10 on a 10-point scale with 10 being the highest rating. For the recommendation measure, the top-box score represents caregivers that responded that they "would definitely recommend the hospice provider." In general, the middle-box scores represent the percentage of caregivers that selected the response of "usually" for the particular measure. For the rating measure, the middle-box score represents caregivers that rated the hospice provider as a 7 or 8 on a 10-point scale with 10 being the highest rating. For the recommendation measure, the middle-box score represents caregivers that responded that they "would probably recommend the hospice provider."

[Table: percentage of beneficiaries discharged prior to death.]

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Gregory Giusto, Assistant Director; Christie Enders, Analyst-in-Charge; Todd Anderson, Leia Dickerson, Rob Dougherty, Krister Friday, Barbara Hansen, Jennifer Whitworth, and Chris Wickham made key contributions to this report.
Why GAO Did This Study

Since 2000, there has been substantial growth in Medicare payments for hospice services and the number of Medicare beneficiaries using hospice. This growth has been accompanied by an increase in the number of providers (primarily an increase in for-profit providers), reaching approximately 4,500 providers by 2017. GAO was asked to review aspects of Medicare's hospice program. This report, among other things, (1) compares quality scores and other potential indicators of quality for for-profit and non-profit hospices; and (2) examines opportunities for strengthening CMS's oversight of hospice providers.

GAO analyzed CMS data on hospice care for 2014 through 2017—the latest years for which full-year data were available at the time of GAO's analysis—and reviewed research on hospice care. GAO interviewed CMS officials, researchers, provider associations, a survey agency association, and a non-generalizable sample of hospice providers selected in part through referrals from other stakeholders. GAO also reviewed relevant statutes, regulations, documents, and enforcement data.

What GAO Found

Medicare's hospice benefit provides palliative care to beneficiaries with terminal illnesses and a life expectancy of 6 months or less. GAO's review of 2017 data from the Centers for Medicare & Medicaid Services (CMS) found that for-profit and non-profit hospices had, on average, similar scores on CMS's current quality measures that indicate hospice performance in areas such as pain assessment and discussion of beneficiary treatment preferences. However, for-profits were more often among the subset of providers with the lowest scores on certain quality measures GAO reviewed.

In addition to analyzing providers' scores on CMS quality measures, GAO analyzed provider performance on other indicators, identified by researchers, that could signal quality issues and found that performance varied among for-profit and non-profit hospices. One of the other quality indicators GAO analyzed was the rate of beneficiaries discharged from hospice prior to death, which in some cases could indicate dissatisfaction with care leading to the beneficiary's decision to leave the hospice provider. In addition, GAO examined the number of provider visits to give medical and emotional support within the last few days of a beneficiary's life. With regard to these indicators, for 2017, GAO found the following, among other things:

472 hospice providers (462 for-profits and 10 non-profits) had a high rate of discharging beneficiaries prior to death (50 percent or more were discharged). According to research, a high discharge rate could, in some cases, be an indicator of poor quality of care or of provider misuse of the benefit, in that the hospice may be enrolling beneficiaries who are not eligible for hospice care.

83 providers (80 for-profits and 3 non-profits) did not have hospice staff (such as nurses, physicians, or nurse practitioners) visit beneficiaries within the last 3 days of their life—a critical time in providing quality care, according to researchers GAO interviewed.

CMS's oversight of the quality of care provided by hospice providers consists primarily of inspections—called surveys—of hospice providers. GAO found that, while CMS instructs surveyors to review previous survey findings and complaints, CMS does not instruct surveyors to use information on providers' performance on quality measures or other potential indicators of quality as part of the survey process.
For example, CMS does not instruct surveyors to consider whether a hospice provided staff visits during beneficiaries' last week of life. According to research, this information could be used to enhance the survey process. GAO also found that CMS is limited to one enforcement option—termination of the Medicare provider agreement—which CMS uses rarely and generally only when providers fail to correct the most serious violations of federal health and safety requirements within the required time frame. According to two researchers, additional remedies, such as civil monetary penalties, could enhance CMS's oversight by addressing performance problems that do not merit termination and incentivizing agencies to improve quality of care. CMS uses a range of remedies for other provider types, such as home health agencies and nursing homes, but lacks authority to impose such additional sanctions on hospices.

What GAO Recommends

CMS should incorporate the use of additional information that could be used to identify quality of care issues into its survey process for hospice oversight. Congress should consider giving CMS authority to establish additional enforcement remedies for hospices that do not meet federal health and safety requirements. The Department of Health and Human Services concurred with GAO's recommendation.
Background

Key Requirements and Guidance on Agency Analysis of Improper Payments and Corrective Actions to Remediate Them

IPIA requires executive branch agencies to take various steps regarding improper payments in accordance with guidance issued by OMB, including the following:

1. reviewing all programs and activities and identifying those that may be susceptible to significant improper payments;
2. developing improper payment estimates for those programs and activities that agency risk assessments, OMB, or statutes identify as being susceptible to significant improper payments;
3. analyzing the root causes of improper payments and developing corrective actions to reduce them; and
4. reporting on the results of addressing the foregoing requirements.

Figure 1 illustrates these steps, as well as the major components of analyzing root causes of improper payments and developing corrective action plans to remediate them.

IPIA requires agencies with programs susceptible to significant improper payments to report a description of the causes of the improper payments identified, actions that the agency has planned or taken to correct those causes, and the planned or actual completion dates of those actions. It also requires agencies to report program-specific improper payment reduction targets that OMB has approved.

OMB M-18-20 provides guidance to agencies for implementing IPIA requirements, including their responsibilities for preventing and reducing improper payments. The guidance directs agencies that have developed estimates for improper payments to categorize them by root causes, including the percentage of the total estimate for each category. According to the guidance, this level of specificity helps lead to more effective corrective actions and more focused prevention strategies. Table 2 summarizes OMB's root cause categories.

OMB M-18-20 directs agencies with programs deemed susceptible to significant improper payments to implement a corrective action plan that responds to their root causes to prevent and reduce them. As such, OMB directs that an agency must understand the true root cause of its improper payments in order to develop targeted, effective corrective actions, which are proportional to the severity of the associated amount and rate of the root cause. OMB M-18-20 also directs agencies to annually measure the effectiveness and progress of individual corrective actions by assessing results, such as performance and outcomes. In performing such measurements, OMB states that agencies should determine if any existing corrective actions can be intensified or expanded to further reduce improper payments and to identify annual benchmarks for corrective actions that agencies implement over multiple years. Agencies may use these benchmarks to demonstrate progress in implementing the actions or their initial effect on preventing and reducing improper payments.

Characteristics of Programs Reviewed and Related Improper Payment Estimates

The eight programs we reviewed serve a variety of purposes and are administered by various agencies across the federal government, as discussed below.

Supplemental Nutrition Assistance Program

The Department of Agriculture's (USDA) Supplemental Nutrition Assistance Program (SNAP) is the largest federally funded nutrition assistance program, providing benefits to about 40 million people in fiscal year 2018.
SNAP is intended to help low-income households obtain a more nutritious diet by providing them with benefits to purchase food from authorized retailers nationwide. SNAP recipients receive monthly benefits on an Electronic Benefit Transfer (EBT) card and redeem them for eligible food at authorized food stores. The Food and Nutrition Act of 2008 established SNAP as a federally funded, state-administered program. States, following federal guidelines, are responsible for program administration. States determine applicant eligibility, calculate benefit amounts, issue EBT cards to recipients, and investigate possible recipient program violations. USDA's Food and Nutrition Service (FNS) pays the full cost of SNAP benefits and shares 50 percent of administrative costs with the states. As part of oversight responsibilities, FNS develops program regulations and monitors states to ensure that they comply with program rules. FNS is also directly responsible for authorizing and monitoring retail food stores where recipients may purchase food.

In accordance with IPIA, USDA has annually reported an improper payment estimate for SNAP since fiscal year 2004. In its fiscal year 2019 AFR, USDA reported an improper payment estimate of approximately $4 billion, or 6.8 percent of SNAP outlays of $59.1 billion.

Direct Loan Program

The Department of Education's (Education) William D. Ford Federal Direct Loan (Direct Loan) program authorizes Education to make loans, through participating schools, to eligible undergraduate and graduate students and their parents. The Direct Loan program comprises four types of loans: Subsidized Stafford, Unsubsidized Stafford, PLUS, and Consolidation loans. Evidence of financial necessity is required for an undergraduate student to receive a Subsidized Stafford loan; however, borrowers at all income levels are eligible for the other three types. Education originates the loans and disburses them through each borrower's school. Once a loan is disbursed, Education assigns a servicer responsible for communicating with the borrower, providing information about repayment, and processing payments from the borrower.

Education first reported an improper payment estimate for the Direct Loan program in fiscal year 2013. In its fiscal year 2019 AFR, Education reported an improper payment estimate of approximately $483 million, or 0.5 percent of Direct Loan program outlays of $92.9 billion.

Pell Grant Program

Education's Pell Grant program—the single largest source of grant aid for postsecondary education—awards federally funded grants to low-income undergraduate and certain post-baccalaureate students who are enrolled in a degree or certificate program and have a federally defined financial need. Students are eligible to receive Pell Grants for no more than 12 semesters (or the equivalent). To qualify, an applicant must, in addition to satisfying other requirements, demonstrate financial need and not have obtained a bachelor's degree or a first professional degree. Grant amounts depend on the student's expected family contribution, the cost of attendance (as determined by the institution), the student's enrollment status (full-time or part-time), and whether the student attends for a full academic year or less.

Education first reported an improper payment estimate for the Pell Grant program in fiscal year 2004. In its fiscal year 2019 AFR, Education reported an improper payment estimate of approximately $646 million, or 2.2 percent of Pell Grant program outlays of $28.9 billion.
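Each reported rate is simply the improper payment estimate divided by program outlays. As a quick sketch using the rounded figures cited above (the agencies compute reported rates from unrounded data, so recomputed ratios are approximate):

```python
# Improper payment estimates and outlays in billions of dollars, as
# reported in the agencies' fiscal year 2019 AFRs (rounded figures).
programs = {
    "SNAP":        (4.0,   59.1),
    "Direct Loan": (0.483, 92.9),
    "Pell Grant":  (0.646, 28.9),
}

for name, (estimate, outlays) in programs.items():
    # Improper payment rate = estimated improper payments / outlays.
    print(f"{name}: {estimate / outlays:.1%}")

# Prints approximately: SNAP: 6.8%, Direct Loan: 0.5%, Pell Grant: 2.2%,
# matching the reported rates.
```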
Children's Health Insurance Program

The Department of Health and Human Services' (HHS) Children's Health Insurance Program (CHIP) expands health coverage to uninsured children who are ineligible for Medicaid but cannot afford private coverage. The states and the federal government jointly fund CHIP benefit payments and administrative expenses. HHS's Centers for Medicare & Medicaid Services (CMS) oversees the program; however, each state administers the program and sets its own guidelines regarding eligibility and services according to federal guidelines.

HHS first reported an improper payment estimate for CHIP (based on one-third of the states) in fiscal year 2008. In its fiscal year 2019 AFR, HHS reported an improper payment estimate of approximately $2.7 billion, or 15.8 percent of CHIP outlays of $17.3 billion.

Earned Income Tax Credit

The Earned Income Tax Credit (EITC) administered by the Department of the Treasury (Treasury) is a credit that offsets taxes owed by eligible taxpayers, and because the credit is refundable, EITC recipients need not owe taxes to receive a benefit. If the taxpayer's credit exceeds the amount of taxes due, the Internal Revenue Service (IRS) issues a refund of the excess to the taxpayer. To claim the EITC, the taxpayer must work and have earnings that do not exceed the phaseout income of the credit. Additional eligibility rules apply to any children that a taxpayer claims for calculating the credit. Among other criteria, a qualifying child must meet certain age, relationship, and residency requirements.

Treasury first reported an improper payment estimate for EITC in fiscal year 2003. In its fiscal year 2019 AFR, Treasury reported an improper payment estimate of approximately $17.4 billion, or 25.3 percent of EITC outlays of $68.7 billion.

Prosthetic and Sensory Aids Service

Through its Prosthetic and Sensory Aids Service (PSAS), the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) provides prosthetics to veterans who have experienced the loss or permanent impairment of a body part or function. The items VA provides include those worn by the veteran, such as an artificial limb or hearing aid; those that improve accessibility, such as ramps and vehicle modifications; and devices surgically placed in the veteran, such as hips and pacemakers. In general, veterans enrolled in the VA health care system with a medical need for a prosthetic service or item are eligible; however, additional eligibility criteria for certain services or items may apply.

PSAS officials in VA's central office provide overall administration of VA's provision of prosthetic items, including allocating funding among various networks, monitoring spending, and establishing and monitoring mechanisms to evaluate the agency's performance. PSAS processes prescriptions and provides the prescribed items to individual veterans. PSAS government credit card holders, typically at VA medical centers, perform administrative actions—such as obtaining additional information from the prescribing clinician, obtaining price quotes from contractors, and creating purchase orders—to process prescriptions. PSAS also has staff who provide clinical services to veterans, such as evaluating prosthetic needs and designing and fitting artificial limbs.

VA first reported an improper payment estimate for PSAS in fiscal year 2017. In its fiscal year 2019 AFR, VA reported an improper payment estimate of approximately $60 million, or 2.1 percent of PSAS outlays of $2.9 billion.
Old Age, Survivors, and Disability Insurance Program

The Social Security Administration's (SSA) Old Age, Survivors, and Disability Insurance program (OASDI), collectively referred to as Social Security, provides cash benefits to eligible U.S. citizens and residents. OASDI is financed largely on a pay-as-you-go basis. Specifically, OASDI payroll taxes, paid each year by current workers, are primarily used to pay benefits provided during that year to current beneficiaries. OASDI consists of two separate insurance programs that SSA administers under the Social Security Act.

Old Age and Survivors Insurance (OASI) provides benefits to retired workers, their families, and survivors of deceased workers. The monthly benefit amount depends on a worker's earnings history and the age at which he or she chooses to begin receiving benefits, along with other factors. Benefits are paid to workers who meet requirements for the time they have worked in covered employment—that is, jobs through which they have paid Social Security taxes.

Disability Insurance (DI) provides cash benefits to working-age adults who are unable to work because of long-term disability. SSA generally considers individuals to have a disability if (1) they cannot perform work that they did before and cannot adjust to other work because of their medical condition(s) and (2) their disability has lasted or is expected to last at least 1 year or is expected to result in death. Further, individuals must have worked and paid into the program for a minimum period of time to qualify for benefits. To ensure that only beneficiaries who remain disabled continue to receive benefits, SSA is required to conduct periodic continuing disability reviews in certain circumstances.

SSA first reported an improper payment estimate for OASDI in fiscal year 2004. In its fiscal year 2019 AFR, SSA reported an improper payment estimate of approximately $2.7 billion, or 0.3 percent of OASDI program outlays of $948 billion.

Supplemental Security Income Program

SSA's Supplemental Security Income (SSI) is a federal income supplement program funded by general tax revenues (not Social Security taxes). The program provides payments to low-income aged, blind, and disabled persons—both adults and children—who also meet financial eligibility requirements. For adults, a disability is defined as the inability to engage in any substantial gainful activity because of any medically determinable physical or mental impairment(s) that can be expected to result in death or has lasted or can be expected to last for a continuous period of not less than 12 months. To ensure that only recipients who remain disabled continue to receive benefits, SSA is required to conduct periodic continuing disability reviews in certain circumstances.

To be eligible to receive monthly SSI payments, the adult individual's (or married couple's) or child's (and parent's) monthly countable income has to be less than the monthly federal SSI benefit amount. The amount of the monthly SSI payment is then determined based on the countable income. In most cases, countable income received in the current month affects the SSI payment amount 2 months later. Furthermore, countable resources—such as financial institution accounts—must not exceed the maximum allowable threshold. While recipients are required to report changes in their income and financial resources, SSA also conducts periodic redeterminations to verify that recipients are still eligible for SSI.
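The payment arithmetic described above can be sketched as follows. This is a deliberate simplification: it omits SSA's income exclusions, state supplements, and living-arrangement adjustments, and the federal benefit amount shown is illustrative, not a published rate.

```python
def ssi_payment(federal_benefit_amount: float, countable_income: float) -> float:
    """Simplified SSI payment: the federal benefit amount reduced by
    countable income. Eligibility requires countable income below the
    federal benefit amount; otherwise no federal SSI payment is due."""
    if countable_income >= federal_benefit_amount:
        return 0.0  # countable income too high; not eligible
    return federal_benefit_amount - countable_income

# Countable income in a given month generally affects the payment
# 2 months later, per the retrospective accounting described above.
FBR = 750.0  # illustrative federal benefit amount, not a published rate
january_countable_income = 200.0
march_payment = ssi_payment(FBR, january_countable_income)
print(march_payment)  # 550.0
```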
SSA first reported an improper payment estimate for SSI in fiscal year 2004. In its fiscal year 2019 AFR, SSA reported an improper payment estimate of approximately $5.5 billion, or 9.7 percent of SSI program outlays of $56.9 billion.

Selected Agencies Generally Used Improper Payment Estimation Methodology Results as the Basis for Identifying Root Causes of Improper Payments

Five Out of Six Agencies Used Improper Payment Estimation Methodology Results as the Basis for Identifying Root Causes of Selected Programs' Improper Payments

We found that five out of six agencies—USDA, Education, HHS, VA, and SSA—used the results of their improper payment estimation methodologies as the basis for identifying the root causes of improper payments for the selected programs we reviewed. Specifically, the agencies generally used a two-step process to identify root causes of improper payments. First, the agencies reviewed a sample of payments to identify which payments were improper and to establish an improper payment rate. Second, the agencies analyzed the improper payment results to determine the causes of error. Further details on each agency's process are provided below.

USDA: According to USDA's fiscal year 2018 AFR, FNS used SNAP's Quality Control System to identify improper payments and determine improper payment rates for fiscal year 2018. According to agency officials, SNAP improper payment root causes occur at the state level. As required by the Food and Nutrition Act of 2008 and subsequent program regulations, FNS requires states to conduct root cause analyses and develop corrective action plans because of the unique circumstances in each state owing to flexibilities under statute and regulations. SNAP's Quality Control System uses a two-tier approach to report improper payments. In the first tier, each month, state agencies follow federal sampling requirements to select samples of households that participated in SNAP in their states and conduct quality control reviews to determine whether each selected household was eligible and received the right amount of benefits. In the second tier of the process, federal SNAP staff select a subsample of the state data for review to confirm the validity of the states' findings. Federal SNAP staff use that subsample data to aggregate the root cause information at a nationwide level in order to categorize the data into the OMB root cause categories for fiscal year 2018 reporting.

Education: According to Education's fiscal year 2018 AFR, Education conducted a risk-based, nonstatistical sample and estimation methodology, which OMB approved, to estimate Pell Grant and Direct Loan improper payment rates for fiscal year 2018 reporting. As part of this estimation process, Education analyzed identified improper payments to determine improper payment root causes.

HHS: According to HHS's fiscal year 2018 AFR, HHS estimated the CHIP improper payment rate for fiscal year 2018 reporting through the Payment Error Rate Measurement (PERM) program. CHIP improper payment root causes were identified at both the agency and state levels. Specifically, to determine improper payment root causes at the agency level, HHS analyzed the issues identified during the PERM review and identified primary drivers of the national PERM rate for CHIP. HHS also provided improper payment results to each state and required them to conduct more in-depth state-level root cause analyses as part of developing their corrective action plans.
VA: According to VA's fiscal year 2018 AFR, VA conducted a statistical sample and estimation methodology to estimate the PSAS improper payment rate for fiscal year 2018 reporting. VA then analyzed the improper payments identified during testing to determine improper payment root causes.

SSA: According to SSA's fiscal year 2018 AFR, SSA conducts stewardship reviews each fiscal year to estimate the improper payment rates for OASDI and SSI. Although SSA considers the stewardship review data sufficient to provide statistically reliable data on the overall payment accuracy of OASDI and SSI, SSA considered deficiency data from the most recent 5 years of stewardship reviews to determine improper payment root causes for each program for its fiscal year 2018 reporting.

Treasury Used 2006 through 2008 Tax Year Data to Identify Reported Root Causes of Fiscal Year 2018 EITC Improper Payments

Treasury identified the root causes of EITC improper payments for fiscal year 2018 reporting based on the most recent detailed 3-year EITC compliance study IRS conducted, using data from tax years 2006 through 2008. IRS officials acknowledged that using older data creates additional potential for error; however, they stated that IRS is only able to conduct in-depth compliance studies on major refundable income tax credits, including EITC, on a rotating basis. IRS also conducted in-depth EITC compliance studies for tax years 1997 and 1999. These studies and IRS's 2006 through 2008 compliance study identified income misreporting and qualifying child errors as the main sources of errors. Therefore, agency officials indicated that Treasury is comfortable with using the 2006 through 2008 data as the basis for determining the root causes of fiscal year 2018 EITC improper payments.

However, Treasury has reported changes to the tax environment since 2008, including legislative revisions that may have affected taxpayer compliance behavior. Specifically, EITC-related changes include expanding the credit to a third child, establishing new criteria for claiming a qualifying child, and amending the "age test" for qualifying children, among others. Furthermore, the 2006 through 2008 compliance study did not take into account the Protecting Americans from Tax Hikes Act of 2015 program integrity provisions that required tax filers to provide Form W-2 payer information to IRS for verification earlier than in previous tax years.

Federal internal control standards state that management should use quality information to achieve the entity's objectives. As part of these standards, management obtains relevant data from reliable internal and external sources in a timely manner and uses quality information to make informed decisions and evaluate the entity's performance in achieving objectives and addressing risks. Quality information is appropriate, current, complete, accurate, accessible, and provided on a timely basis.

Although a specific delivery date has not been set, agency officials stated that IRS plans to conduct another in-depth EITC compliance analysis within the next 2 years. We agree with Treasury's plan to conduct another EITC compliance analysis using more timely data. However, until Treasury conducts an EITC improper payment root cause analysis using more timely data, it will be uncertain whether identified root causes are sufficiently relevant to inform decisions and evaluate risks.
Specifically, continued use of outdated information to evaluate EITC improper payments increases the risk that Treasury may not be identifying these payments' true root causes and therefore will lack the quality information needed to develop appropriate corrective actions and reduce them.

Most Selected Agencies Developed Corrective Actions That Correspond to Identified Root Causes of Improper Payments

Four Out of Six Agencies Developed Corrective Actions That Correspond to Identified Root Causes of Improper Payments for the Selected Programs

Four out of six agencies—Education, HHS, VA, and SSA—developed corrective actions that correspond to identified root causes of improper payments for the selected programs we reviewed, in accordance with OMB guidance. Specifically, we found that Education and VA developed corrective actions corresponding to each root cause of improper payments identified for fiscal year 2018 in Education's Direct Loan and Pell Grant programs and VA's PSAS, respectively. In addition, HHS stated that it developed corrective actions that corresponded to the root causes it determined to be significant to CHIP improper payments for fiscal year 2018, prioritizing large dollar over smaller dollar value root cause categories. Corrective action plans for CHIP improper payments were developed at both the agency and state levels. According to agency officials, CMS helped individual states develop and implement state-specific PERM corrective action plans to address the errors identified in each state. In addition, because each state's errors do not necessarily represent errors that are the main drivers of the national PERM rate, CMS developed agency-level corrective action plans focused on those drivers, which typically occurred across multiple states.

We also found that SSA's corrective actions generally corresponded to root causes of improper payments identified in OASDI and SSI for fiscal year 2018. However, SSA did not develop corrective actions corresponding to three of the six major root causes it identified for OASDI improper payments based on its stewardship review findings. Agency officials explained that SSA's corrective action development process was decentralized among the different SSA components, and therefore, there was no formalized process for components to develop corrective actions for all identified root causes. SSA has since developed a new standardized improper payment strategy and updated procedures to implement the strategy for fiscal year 2020. Although the scope of our review focused on processes in place for fiscal year 2018, we found that the updated procedures, if effectively implemented, will address our concerns because they include control activities designed to help ensure that corrective actions that SSA develops and implements correspond to the identified root causes of improper payments, as directed by OMB guidance. Specifically, the updated procedures direct SSA components to identify root causes of improper payments and develop mitigation strategies for each; conduct cost-benefit analyses for such strategies; and, after considering these analyses, determine and prioritize necessary corrective actions.

USDA Did Not Develop Agency Corrective Actions That Correspond to Identified Root Causes of SNAP Improper Payments

In contrast to HHS, which developed both agency- and state-level corrective actions for its state-administered CHIP, USDA did not develop agency-level corrective actions corresponding to the root causes of SNAP improper payments.
USDA's IPIA corrective action plan guidance directs its components, including FNS, to develop corrective actions that correspond to the identified root causes of improper payments for programs that are susceptible to significant improper payments. Instead of developing agency-level SNAP corrective actions, FNS requires the states to develop state-level corrective actions. Additionally, FNS provided technical assistance and support to the individual states to help them improve payment accuracy. As part of this assistance, agency officials stated that FNS regional offices provided routine formal training and guidance to the states and conducted site visits.

According to agency officials, FNS did not develop agency-level corrective actions corresponding to the root causes of SNAP improper payments because FNS requires the states to develop individual state-level corrective actions. Additionally, because of varying root causes and the uniqueness of issues identified among the states, agency officials believe that state corrective actions may not easily aggregate to the agency level. However, FNS's procedures did not include a process to analyze state-level root causes to identify similarities and develop agency-level corrective actions, if warranted, to help address them. According to agency officials, FNS has made significant improvements in the last few years regarding its controls over SNAP. The officials said that FNS has also implemented major changes in oversight in the last few fiscal years to address previously identified deficiencies among the states. While these changes may be valuable in improving agency oversight and states may have unique circumstances that could lead to varying state-identified root causes of improper payments, FNS is ultimately responsible for preventing and reducing improper payments within SNAP.

OMB guidance directs agencies to develop and implement appropriate corrective actions that respond to the root causes of improper payments to prevent and reduce them. OMB guidance also directs agencies to ensure that managers; programs; and, where applicable, states are held accountable for reducing improper payments. Additionally, federal internal control standards state that management should establish and operate activities to monitor the internal control system, evaluate the results, and remediate identified internal control deficiencies on a timely basis. As part of these standards, management retains responsibility for monitoring the effectiveness of internal control over the assigned processes that external parties, such as state agencies, perform. Without considering similarities of root causes of SNAP improper payments among the states, USDA will be uncertain whether developing and implementing agency-level corrective actions (in addition to state-level actions) would also help to effectively reduce them.

Treasury Did Not Develop Corrective Actions That Correspond to Identified Root Causes of EITC Improper Payments

Instead of developing corrective actions corresponding to the identified root causes of EITC improper payments for fiscal year 2018, Treasury addressed improper payments through IRS's compliance programs and through outreach and education efforts to taxpayers and preparers.
According to agency officials, although some of the outreach efforts are indirectly related to root causes identified, it is difficult to link those efforts to the reduction of errors that result from being unable to authenticate eligibility—which Treasury considers the biggest issue in the EITC program—because of the complexity of statutory eligibility requirements. Although Treasury uses information from SSA and HHS to help IRS verify residency and relationship information for parents and children, Treasury's strategy for addressing the root causes of EITC improper payments does not include continuing efforts to identify and reach out to additional agencies to (1) determine how they verify information for certain eligibility-based programs and whether they use strategies that Treasury could adopt or (2) identify other potential data sources that could be used to verify EITC information or confirm that other data sources do not exist. According to agency officials, such inquiries are not included because the eligibility requirements for EITC are not always the same as requirements for other government programs.

Additionally, Treasury's fiscal year 2018 AFR states that because of the nature of EITC, corrective actions implemented by IRS alone will not significantly reduce EITC improper payments. For example, according to Treasury officials, legislative changes are needed to help address certain EITC improper payments. While Treasury has made certain legislative proposals related to providing IRS greater flexibility to address correctable errors and increasing oversight of paid tax return preparers, it has not made proposals to help address EITC eligibility criteria issues. Additionally, Treasury's strategy does not include identifying and proposing legislative changes needed to help reduce EITC improper payments related to these or other issues, such as those related to the inability to authenticate taxpayer eligibility discussed above.

OMB guidance directs agencies to develop and implement appropriate corrective actions that respond to the root causes of improper payments to prevent and reduce them. Further, federal internal control standards state that management should use quality information to achieve the entity's objectives. As part of these standards, management designs a process that uses the entity's objectives and related risks to identify the information requirements needed to achieve the objectives and address the risks and obtains relevant data from reliable internal and external sources in a timely manner based on the identified information requirements. While we recognize the unique eligibility requirements for EITC, until Treasury coordinates with other agencies to identify potential strategies or data sources that may help in determining eligibility, it will be uncertain whether Treasury can leverage additional sources to help verify data. Additionally, without identifying and proposing legislative changes to help resolve such issues, Treasury will be at risk of continuing to be unable to significantly reduce EITC improper payments.

All Six Agencies Communicated Improper Payment Corrective Action Plan Information to Internal Stakeholders, but Several Did Not Monitor Progress or Measure Effectiveness

All six agencies responsible for the programs we reviewed communicated with internal agency stakeholders regarding their improper payment corrective action plan information, in accordance with OMB guidance and federal internal control standards.
However, as shown in table 3, three of the four agencies—Education, HHS, and SSA—that developed corrective actions corresponding to the identified root causes either did not establish planned completion dates, monitor the progress, or measure the effectiveness of their corrective actions. In fact, we found that VA was the only agency that measured the effectiveness of each corrective action for the selected program (PSAS) that we reviewed. As previously discussed, USDA and Treasury did not develop agency corrective actions corresponding to the identified root causes of improper payments for their selected programs and therefore did not establish related planned completion dates, monitor progress, or measure the effectiveness of such corrective actions.

Selected Agencies Have Processes in Place to Communicate with Internal Stakeholders regarding Corrective Action Plan Information

All six agencies we reviewed communicated information regarding the selected programs' corrective action plans to internal stakeholders, consistent with OMB guidance and federal internal control standards. OMB M-18-20 directs agencies to ensure that managers, accountable officers (including the agency head), and program officials are held accountable for reducing improper payments. Additionally, federal internal control standards state that management should internally communicate the necessary quality information to achieve the entity's objectives. As part of these standards, management communicates quality information down, across, up, and around reporting lines to all levels of the entity.

We found that the six agencies communicated information, at least annually, to such internal stakeholders, including the relevant agency head, chief financial officer (CFO), and program managers. For example, some selected agencies—Education, HHS, VA, and SSA—provided briefings to the agency head and the CFO's office regarding the status of the selected program's improper payment corrective action activities during fiscal year 2019 for the corrective actions reported for fiscal year 2018. USDA and Treasury required their components to annually submit deliverables to the office of the CFO and coordinate accordingly with the Office of the Secretary as part of their fiscal year 2018 AFR reporting process.

Two Agencies Established Planned Completion Dates for the Selected Programs' Corrective Actions

We found that two of the six agencies we reviewed—Education and VA—established planned completion dates for the selected programs' corrective actions. Two agencies—HHS and SSA—did not consistently establish planned completion dates for all the selected programs' corrective actions, as required by IPIA. Two agencies—USDA and Treasury—did not develop agency corrective actions corresponding to the identified root causes of improper payments for their selected programs and therefore did not establish planned completion dates for such corrective actions. Further details on each agency's process are provided below.

USDA: As previously discussed, FNS did not develop corrective actions at the agency level to address SNAP's root causes of improper payments and, as a result, did not have planned completion dates for such corrective actions. However, in the event that FNS develops agency-level corrective actions, USDA's IPIA corrective action plan guidance includes a directive for each corrective action to have an estimated completion date.
Education: Education established planned completion dates for all Direct Loan and Pell Grant corrective actions that were not legislative proposals. For example, in fiscal year 2018, Education did not report a planned completion date for Federal Student Aid's (FSA) corrective action related to proposed legislative changes, as the timeline for the legislative process is subject to external factors outside of Education's control.

HHS: HHS did not consistently establish planned completion dates for agency-level CHIP corrective actions. According to agency officials, most agency-level CHIP corrective actions are unlikely to have completion dates because the work is ongoing. We agree with HHS's determination that establishing completion dates for ongoing corrective actions was not relevant. HHS provided a spreadsheet of CHIP's corrective actions, which included a column of target completion dates. However, this column was not consistently filled out for actions that were not considered either ongoing or voluntary state processes. HHS officials stated that although HHS has a process for its improper payment corrective action plans, this process is not documented in formal policies and procedures. Instead, HHS uses OMB guidance as its policies and procedures. Lack of formally documented policies and procedures may have contributed to the inconsistencies in HHS establishing planned completion dates for agency-level CHIP corrective actions.

Treasury: As previously discussed, instead of developing corrective actions to address root causes of EITC improper payments, Treasury addressed improper payments through IRS's compliance programs and through outreach and education efforts to taxpayers and preparers. According to agency officials, Treasury did not establish planned completion dates for its compliance programs and outreach efforts because these activities were ongoing in nature and completed every year as part of IRS operations. We agree with Treasury's determination that establishing completion dates for EITC ongoing compliance activities was not relevant. In the event that Treasury develops corrective actions for EITC improper payments, Treasury's corrective action plan guidance includes a directive for each corrective action to have an estimated completion date.

VA: VA established relevant planned completion dates for each PSAS corrective action. In addition, each task associated with each corrective action had a planned completion date.

SSA: SSA did not consistently establish relevant completion dates for each OASDI and SSI corrective action. For example, SSA's corrective action plans included sections for "target completion." However, based on our review, these sections were not filled out consistently. According to agency officials, the process for developing and implementing its corrective actions was inconsistent because of SSA's decentralized corrective action plan process. As previously discussed, SSA developed a new standardized improper payment strategy that, if effectively implemented, will address these concerns. Specifically, SSA's procedures to implement this strategy include control activities designed to help ensure that the agency establishes planned completion dates for each corrective action, as required by IPIA.

IPIA requires agencies to report on the planned or actual completion date of each action taken to address root causes of improper payments.
Federal internal control standards state that management should design control activities to achieve objectives and respond to risks and implement control activities through policies. Further, federal internal control standards state that management should remediate identified internal control deficiencies on a timely basis. As part of these standards, management monitors the status of remediation efforts so that they are completed on a timely basis. Additionally, federal internal control standards state that management should implement its control activities through policies. Without documented policies and procedures for its improper payment corrective action plan process, including the establishment of planned completion dates, HHS lacks assurance that corrective action plan-related activities will be performed consistently. Additionally, without planned completion dates, HHS cannot demonstrate that it is effectively implementing and completing corrective actions timely and therefore cannot ensure that they will help reduce improper payments.

Two Agencies Currently Do Not Have a Documented Process in Place to Monitor the Progress of Implementing the Selected Programs' Corrective Actions

Three of the four agencies—Education, HHS, and VA—that developed corrective actions corresponding to the identified root causes monitored the progress of the selected programs' corrective actions, in accordance with OMB guidance. However, HHS's process was not documented in policies and procedures. SSA did not monitor the progress for all relevant OASDI and SSI corrective actions but has since implemented policies and procedures to monitor such progress. USDA did not develop corrective actions at the agency level that corresponded to the identified root causes of improper payments for SNAP and therefore did not monitor the progress of such corrective actions. In addition, USDA's corrective action plan guidance does not direct the agency to monitor the progress of its corrective actions. Although Treasury did not have corrective actions that corresponded to the root causes of improper payments, it did monitor the progress of its compliance and outreach efforts that are intended to help reduce EITC improper payments. Further details on each agency's process are provided below.

USDA: As previously discussed, FNS did not develop corrective actions at the agency level to address SNAP's root causes of improper payments and, as a result, did not monitor the progress of such corrective actions. In addition, USDA's IPIA corrective action plan guidance does not direct the agency to monitor the progress of its corrective actions. Without agency-level corrective actions to address the root causes of SNAP improper payments and a documented process to monitor the progress of implementing such agency-level corrective actions, USDA may miss opportunities to reduce SNAP improper payments.

Education: Education monitored the progress of implementing each Direct Loan and Pell Grant corrective action. We found that Education maintained a spreadsheet to track the implementation status of each corrective action annually. Specifically, the status of each corrective action was updated to either "complete" or "open" for the annually recurring and long-term, multiyear corrective actions. The actions marked as "complete" had actual completion dates.
Actions that Education considered ongoing, such as updates to the "Question and Answer" section of FSA's website to help clarify verification requirements, were marked as "not applicable."

HHS: HHS monitored the progress of implementing each of its agency-level CHIP corrective actions. Specifically, HHS tracked the progress of implementing the corrective actions in a spreadsheet that included status updates for each agency-level corrective action. Agency officials stated that this information was updated approximately two to three times each fiscal year through an online interface; however, this process was not documented in policies and procedures. Without a properly documented process and related control activities, HHS is at increased risk of not consistently monitoring the progress of CHIP corrective actions and has less assurance that such actions are implemented and completed timely.

Treasury: Treasury did not develop corrective actions that corresponded to the root causes of EITC improper payments and, as a result, did not monitor the progress of such corrective actions. However, Treasury did monitor its compliance programs and outreach efforts that are intended to help reduce EITC improper payments during fiscal year 2018.

VA: VA monitored the progress of implementing each PSAS corrective action. Specifically, we found that VA monitored the progress of each corrective action monthly by calculating a completion percentage based on the status of tasks associated with each corrective action.

SSA: SSA did not monitor the progress of implementing each OASDI and SSI corrective action. According to agency officials, the monitoring of corrective actions was inconsistent and evaluation of corrective actions was limited because of SSA's decentralized corrective action plan process. As previously discussed, SSA developed a new standardized improper payment strategy that, if effectively implemented, will address these concerns. Specifically, SSA's procedures to implement this strategy include control activities designed to help ensure that the agency monitors the progress of its corrective actions, as directed by OMB guidance.

OMB guidance directs agencies to measure the progress of each individual corrective action annually. Federal internal control standards state that management should establish and operate activities to monitor the internal control system, evaluate the results, and remediate identified internal control deficiencies on a timely basis. As part of these standards, management monitors the status of remediation efforts so that they are completed on a timely basis. Additionally, federal internal control standards state that management should implement its control activities through policies.

Without monitoring the progress of its corrective actions, USDA cannot demonstrate that it is effectively implementing and completing its corrective actions timely and therefore cannot ensure that they will contribute to a reduction in improper payments. Further, unless HHS documents its process in policies and procedures, it will lack assurance that the progress of its corrective actions is monitored consistently and that such actions are implemented and completed timely.

One Out of Six Agencies Measured the Effectiveness of Corrective Actions for the Selected Programs

We found that one of the six agencies we reviewed—VA—measured the effectiveness of the selected programs' corrective actions, including establishing reduction targets in accordance with OMB guidance.
Education, HHS, and SSA did not measure the effectiveness of their corrective actions for the selected programs. In addition, USDA and Treasury did not develop agency corrective actions corresponding to the identified root causes of improper payments for their selected programs and therefore did not measure the effectiveness of such corrective actions. Further details on each agency's process are provided below.

USDA: As previously discussed, FNS did not develop agency-level corrective actions to address root causes of SNAP improper payments. Instead, FNS provided technical assistance and support to the individual states. According to agency officials, FNS cannot link each technical assistance initiative it provides to the states to the effect these efforts have on reducing payment integrity errors because the technical assistance provided to the states can vary significantly. Additionally, USDA's IPIA corrective action plan guidance did not direct the agency to measure the effectiveness of its corrective actions. Without agency-level corrective actions to address the root causes of SNAP improper payments and a documented process to measure the effect that agency actions have on improper payments, USDA will be unable to demonstrate whether such actions are effective in reducing improper payments and may risk continuing ineffective actions. In addition, as permitted by OMB, USDA did not establish a reduction target for SNAP improper payments because it lacked a sufficient baseline to accurately project future improper payment rates. USDA plans to reestablish reduction targets for fiscal year 2021 reporting.

Education: Education's policies and procedures state that to measure the effectiveness of corrective actions, FSA solicits input from the corrective action owner on, among other items, whether measuring and monitoring of the effectiveness of the corrective action has been established and what anecdotal evidence is available to confirm the corrective action's effectiveness. However, based on the procedures, it is unclear how the corrective action owners will conduct this analysis to demonstrate effectiveness. Education provided an example of communication to a corrective action owner requesting, among other items, that the corrective action owner (1) confirm that existing actions are focused on the true root causes of the improper payments and are actually reducing improper payments and (2) verify that existing corrective actions are achieving the intended purposes and results. Education officials informed us that although these items were discussed in stakeholder meetings, FSA did not attempt to quantify, and was unable to quantify, the direct effect of any one corrective action on the improper payment estimates. Education's fiscal year 2018 AFR states that FSA does not attempt to quantify the reduction of the improper payment estimates in terms of percentage or amount due to Pell Grant and Direct Loan corrective actions. It further states that quantifying results is not feasible because Education uses a nonstatistical alternative estimation methodology. However, according to Education's fiscal year 2019 AFR, Education implemented a statistical estimation methodology for the fiscal year 2019 estimates. Education believes that the new methodology will allow FSA to better measure the effectiveness of corrective actions over time as FSA collects a baseline of statistically valid improper payment estimates.
According to agency officials, FSA is currently refining its process for measuring the effectiveness of corrective actions based on its new statistical estimation methodology. However, until Education revises and documents its process to include measuring the direct effect that its Pell Grant and Direct Loan corrective actions have on improper payments, it will be unable to demonstrate whether the corrective actions are effective in reducing the associated improper payments and may risk continuing ineffective actions. As part of its overall payment integrity reporting in fiscal year 2018, Education established program-wide reduction targets for Pell Grant and Direct Loan. However, according to agency officials, because it used an OMB-approved nonstatistical methodology, Education had limited confidence in using these results to establish reduction targets for the upcoming fiscal year. Specifically, Education's fiscal year 2018 AFR states that imprecision and volatility in the improper payment estimates continue to limit its ability to establish accurate out-year reduction targets. Therefore, for fiscal years 2016 through 2018, Education set the upcoming fiscal year's reduction targets to match the current fiscal year's reported improper payment rate for each program. According to agency officials, Education plans to consider the feasibility of setting meaningful reduction targets moving forward with its new statistical methodology.

HHS: HHS did not measure the effectiveness of its corrective actions for CHIP improper payments. In addition, as discussed above, HHS does not have formal documented policies and procedures for its improper payment corrective action plan process. According to agency officials, establishing a one-to-one relationship between specific corrective actions and resulting changes in the improper payment rates is difficult because of the complexity of the factors that lead to improper payments. However, until HHS develops and implements a documented process to measure the effect that CHIP corrective actions have on improper payments, it will be unable to demonstrate whether the corrective actions are effective in reducing the associated improper payments and may risk continuing ineffective actions. As permitted by OMB's implementing guidance, HHS did not establish a program-wide reduction target for CHIP improper payments for fiscal year 2019 or 2020 and does not anticipate setting one for fiscal year 2021 because it lacks a sufficient baseline to accurately project future improper payment rates. According to agency officials, HHS plans to establish a CHIP reduction target for fiscal year 2022 reporting.

Treasury: Treasury did not develop specific corrective actions to address root causes of EITC improper payments and therefore could not measure the effectiveness of such actions. Agency officials recognized that the current actions on their own will be unable to significantly reduce the amount of EITC improper payments. As approved by OMB, Treasury did not establish a program-wide reduction target for EITC improper payments for fiscal year 2018 reporting. However, Treasury set a reduction target for EITC improper payments in its fiscal year 2019 AFR, in accordance with OMB guidance.

VA: VA has documented procedures in place to measure the effectiveness of its corrective actions for PSAS improper payments. As part of this process, VA set reduction targets and timelines for reducing the errors associated with each corrective action.
VA maintained a timeline spreadsheet showing the corrective action reduction targets by year and the percentage by which it expects improper payments to be reduced once each corrective action is fully implemented. VA updated the spreadsheet at the end of fiscal year 2019 with the current results of the effectiveness measure for corrective actions reported in fiscal year 2018. VA also set a program-wide reduction target for PSAS improper payments.

SSA: SSA did not measure the effectiveness of its corrective actions for OASDI and SSI improper payments. According to agency officials, SSA did not have procedures to collect the necessary data and therefore was unable to measure the effectiveness of its corrective actions. SSA's procedures for its new standardized improper payment strategy (discussed above) direct responsible components to define the metrics and information necessary to evaluate the corrective actions and to determine whether the actions are effectively reducing improper payments. However, it is still unclear which metrics will be used to determine the effect that OASDI and SSI corrective actions have on the corresponding root causes to demonstrate effectiveness. Until SSA develops and implements a documented process to measure the effect that the OASDI and SSI corrective actions have on improper payments, it will be unable to demonstrate whether the corrective actions are effective in reducing the associated improper payments and may risk continuing ineffective actions. As part of its overall payment integrity reporting in fiscal year 2018, SSA established program-wide reduction targets for both programs. However, some of SSA's reduction targets have remained constant since fiscal year 2004 reporting. Agency officials stated that SSA believes OASDI's payment accuracy rate is exceptionally high but that it would consider changing the reduction target if its mitigation strategies help decrease improper payments. For SSI, agency officials stated that SSA believes SSI's program complexity and reliance on self-reporting have made meeting the current accuracy goal challenging. Agency officials further stated that if planned mitigation strategies help decrease improper payments, SSA would consider changing the SSI reduction target.

OMB guidance directs agencies to measure the effectiveness of each individual corrective action annually. Agencies may measure the effectiveness of corrective actions by assessing the results of actions taken to address the root causes, such as the performance and outcomes of these processes. In addition, OMB guidance states that for long-term, multiyear corrective actions, agencies should identify annual benchmarks used to demonstrate the initial effect on improper payment prevention and reduction. For corrective actions already in place, agencies should be able to describe how they evaluate these actions' effectiveness and the results. Federal internal control standards state that management should establish and operate activities to monitor the internal control system and evaluate the results. As part of these standards, management performs ongoing monitoring of the design and operating effectiveness of the internal control system as part of the normal course of operations. Additionally, federal internal control standards state that management should implement its control activities through policies.
Unless USDA, Education, HHS, and SSA develop and implement processes that clearly link corrective actions to effectively addressing improper payments, they will be uncertain whether the actions are actually reducing improper payments and may risk continuing ineffective actions. Further, unless these processes are documented in policies and procedures, the agencies will lack assurance that the effectiveness of their corrective actions is measured consistently.

Conclusions

Developing corrective action plans that respond to identified root causes of improper payments is a critical component of government-wide efforts to reduce improper payments. Agency processes to monitor the progress and measure the effectiveness of such plans are also essential to evaluating agencies' efforts to address improper payments. However, certain agencies have not effectively taken these steps for the selected programs we reviewed. For example, USDA and Treasury have not developed agency-wide corrective actions that correspond to the identified root causes of improper payments in their SNAP and EITC programs, respectively; such corrective actions would better position these agencies to reduce and prevent improper payments. Also, HHS lacks important information to monitor its efforts to address CHIP improper payments because it does not consistently establish planned completion dates for agency-level corrective actions. Additionally, USDA, Education, HHS, and SSA do not have sufficient processes in place to measure the effectiveness of corrective actions to address improper payments for the selected programs we reviewed. Unless agencies develop corrective action plans that correspond to the root causes of improper payments and implement processes to effectively monitor their progress and measure their effectiveness, their ability to ensure that their actions will reduce improper payments will be limited.

Recommendations for Executive Action

We are making the following seven recommendations—one each to Education, HHS, and SSA and two each to USDA and Treasury.

The Administrator of FNS should develop and implement a process, documented in policies and procedures, to analyze SNAP state-level root causes to identify potential similarities among the states and to develop and implement SNAP agency-level corrective actions, if appropriate, to help address them. (Recommendation 1)

The Secretary of Agriculture should revise USDA's procedures to include processes for monitoring the progress and measuring the effectiveness of improper payment corrective actions. The process for measuring the effectiveness of corrective actions should clearly demonstrate the effect USDA's corrective actions have on reducing improper payments. (Recommendation 2)

The Secretary of Education should revise and document Education's process for measuring the effectiveness of its corrective actions based on its new statistical estimation methodology for Direct Loan and Pell Grant improper payments. This process should clearly demonstrate the effect Education's corrective actions have on reducing improper payments. (Recommendation 3)

The Secretary of Health and Human Services should document in policies and procedures HHS's improper payment corrective action plan process. As part of these procedures, HHS should include processes for (1) establishing planned completion dates, (2) monitoring the progress of implementing corrective actions, and (3) measuring the effectiveness of improper payment corrective actions.
The process for measuring the effectiveness of corrective actions should clearly demonstrate the effect HHS's corrective actions have on reducing improper payments. (Recommendation 4)

The Secretary of the Treasury should determine whether Treasury's current improper payment root cause analysis provides sufficiently relevant information that can be used as a basis for proposed corrective actions to reduce EITC improper payments and, if not, update the analysis using more timely data to ensure its reliability for identifying root causes of EITC improper payments. (Recommendation 5)

The Secretary of the Treasury should update Treasury's strategy for addressing the root causes of EITC improper payments to include (1) coordinating with other agencies to identify potential strategies and data sources that may help in determining EITC eligibility and (2) determining whether legislative changes are needed, and developing proposals as appropriate, to help reduce EITC improper payments, such as those related to the inability to authenticate taxpayer eligibility. (Recommendation 6)

The Commissioner of SSA should develop and implement a process, documented in policies and procedures, to measure the effectiveness of SSA's corrective actions for OASDI and SSI improper payments. This process should clearly demonstrate the effect SSA's corrective actions have on reducing improper payments. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a draft of this report for comment to OMB, USDA, Education, HHS, Treasury, VA, SSA, and the Council of the Inspectors General on Integrity and Efficiency (CIGIE). We received written comments from five agencies—USDA, Education, HHS, VA, and SSA—which are reproduced in appendixes I through V and summarized below. The Assistant Director of Treasury's Risk and Control Group also provided comments in an email, which are summarized below. Treasury, HHS, VA, and SSA also provided technical comments, which we incorporated as appropriate. CIGIE and OMB liaisons informed us that CIGIE and OMB had no comments on the report.

In its written comments, USDA stated that it generally agrees with our findings and recommendations. USDA stated that FNS has agency-level corrective actions that correspond to the identified root causes and establishes planned completion dates, monitors the progress, and measures the effectiveness of SNAP's corrective actions. However, USDA officials did not provide documentation or other information supporting such agency-level corrective actions and efforts. Rather, as discussed in our report, FNS provides technical assistance and support to the states to help them improve payment accuracy and requires them to develop state-level corrective actions. Because FNS's initiatives do not address specific root causes, we continue to believe that USDA does not have agency-level corrective actions that correspond to the identified root causes of SNAP improper payments.

In regard to our recommendation that FNS develop and implement a process to analyze SNAP state-level root causes and take other related actions, FNS stated that it already has such a process and recommended that we revise our recommendation to indicate that its existing process should be formalized. In our report, we acknowledge that under statutory requirements and program regulations, FNS requires the states to identify the root causes and develop corrective actions that address them.
However, USDA did not provide any evidence that FNS analyzes the states' root causes to identify similarities and develop corrective actions at the agency level. Therefore, we continue to believe that our recommendation that FNS develop and implement this process is valid to help ensure that it develops corrective actions at the agency level, if appropriate, and to help reduce improper payments within SNAP. In regard to our recommendation to revise USDA's procedures, USDA stated that it will develop a proposed action plan to revise its procedures for monitoring the progress and measuring the effectiveness of improper payment corrective actions and that the revised process will focus on the impact corrective actions have on the corresponding root causes of improper payments. The actions USDA described, if implemented effectively, would address our recommendation.

In its written comments, Education neither agreed nor disagreed with our recommendation, stating that FSA will continue to evaluate and refine its processes to measure corrective actions and the effectiveness of these actions. Further, Education stated that FSA's measurement of corrective action effectiveness and root cause identification will gain additional precision as FSA collects annual improper payment data and builds upon the new baseline of statistically valid improper payment estimates. Education stated that FSA annually measures the overall effectiveness of its corrective action plans collectively against the improper payment reduction targets, rather than measuring the effectiveness of each individual corrective action. However, as discussed in our report, OMB guidance directs agencies to measure the effectiveness of each individual corrective action annually. We continue to believe that our recommendation to Education is valid to help ensure that Education's corrective actions are effective in reducing improper payments.

In its written comments, HHS stated that it does not concur with our recommendation. Specifically, HHS stated that the portion of our recommendation providing that HHS's process for measuring the effectiveness of corrective actions should clearly demonstrate their impact on the corresponding root causes of improper payments is operationally impossible and not required by OMB guidance. We acknowledge that given the unique circumstances across federal agencies concerning improper payments, OMB guidance provides some flexibility in how agencies are to measure the effectiveness of their corrective actions. However, if agencies' corrective actions are effective, they should ultimately reduce improper payments. Without being able to demonstrate whether corrective actions are effective in reducing the associated improper payments, agencies will be uncertain whether their actions are actually reducing improper payments and may risk continuing ineffective actions. While we acknowledge that OMB guidance does not explicitly require agencies to demonstrate the impact corrective actions have on the corresponding root causes of improper payments, agencies are required to analyze the root causes of improper payments and develop corrective actions to reduce improper payments. As such, we clarified this portion of our recommendation to indicate that HHS's process should clearly demonstrate the effect corrective actions have on reducing improper payments, to better align with the purpose of corrective action plans. We also made this revision to our recommendations to USDA, Education, and SSA.
In its written comments, VA stated that PSAS supported compliance with improper payment statutory requirements by completing annual audit reviews, identifying root causes, and developing a national program action plan to reduce improper payments. VA also stated that PSAS reduced improper payments from 39.7 percent in fiscal year 2018 to 2.1 percent in fiscal year 2019 and continues to make improvements through enhanced audit reviews and consultation with PSAS sites.

In its written comments, SSA stated that it concurs with our recommendation and will determine the most cost-effective strategies to remediate the underlying causes of payment errors and monitor, measure, and revise the strategies as needed. The actions SSA described, if implemented effectively, would address our recommendation.

In emailed comments, the Assistant Director of Treasury's Risk and Control Group neither concurred nor disagreed with our recommendations. In regard to our recommendation to update Treasury's strategy for addressing root causes of EITC improper payments, Treasury stated that each year it indicates in its corrective action plan that IRS will continue to work with Treasury to develop legislative proposals that will improve refundable credit compliance and reduce erroneous payments. Treasury also stated that its fiscal year 2020 budget request included two legislative proposals that may improve refundable credit compliance and reduce erroneous payments and that both proposals have been in the President's Budget for several years. We acknowledge these legislative proposals in our report and note that although Treasury has made certain legislative proposals, it has not made proposals to specifically help address EITC eligibility criteria issues. Additionally, as noted in the report, Treasury's strategy does not include identifying and proposing additional legislative changes needed to help reduce EITC improper payments. Therefore, we continue to believe that our recommendation to Treasury is valid to help ensure that Treasury addresses EITC eligibility issues, which Treasury identifies as the primary root cause of EITC improper payments.

We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, the Secretary of Agriculture, the Secretary of Education, the Secretary of Health and Human Services, the Secretary of the Treasury, the Secretary of Veterans Affairs, the Commissioner of the Social Security Administration, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Comments from the Department of Agriculture

Appendix II: Comments from the Department of Education

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: Comments from the Department of Veterans Affairs

Appendix V: Comments from the Social Security Administration

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Matthew Valenta (Assistant Director), Stephanie Adams (Auditor in Charge), William Beichner, Susanna Carlton, Virginia Chanley, Anthony Clark, Lindsay Hollup, James Kernen, and Diana Lee made key contributions to this report.
Why GAO Did This Study

Improper payments, estimated at almost $175 billion for fiscal year 2019, are a significant problem in the federal government. IPIA and OMB guidance direct agencies to analyze the root causes of improper payments and develop corrective actions to reduce them. This report examines (1) actions that agencies took to identify root causes of improper payments for selected programs, (2) the extent to which their corrective action plans correspond to identified root causes, and (3) the extent to which they monitored progress and evaluated the effectiveness of corrective actions. GAO analyzed corrective action plans reported in fiscal year 2018 for the following eight programs: Department of Education's Direct Loan and Pell Grant; HHS's Children's Health Insurance Program; SSA's Old-Age, Survivors, and Disability Insurance and Supplemental Security Income; Treasury's EITC; USDA's SNAP; and VA's Prosthetic and Sensory Aids Service. GAO selected these programs based, in part, on whether they had at least $1 billion in fiscal year 2018 improper payment estimates.

What GAO Found

Five of the six agencies used their improper payment estimation results to identify the root causes for the eight programs GAO reviewed. However, the Department of the Treasury (Treasury) used 2006 through 2008 taxpayer data to identify root causes of fiscal year 2018 Earned Income Tax Credit (EITC) improper payments. Without timely data on the true root causes of EITC improper payments, Treasury will lack the quality information needed to develop appropriate corrective actions to reduce them. In addition, only one agency GAO reviewed—the Department of Veterans Affairs (VA)—adhered to relevant Improper Payments Information Act of 2002, as amended (IPIA), requirements and Office of Management and Budget (OMB) guidance. The Department of Agriculture (USDA) and Treasury did not develop agency corrective action plans corresponding to the identified root causes of improper payments for the Supplemental Nutrition Assistance Program (SNAP) and EITC, respectively. In addition, the remaining three agencies did not have processes in place to either establish planned completion dates, monitor progress, or measure the effectiveness of their corrective actions in reducing improper payments. Unless agencies develop corrective action plans that correspond to root causes of improper payments and implement processes to monitor progress and measure their effectiveness, their ability to ensure that their efforts will reduce improper payments will be limited.

What GAO Recommends

GAO is making seven recommendations: one each to Education, HHS, and SSA and two each to USDA and Treasury, to improve their processes for addressing root causes of improper payments and measuring the effectiveness of corrective actions. In their responses, SSA agreed, USDA generally agreed, Education and Treasury neither agreed nor disagreed, and HHS disagreed with GAO's respective recommendation(s). GAO clarified four recommendations and continues to believe all the recommendations are valid.
Background

Entities seeking to do business with DOD may have opaque ownership structures that obscure ownership or control by other entities or individuals.

Beneficial Owner

For the purposes of this report, we define a beneficial owner as the natural person or persons who directly or indirectly own and control, or receive substantial economic benefit from, a company. As the number of layers of ownership increases, ownership information becomes more opaque, as shown in figure 1. This opacity can make it difficult for DOD to determine which entities and individuals ultimately own or control its contractors.

Identifying Business Ownership Information

In the United States, no centralized information source or national registry maintains company ownership information. In 2014, the National Association of Secretaries of State found that most states collect minimal ownership data. The association reviewed key information collected by the 50 states and the District of Columbia during the entity-formation process and in annual or periodic reports. It found that very few states collect any form of entity ownership or control information from limited liability companies or corporations, either during entity formation or in annual or periodic reporting.

The Securities and Exchange Commission collects some ownership information on publicly traded companies. Any person or group of persons that acquires beneficial ownership of more than 5 percent of a publicly traded company's registered voting securities must file a disclosure with the Securities and Exchange Commission. Institutional investment managers regularly disclose their holdings, and company officers, directors, and holders of more than 10 percent of a class of the company's registered equity securities must file a statement of ownership with the Securities and Exchange Commission.

System for Award Management and Ownership Information

The General Services Administration's (GSA) SAM is a federal government-wide database for vendor data that is used across all federal agencies. Any entity that wishes to do business with the government must register in SAM to be eligible to receive a contract award, except in specific circumstances outlined in the law and the Federal Acquisition Regulation (FAR). To increase procurement transparency and traceability, and to broaden the government's ability to implement fraud-detection technologies, the FAR was amended to require entities that wish to do business with the federal government to provide additional ownership information through the annual registration process in SAM starting on November 1, 2014. The required ownership information includes the "immediate" and "highest" level ownership of an offeror, as shown in figure 2 below. The FAR requires ownership to be provided at the entity level. There is no requirement for offerors to report their beneficial owners.

Evaluation of Prospective Contractors before Contract Award

The FAR contains several provisions governing the selection of an offeror. Provisions such as price and past performance of the offeror are generally applicable in determining which offeror should win a contract. Additional requirements may apply to certain types of procurements, such as the procurement of national security systems. We outline several of the relevant FAR provisions below; however, this does not represent a comprehensive list of all steps required by the FAR in making contract-award decisions.
Responsibility Determination

A prospective contractor must affirmatively demonstrate its responsibility, including, when necessary, the responsibility of its proposed subcontractors. Contracting officers must then determine the responsibility of prospective contractors, including whether prospective contractors can perform the terms of a contract. To be determined responsible, a prospective contractor must have adequate financial resources to perform the contract (or the ability to obtain them); be able to comply with the required delivery or performance schedule; have a satisfactory performance, integrity, and ethics record; have the necessary organization, experience, accounting and operational controls, and facilities to carry out the contract (or the ability to obtain them); and be otherwise qualified and eligible to receive an award under applicable laws and regulations.

Before awarding a contract over the simplified acquisition threshold (generally $250,000 at the time of our review), a contracting officer must review the prospective contractor's performance and integrity information available in the Federal Awardee Performance and Integrity Information System (FAPIIS). FAPIIS is a federal government-wide database designed to assist contracting officers with making a responsibility determination by providing integrity and performance information on covered federal agency contractors and grantees. FAPIIS provides a prospective contractor "Report Card" that includes information pertaining to the prospective contractor's past performance (if applicable), such as any administrative agreements, contract terminations, nonresponsibility determinations, and exclusions, among other things. It also allows users to view company relationship information, which details the ownership information that prospective contractors are required to report in SAM. When making a responsibility determination, the contracting officer must consider all the information available through FAPIIS with regard to the prospective contractor and any immediate owner, predecessor (an entity that the prospective contractor replaced by acquiring assets and carrying out affairs under a new name), or subsidiary identified for that prospective contractor in FAPIIS. The contracting officer must document in the contract file how the information in FAPIIS was considered in any responsibility determination, as well as the action that was taken as a result of the information.

The Defense Contract Management Agency (DCMA) can play a role in supporting contracting officials in making responsibility determinations. For example, DCMA officials stated that they may provide information on a company's business systems, financial capabilities, and company history, and assess whether the prospective contractor is likely to stay in business for the duration of the contract. When assessing the capacity to perform a contract, DCMA officials stated they examine company assets as a whole, including any parent company, to make a determination. According to officials, DCMA's goal in identifying the organizational structure is to determine whether the company as a whole has the assets to perform the contract rather than to identify fraud or other risks that may be associated with that company. The level and type of support that DCMA provides to contracting officials depends on the particular needs of contracting officials for any given procurement.
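The FAPIIS consideration described above amounts to gathering integrity and performance records not just for the offeror itself but also for any immediate owner, predecessor, or subsidiary it has reported. The sketch below illustrates only that lookup logic; all entity names and data structures are hypothetical, and it does not model the actual FAPIIS interface.

```python
# Minimal sketch of the responsibility-check logic described above.
# RECORDS and OWNERSHIP are hypothetical local stand-ins; the real
# FAPIIS system is accessed through its own interface, not modeled here.

RECORDS = {
    # entity name -> list of integrity/performance records
    "Acme Widgets LLC": ["contract termination for default (2016)"],
    "Acme Holdings Inc": ["exclusion: debarred through 2021"],
}

OWNERSHIP = {
    # entity name -> reported immediate owner, predecessor, and subsidiaries
    "Acme Widgets LLC": {
        "immediate_owner": "Acme Holdings Inc",
        "predecessor": None,
        "subsidiaries": [],
    },
}

def related_entities(offeror):
    """Return the offeror plus any reported immediate owner, predecessor, or subsidiaries."""
    reported = OWNERSHIP.get(offeror, {})
    related = [offeror]
    for key in ("immediate_owner", "predecessor"):
        if reported.get(key):
            related.append(reported[key])
    related.extend(reported.get("subsidiaries", []))
    return related

def records_to_consider(offeror):
    """Gather integrity/performance records for the offeror and its related entities."""
    return {name: RECORDS.get(name, []) for name in related_entities(offeror)}

if __name__ == "__main__":
    for name, recs in records_to_consider("Acme Widgets LLC").items():
        print(name, "->", recs or "no records")
```

In this hypothetical example, the check on "Acme Widgets LLC" would surface the exclusion record of its reported immediate owner, the kind of related-entity information the FAR requires contracting officers to consider and document.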
Some contracts require contractors to comply with cost-accounting standards and to submit disclosures of their cost-accounting practices showing which specific business units they receive allocations from and which specific business units they pass allocations to; however, these disclosures are required only after award of a contract covered by cost-accounting standards.

Source Selection

Contract award decisions are based on evaluation factors and significant subfactors that are tailored to the procurement, at the discretion of procurement officials. At a minimum, these factors must include price/cost, quality, and past performance. Federal law grants DOD additional authority to use public and nonpublic information to make source-selection decisions when acquiring national security systems. DOD may exclude an offeror if necessary to protect national security by reducing supply-chain risk. Under this authority, DOD does not have to disclose the reason an offeror was excluded, nor can the offeror protest DOD's decision.

Competition Generally Establishes Price Reasonableness

The FAR requires contracting officers to purchase supplies and services from responsible sources at fair and reasonable prices. For negotiated contracts, price reasonableness is ordinarily established by adequate competition, such as when there are two or more responsible offerors competing independently. For noncompetitive purchases with only one offeror, the contracting officer must obtain certified cost or pricing data, or data other than certified cost or pricing data, as necessary to establish a fair and reasonable price. Procurements with only one offeror may still be considered competitive if there was a reasonable expectation that two or more responsible and independent offerors would submit offers and the offeror submitted its offer with the expectation of competition.

Never Contract with the Enemy

Section 841 of the 2015 National Defense Authorization Act grants DOD and other federal agencies the authority to limit contracts with entities that provide funds to a person or group that actively opposes U.S. or coalition forces involved in a contingency operation in which members of the armed forces are actively engaged in hostilities. It also allows agencies to terminate for default, void, or restrict the award of a contract to any contractor that provides funds received under a federal contract directly or indirectly to entities actively opposing U.S. forces engaged in hostilities.

Fraud and Fraud Risk Definitions

Fraud and "fraud risk" are distinct concepts. Fraud involves obtaining something of value through willful misrepresentation and is challenging to detect because of its deceptive nature. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure to commit fraud, or are able to rationalize committing fraud. When fraud risks can be identified and mitigated, fraud may be less likely to occur.

Fraud Risk Management Standards and Leading Practices

According to federal standards and leading practices, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Federal internal control standards call for agency management officials to assess the internal and external risks their agencies face as they seek to achieve their objectives.
The standards state that, as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. In July 2015, GAO issued its Fraud Risk Framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Fraud Risk Framework consists of four components for effectively managing fraud risk: Assess, Design and Implement, Evaluate and Adapt, and Commit. The Assess component calls for federal managers to plan regular fraud risk assessments and to assess risks to determine a fraud risk profile. Identifying fraud risks is one of the steps the Fraud Risk Framework includes in assessing risks to determine a fraud risk profile. The fraud risk profile supports the development of a strategy to mitigate fraud risks.

The Fraud Reduction and Data Analytics Act of 2015 (FRDAA), enacted in June 2016, requires the Office of Management and Budget to establish guidelines for federal agencies to create controls to identify and assess fraud risks and to design and implement antifraud control activities. The act further requires the Office of Management and Budget to incorporate the leading practices from the Fraud Risk Framework in the guidelines. In July 2016, the Office of Management and Budget published guidance about enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. The act also requires federal agencies to submit to Congress a progress report each year for 3 consecutive years on the implementation of the controls established under the Office of Management and Budget guidelines, among other things.

Recent GAO work examined federal agencies that are subject to FRDAA, including DOD, and found that 85 percent of those agencies had started planning and 78 percent had started implementing efforts to meet FRDAA requirements; however, the majority of these efforts were characterized as not being mature. Maturity was determined by agency responses to a survey question that asked whether the agency's status of implementing FRDAA requirements was "not started," "started but not mature," or "mature." The report identified the number and percentage of agencies that fell into each of these status categories but did not state the level of maturity for any individual agency.

DOD Contractors with Opaque Ownership Can Pose a Range of Fraud and National Security Risks in the Procurement Process

Contractors with opaque ownership structures can pose a range of financial and nonfinancial fraud and national security risks to DOD by misrepresenting or concealing company ownership information to commit fraud against the government or to harm U.S. national security interests. We identified multiple types of fraud and national security risks by examining 32 fraud cases involving DOD contractors that were adjudicated or settled from calendar years 2012 through 2018 and by interviewing knowledgeable DOD officials and criminal investigators. There may be additional risks and cases related to contractor ownership that are not identified below.
Court cases we identified were investigated by DOD and other entities based on, for example, information from whistleblowers, defective parts received by DOD, lawsuits involving contractors, and U.S. government officials determining that they were receiving false contractor information. As discussed later in this report, DOD has not systematically assessed risks posed by contractor ownership; therefore, the magnitude and prevalence of the risks we identified are not known. Appendix II of this report contains a complete listing and additional details of the 32 cases we identified.

Contractors with Opaque Ownership Pose Financial Fraud Risks Including Price Inflation

Contractors can use opaque ownership structures for illicit financial gain through a variety of methods, as described below.

Concealing relationship with subcontractor to inflate prices. Contractors can subcontract with companies they own or control to inflate prices for financial benefit. For example, in a 2014 federal court case we examined, a contractor and another company with common ownership pled guilty to major fraud against the United States. They agreed to pay $434 million in criminal penalties and to settle a lawsuit in connection with concealing their relationship with a subcontractor that the contractor directed to fraudulently mark up costs on items that the contractor purchased and resold to DOD. Specifically, the contractor purchased goods from a company that its owners created, controlled, and used to make the fraudulent markups appear legitimate. Further highlighting the relationship between the company and the contractor, contractor personnel were also responsible for hiring individuals to work for the subcontractor. The contractor billed the government artificially high prices for the goods from July 2005 through April 2009, resulting in a loss to DOD of $48 million. Figure 3 below illustrates this scheme to conceal ownership and fraudulently inflate prices.

Billing for work not performed. Contractors or subcontractors can bill for work not performed by creating fictitious invoices that add costs to a contract. For example, in four court cases we examined, multiple DOD subcontractors were actually shell companies that did not have the inventory they purported to ultimately provide to the government or did not perform the work indicated in the contract requirements. According to documents filed in U.S. district court, some of these subcontractors hired other companies to perform work but created additional invoices that added costs for work the subcontractors did not perform. These additional costs were then passed on to DOD.

Disguising conflicts of interest. Contractors or subcontractors can conceal conflicts of interest for financial benefit. We identified a case involving a DOD subcontractor that concealed ownership for illicit financial gain. According to court records, a DOD contractor employee and his spouse formed a company and concealed their interests by listing the names of family members, rather than their own, on formation documents. This company became a subcontractor to the company that employed the DOD contractor employee. The contractor employee, in his official position, wrote letters justifying awards of purchase orders to the subcontractor he owned and approving recommendations that the awards be made to the subcontractor.
The co-owner of the subcontractor concealed her involvement by signing contracts using a different name, knowing that the use of her real name could reveal the DOD contractor employee's ownership of the subcontractor and affect the awards.

Creating the appearance of competition on a contract to inflate prices. In our review of 32 cases, we also identified the potential risk of companies creating the appearance of competition by submitting bids from fictitious companies. Specifically, we identified one case involving a DOD contractor whose executives admitted as part of their plea agreements to creating fictitious, inflated bids that were not from actual businesses to ensure that the contractor's own bid would be selected by DOD as the supposed lowest. In this instance, the contractor was required to obtain at least two competitive bids for certain services and items and provide the bids to DOD for selection. As part of their plea agreements, the individuals involved in the scheme also admitted that the scheme allowed the contractor to control and inflate the prices charged to DOD without any true, competitive bidding, as required. The contractor also fraudulently inflated invoices that were sent to DOD, and two individuals involved in the scheme admitted they were aware of losses to DOD of at least $34.8 million. Court records state that the scheme took place from 2011 to 2013. In 2017, two contractor executives involved in this scheme were sentenced to prison for 70 and 46 months. We also identified additional cases involving this contractor and its owner bribing government officials in exchange for the approval of fraudulent invoices, steering contracts, and covering up the contractor's overcharging practices, which have led to at least 22 individuals pleading guilty.

Additionally, DOD officials from Defense Pricing and Contracting and the Defense Logistics Agency (DLA) identified the risk of different companies concealing common ownership to create the appearance of competition on a solicitation and attempt to inflate prices. By analyzing a subset of DOD solicitation data, we further examined the risk that contractors could disguise their ownership to create the appearance of competition. We identified potential relationships among the offerors on solicitations that could indicate common ownership. Our analysis of responses to approximately 2,700 solicitations on the Federal Business Opportunities (FBO) website from fiscal years 2015 through 2017 found indications that at least 16 offerors were potentially related to at least one other offeror when bidding on the same solicitation. This analysis indicates that offerors may not always compete independently and that relationships among offerors are not always readily apparent to contracting officials or disclosed in SAM registration information. Specifically, we identified the following types of potential relationships among offerors.

Offerors who shared the same management. We identified two offerors who each submitted bids on the same three solicitations and also shared the same mailing address and point-of-contact address, including suite number. According to the companies' websites, the owner (who was also the President and Chief Executive Officer) of one offeror was the President and Chief Executive Officer of the other offeror. Further, both companies shared the same management team, and neither company had reported any ownership information in SAM.
According to DOD contracting officials, no additional information was disclosed to the contracting office for these offerors, nor were officials otherwise aware of the potential relationship. Figure 4 below shows an example from one solicitation.

Offerors who were potentially related to an entity excluded from doing business with the government. We identified two offerors who were potentially related to a third offeror who was actively excluded from doing business with the government. One of these offerors bid together with the excluded offeror on eight solicitations. Figure 5 below shows an example of one solicitation. In addition, a third potentially related offeror was identified as sharing information with one of these offerors; the two later bid together on a ninth solicitation. For one of the nine solicitations, one of the offerors potentially related to the excluded company was awarded a contract. According to DOD contracting officials, no additional information was disclosed to the contracting office for these offerors, nor were officials otherwise aware of the potential relationship.

Offerors who shared other information. We identified 11 offerors who shared other information with at least one other offeror when bidding on the same solicitation. In some instances, these potentially related offerors bid on multiple solicitations. For example, we found two potentially related offerors that bid together on three separate solicitations in our FBO data. We further examined these 11 potentially related offerors' SAM registration information to determine whether they reported shared ownership in SAM and found one instance in which two of the potentially related offerors self-reported their relationship, with one offeror reporting that it owned the other; the remaining nine offerors did not report any type of shared ownership information in SAM. According to DOD contracting officials, none of the nine offerors disclosed a relationship with another offeror, nor was the contracting officer otherwise aware of the potential relationship. While sharing certain information does not definitively confirm that offerors are owned by the same entity, it is an indicator that they are related. Figure 6 below highlights an example in which two offerors bidding on the same solicitation shared information and did not report shared ownership in SAM. Additionally, we identified an instance in which this type of information was also shared between two offerors and a subcontractor for a third offeror, as shown in figure 7 below.

The potentially related offerors we identified did not appear to affect the overall competition on these contracts because other, seemingly unrelated offerors also submitted bids. As a result, it is unlikely that they affected the prices paid by the government on these contracts. However, these potentially related offerors represent a risk that offerors may not always compete independently, and these types of relationships may not always be readily apparent to contracting officers, which matters when evaluating the sufficiency of competition on a solicitation and the independence of its offerors. Further, contractors may not always be forthcoming in reporting their ownership information in SAM, which can affect other areas of the procurement process, including any procedures that rely on the accuracy of this information.
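The shared-attribute matching described above can be illustrated with a short sketch. The offeror records and field names below are hypothetical; the approach simply groups offerors on the same solicitation by normalized values of attributes such as mailing address or point-of-contact details and flags pairs that collide.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical registration data for offerors bidding on one solicitation.
offerors = [
    {"name": "Offeror A", "address": "100 Main St, Suite 5", "poc_email": "ops@example.com"},
    {"name": "Offeror B", "address": "100 Main St, Suite 5", "poc_email": "ops@example.com"},
    {"name": "Offeror C", "address": "42 Oak Ave", "poc_email": "bids@other.example"},
]

def potentially_related(bidders, fields=("address", "poc_email")):
    """Return (offeror, offeror, shared field) tuples for bidders sharing a value."""
    flagged = []
    for field in fields:
        groups = defaultdict(list)
        for bidder in bidders:
            # Normalize values so trivial formatting differences do not hide matches.
            value = (bidder.get(field) or "").strip().lower()
            if value:
                groups[value].append(bidder["name"])
        for names in groups.values():
            flagged.extend((a, b, field) for a, b in combinations(sorted(names), 2))
    return flagged

for a, b, field in potentially_related(offerors):
    print(f"{a} and {b} share the same {field}")
```

As the discussion above notes, such matches are indicators that warrant further scrutiny, not proof of common ownership.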
Contractors with Opaque Ownership Pose Nonfinancial Fraud Risks Including Circumventing Set-Aside Eligibility Requirements

Contractors can pose nonfinancial fraud risks to DOD by concealing their ownership structure to bid on and obtain contracts that they are not eligible to receive. These nonfinancial risks may not impose a direct financial cost on DOD, but they can allow ineligible companies to contract with DOD while potentially preventing eligible companies from doing so. As discussed below, these risks can also lead to additional vulnerabilities. In our review of 32 cases, we identified DOD contractors that concealed their ownership information to obtain contracts set aside for particular types of businesses, to obtain contracts intended only for domestic companies, and to circumvent debarment by the government.

Set-Aside Contract Eligibility. Contractors with opaque ownership structures can pose the risk that government contracts set aside for small businesses are awarded to ineligible companies. Ineligible contractors could take advantage of Small Business Administration set-aside programs, under which small businesses owned by service-disabled veterans, women, minorities, or economically and socially disadvantaged individuals can receive government contracts specifically set aside for these types of businesses. Of the 32 cases we reviewed, we identified 20 in which DOD contractors or DOD contractor employees were found guilty, pled guilty, or settled with the government for representing themselves as eligible to receive set-aside contracts. These contractors falsified self-reported information and made false certifications to the government to claim eligibility by using eligible individuals as figurehead owners. In these cases, the figurehead owners did not actually maintain the level of beneficial ownership or control of the contractor required by federal regulations, or the contractors simply used the names of eligible individuals when communicating with the government to bid on and win contracts.

For example, we identified one case that involved two DOD contractors participating in a single scheme to misrepresent their common ownership and obtain over $200 million in awards that they were not eligible to receive. One of the contractors that fraudulently obtained set-aside contracts claimed it was owned by a service-disabled veteran; however, that veteran had virtually no involvement with the contractor. The other contractor claimed to be owned by an economically disadvantaged individual who worked full-time for another entity and did not control the contractor. These contractors were not eligible to receive the set-aside contracts because they were not at least 51 percent controlled by the eligible individuals, and the eligible individuals did not make long-term decisions for the companies. Rather, the contractors were controlled by an ineligible individual who owned and controlled a separate company that actually performed work on the set-aside contracts. To obtain government contracts set aside for companies owned by economically and socially disadvantaged individuals, the qualifying individuals must also control the majority of the company and make its day-to-day decisions. Figure 8 below, which is based on an actual case, illustrates how ineligible contractors can obtain and receive government funds on contracts intended for Service-Disabled Veteran–Owned Small Businesses.

Domestic Contractor Eligibility.
Contractors with opaque ownership structures can also pose the risk of circumventing eligibility requirements for contracts that are only designated for domestic companies, which can lead to other vulnerabilities that affect warfighter readiness. Of the 32 cases we reviewed, we identified four cases in which individuals created domestic shell companies for foreign manufacturers and bid on contracts designated for domestic companies. In three of the four cases, the individuals behind the shell companies also had ownership interests in the foreign manufacturing companies. Foreign manufacturers received payments from the contracts, despite the contracts only allowing domestic manufacturers to be eligible, and one such manufacturer ultimately supplied DOD with defective and nonconforming parts that led to the grounding of at least 47 fighter aircraft. In multiple instances, another ineligible contractor supplied parts that were unusable due to design flaws and nonconformities. Three of these companies also exported military technical drawings and blueprints to foreign countries in violation of the Arms Export Control Act. Figure 9 below, which is based on an actual case, illustrates a contractor acting as a shell company and misrepresenting foreign manufacturing. Circumventing Debarment. Individuals who have been debarred, or prohibited from conducting business with the federal government, can circumvent their debarment by concealing their ownership in new companies created for the sole purpose of continuing to conduct business with the government. Of the 32 cases we reviewed, we identified one conviction of an individual who was debarred from 2013 to 2016 for supplying defective parts to DOD. This individual created three shell companies and concealed his beneficial ownership and control of these companies by omitting his name from communication with DOD and using fictitious names and names of family members as company officials. These three shell companies continued to provide defective and nonconforming parts to DOD, and the debarred individual received approximately $2.8 million in payments from DOD from May 2013 to June 2016. Contractors with Opaque Ownership Structures Pose National Security Risks Including Supply-Chain Infiltration DOD officials we spoke with and published DOD research have identified the risk of contractors disguising company ownership as an enabler to do harm to national security interests. Contractors fraudulently misrepresenting themselves to DOD could actually be operated by adversaries seeking to act against the government's interests. Foreign-owned contractors can conceal ownership information when registering in SAM, which allows contractors to self-attest ownership information. For example, in addition to the 32 cases we identified through our review, we also identified a bid protest filed with GAO challenging a fiscal year 2018 contract award to a foreign-owned DOD contractor under a solicitation that prohibited the participation of foreign firms or of domestic companies under foreign ownership, control, or influence. This contractor did not disclose its foreign ownership or control in SAM or to DOD, as required by the FAR and the solicitation. As a result of the bid protest, DOD subsequently terminated the contract later in fiscal year 2018 after confirming the foreign ownership with the contractor.
DIA and DLA officials stated that adversarial foreign governments or other malicious entities, such as companies attempting to access sensitive government information, could gain access to sensitive systems to conduct sabotage or surveillance. These entities could infiltrate DOD's supply chain to introduce components, such as circuit-board chips and routers, modified to fail, facilitate state or company espionage, or compromise the integrity of DOD's information-technology systems. According to CIO officials, adversarial entities could also potentially gain access to sensitive information through their relationship with DOD contractors. For example, DIA officials identified the possibility of foreign or adversarial entities exploiting companies in DOD's supply chain with financial difficulties, and according to CIO officials, DOD may not always have visibility over foreign entities acquiring a domestic contractor. In 2017, the Office of the Director of National Intelligence released a management background paper discussing supply-chain risks, which stated that the multiple layers and networks of suppliers in this chain can allow foreign adversaries to access the supply chain at multiple points. For example, according to the background paper, a hostile foreign intelligence entity could potentially conceal its presence in government supply chains by operating through multiple front organizations, companies, hackers, and organized crime, making it extremely difficult to discover and counter its actions. The paper also states that adversaries may be able to penetrate the supply chain to access sensitive research and development programs, steal intellectual property and personally identifiable information, insert malware into critical components, and mask foreign ownership, control, or influence of key providers of components and services. Furthermore, in April 2018, the U.S.-China Economic and Security Review Commission issued a report identifying a supply-chain threat to U.S. national security that stems from products produced, manufactured, or assembled by entities that are owned, directed, or subsidized by national governments or entities known to pose a supply-chain or intelligence threat to the United States. DOD officials have also identified an additional risk of contracting with companies that have opaque ownership structures. For example, a 2017 Defense Contract Audit Agency report to Congress described the risk that individuals who would harm deployed troops could receive government contracts or gain access to government installations. Officials we spoke with from the Joint Staff Logistics Directorate also acknowledged the risk that government funds could be provided to contractors owned by a person or entity that is actively opposing U.S. or coalition forces involved in a contingency operation in which service members are actively engaged in hostilities. These adversaries can potentially use opaque ownership structures to disguise their ownership and contract with the government in areas involved in contingency operations, such as Iraq or Afghanistan, to fund their operations or gain access to military bases. DOD Has Taken Steps That Could Address Some Risks Related to Contractor Ownership and Has Opportunities to Systematically Assess These Risks DOD has taken steps that could address some fraud and other risks related to contractor ownership in the procurement process.
However, it has not yet conducted a department-wide assessment of these risks or identified them as a risk area for assessment in its development of a fraud risk management program in accordance with federal internal control standards and leading practices. As mentioned previously, DOD and other federal agencies revised the FAR in 2014 to collect some contractor ownership information. DOD has also begun to consider contractor ownership to address national security risks, including identifying and using contractor ownership information as part of its supply-chain risk analysis in the procurement of national security systems and critical components, avoiding contracting with the enemy, and determining whether contractor facilities can be cleared to access classified materials. Although DOD has taken these actions, it faces a number of challenges in identifying and verifying contractor ownership. To assist the department and its components in identifying and assessing fraud risks, DOD has also begun a department-wide fraud risk management program. As it develops a fraud risk assessment across the department, DOD has opportunities to systematically assess risks related to contractor ownership as part of this larger effort. This fraud risk assessment, if used to inform the development of a risk-based antifraud strategy, could enhance the effectiveness of managing fraud risks for DOD, including those related to contractor ownership. DOD Has Taken Steps That Could Address Some Fraud and Other Risks Related to Contractor Ownership DOD and Others Revised the FAR to Collect Ownership Information to Improve Their Review of Contractor Past Performance before Awarding New Contracts DOD, GSA, and the National Aeronautics and Space Administration amended the FAR in May 2014 to require prospective contractors to self-report their immediate and highest-level entity owner, but not their beneficial owner, as part of contractors' annual registration process in SAM. The agencies added the requirement to support the implementation of business tools to help track contractor performance issues across corporations as well as to improve supply-chain transparency and integrity efforts, among other reasons. According to DOD procurement policy officials, the intent is that the ownership information would be made available in FAPIIS for contracting officers to help identify past-performance issues across corporations to aid with responsibility determinations. The FAR requires contracting officers to consider all relevant information available in FAPIIS when making responsibility determinations, but, according to DOD procurement policy officials, there is no requirement to document whether and how ownership information is considered. According to DOD procurement policy officials, contracting officers' general focus in the responsibility determination process is largely centered on whether the contractor is financially solvent, has the ability to carry out the contract, and has satisfactory past performance. DOD procurement policy officials said that they did not want to be too prescriptive in directing contracting officers on the use of this information, and therefore have not developed policies or procedures or provided training on how to specifically use the ownership information collected.
According to these officials, DOD has not historically considered contractor ownership structures in the responsibility determination process, nor has the agency been aware of the extent to which such structures could pose a range of risks. As discussed below, conducting a department-wide assessment of risks posed by contractor ownership—an action that DOD has not yet taken—would be a key first step for the department before developing such policies and procedures. Within DOD, DLA has taken steps that could address some risks posed by contractor ownership. First, according to procurement officials, DLA provides its contracting officials with a “contractor responsibility matrix,” which outlines mandatory, recommended, and optional steps to take when making a responsibility determination for procurements both below and above the simplified acquisition threshold. Among the steps included, DLA requires contracting officials to review contractors’ attestations to ownership or control by a foreign government to determine whether the prospective contractor is qualified and eligible to receive an award. It also recommends contracting officials obtain responsibility information from other sources, including an internet search of the company’s reviews and of its owners and principals. This step is listed as optional for existing contractors. Further, DLA’s contracting officers are required to review the Defense Contractor Review List to identify any past-performance information. The Defense Contractor Review List is an internal tool used by DLA that is designed to monitor fraud, waste, and abuse for commercial entities and military-unique items. The system is designed to allow DLA to identify and communicate information on its contractors, such as performance ability, delinquency information, suspension and debarment information, and various types of notes that may be relevant to contract performance or procurement decisions. DLA officials told us the Defense Contractor Review List can be used to communicate information or risks about contractor ownership. The Defense Logistics Acquisition Directive requires DLA contracting officers to review any Special Attention Reason Codes in the Defense Contractor Review List and comply with the associated Special Attention Treatment Codes when making responsibility determinations. The Special Attention Reason Codes describe the basis for being on the list, and the Special Attention Treatment Codes provide recommended actions to contracting officers for mitigating risk. According to DLA officials, contractor ownership information is generally not identified in the Defense Contractor Review List. Nevertheless, ownership information may be included in the documentation if, for example, the contracting officer identifies that two or more companies appear to be related or in cases in which there may be suspected collusion. DOD Has Taken Steps to Use Contractor Ownership Information to Address Other Risks Such as National Security Concerns DOD has taken steps in other areas to use contractor ownership information to address risks in specific types of procurements, including those involving national security systems. For example, DOD has taken steps to address national security concerns related to contractor ownership, including conducting threat assessments to identify risks related to supply chains for critical components and national security systems.
DOD has also taken steps to identify contractor ownership information to avoid contracting with the enemy and to address foreign ownership, control, and influence in contracts involving classified information. DOD has outlined policies and procedures in some, but not all, of these areas. As discussed below, conducting a department-wide assessment of risks posed by contractor ownership—an action that DOD has not yet taken—would be a key first step for the department before fully developing such policies and procedures. Steps taken to use ownership information to address supply-chain risks. DOD has taken some steps to identify and consider contractor ownership to address supply-chain risks. For example, DIA considers contractor ownership information when conducting threat assessments as part of its supply-chain risk analysis for procurement of national security systems and critical components, according to DIA officials. Specifically, DOD is able to use public and nonpublic intelligence information to exclude sources that present risks of an adversarial foreign government or other malicious entities infiltrating DOD’s supply chain and stealing information or compromising government systems. DIA officials told us that, as part of this supplier-related threat assessment, they identify and consider ownership information along the supply chain, including beneficial-ownership information. The guidelines in Intelligence Community Standard 731-02 state that a supply-chain threat assessment for a procurement item determined to be mission-critical should at a minimum include information on the contractor’s parent company, ultimate parent company, and subsidiaries. However, the guidance does not specify whether this ownership and related company information is to be independently verified or whether it relies on the contractor’s self-attestations in SAM. According to the guidance, supply-chain threat assessments should also include, at a minimum, information on the contractor’s key management personnel, such as members of the board of directors, officers, general partners, and senior management officials. The guidance does not mention, however, identifying beneficial owners or those who do not have direct control over a contractor but derive substantial economic benefit from it. Steps taken to use ownership information to address legal provisions against contracting with the enemy. Officials from the Joint Staff Logistics Directorate responsible for DOD’s vendor vetting program told us that contractor ownership information, including beneficial ownership, may be identified as part of the intelligence information gathered on vendors by combatant commands to ensure that money is not flowing to contractors owned by a person or entity that is actively opposing U.S. or coalition forces involved in a contingency operation in which service members are actively engaged in hostilities. According to these officials, DOD has not established department-wide policies or procedures to implement reviews of contractor ownership during the process of vetting vendors, but the department is currently developing them. These officials stated that a vendor threat-mitigation working group discusses how to close gaps in information sharing among the intelligence, procurement, and operations communities. Officials also noted some challenges.
Although contracting officers are responsible for determining the responsibility of vendors and whether vendors can perform the terms of a contract, the information that may be available to contracting officers and the actions that they can take are not always clear. For example, the officials we spoke with mentioned concerns that contracting officers are not always able to access or act on intelligence information. GAO recently completed a review of this program in a classified report. Steps taken to address ownership risks in contracts involving classified information. DOD has taken steps to address risks posed by contractor ownership as part of the Facilities Clearance Process. DOD uses the Facilities Clearance Process to determine whether a contractor is eligible to access classified information. DOD has developed written policies and procedures for how contractor ownership, including foreign ownership, control, and influence, is to be investigated and addressed. As part of this process, Defense Security Service (DSS) guidance instructs its officers to identify key management personnel and to assess the risks they pose for possible foreign ownership, control, or influence. DSS guidelines indicate that key management personnel include company officers, directors, and members of a limited liability company, among others. Some key management personnel, such as members of a limited liability company, may also be the owners. According to DSS officials, beneficial owners who benefit financially but do not partake in active management may be identified as key management personnel as part of the clearance process, depending on various factors including the percentage of ownership. As an example, DSS officials stated that an individual who owns 50 percent of a company would not be able to credibly claim that he or she does not control the company. According to the DSS guidance, if foreign ownership, control, or influence is found, mitigation agreements can be put into place to reduce the risk. DOD Has Encountered Challenges in Identifying and Verifying Contractor Ownership DOD officials identified a number of challenges in identifying and verifying contractor ownership, especially if the contractor is actively seeking to misrepresent its ownership. For example, verifying contractor ownership can be challenging because state governments determine the type of information collected during company formation and, as discussed earlier, most states collect minimal ownership information as part of this process. As described earlier, there is no centralized information source or registry on company ownership information in the United States. As a result, contracting officers could face time-consuming challenges in verifying contractor ownership. Further, DOD procurement policy officials stated that workload and resource constraints limit the extent to which they can verify contractor ownership. The nature of ownership information submitted during the SAM registration process also presents challenges to any verification efforts conducted by contracting officers. The ownership information submitted in SAM is self-reported by the prospective contractor, and its accuracy therefore depends on the contractor reporting honestly. DOD officials told us that, for most procurements, with the exception of those involving classified work or other national security concerns, this information is not verified.
A related limitation involving SAM ownership information is that contractors must provide information on the immediate and highest-level entity owners and are not required to report beneficial-ownership information, that is, information on the natural person or persons who own or control, or benefit financially from, the company. Lastly, while the SAM ownership requirement provides some transparency at the prime-contractor level, it does not provide transparency at the subcontracting levels below the prime contractor. Subcontractors are not required to register in SAM and, therefore, are not required to report their ownership. Consequently, DOD generally does not have insight into the ownership of its subcontractors. DOD procurement policy officials noted that this poses particular challenges in identifying fraud and other risks to the supply chain. For example, the contractor itself may not pose a risk, but that does not guarantee that the contractor's suppliers do not pose fraud or other risks. DOD procurement policy officials told us that it would be helpful to require subcontractors to register in SAM and report their ownership. This requirement would be an additional burden on contractors, however, and would need to be balanced with the potential benefit of being able to identify problem actors. Another challenge involves the use of publicly available ownership information, including commercially available data services, by contracting officers to help identify contractor ownership. Depending on how a company is structured, there may be no publicly available ownership information. Furthermore, DOD procurement policy officials told us that public information, including ownership information, could be inaccurate or outdated and potentially expose the department to bid protests from the contractor. Therefore, any external or supplemental information that was not part of the contractor's submission would need to be vetted with the contractor before being used. These officials said that DOD would need to come up with an efficient process to inform the prospective contractor of the additional information and provide due process to allow it the opportunity to refute any information obtained. Additionally, DOD procurement policy officials noted that another difficulty with using a commercial tool to determine ownership is the volume of contracts processed by contracting officials, which amounted to over 570,000 new contracts in fiscal year 2018. For sensitive procurements in which DOD has the authority to use both public and nonpublic information (for example, those involving national security systems or classified work), DSS officials stated that the process of identifying and verifying ownership is lengthy, particularly with complex ownership structures. In some instances, it has taken DSS 1 to 2 years to resolve issues that have arisen when clearing contractors' facilities for access to classified materials. In addition, DSS officials mentioned that the many different types of business structures, including new structures that DSS comes across, create challenges for identifying ownership. According to DIA officials, it is significantly easier to identify the beneficial owner of publicly traded companies than of privately owned companies. DSS officials also mentioned that it is difficult and resource-intensive to monitor changes to contractor ownership, particularly given that they monitor 13,000 facilities.
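The tiered reporting scheme described above can be pictured as a small ownership tree. The sketch below is a hypothetical illustration, not a model of SAM's actual data schema: it shows how the immediate- and highest-level entity owners that SAM captures can be walked mechanically, while the natural persons behind the top entity, and everything below the prime contractor, remain outside the reported data.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entity:
    """Hypothetical ownership record; not SAM's actual schema."""
    name: str
    owner: Optional["Entity"] = None  # immediate entity owner, if any
    beneficial_owners: List[str] = field(default_factory=list)  # natural persons

def highest_level_owner(entity: Entity) -> Entity:
    """Walk the immediate-owner chain up to the top entity."""
    while entity.owner is not None:
        entity = entity.owner
    return entity

# A prime contractor held through one intermediate entity (all names invented).
prime = Entity(
    "Prime Contractor LLC",
    owner=Entity("Holding Co Inc.",
                 owner=Entity("Top Parent Ltd.", beneficial_owners=["Person X"])),
)

print(prime.owner.name)                  # "Holding Co Inc." (immediate owner, reportable)
print(highest_level_owner(prime).name)   # "Top Parent Ltd." (highest-level owner, reportable)
# "Person X", and any subcontractors of the prime, are not captured by the
# SAM requirement, which is the visibility gap the report describes.
```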
According to DOD procurement policy officials, DOD would need to determine which contracts require additional research into contractor ownership and which office would be responsible for conducting the research. Officials noted that DOD does not currently have the resources in place to focus on these kinds of activities because contracting officers are already operating in a constrained environment, lacking the time, resources, or training they need to conduct in-depth reviews or analyses of the ownership aspects of a particular company. According to these officials, DOD should dedicate staff and funds to resolve this problem, including bringing in people with data-analysis and data-mining skillsets to learn from private-sector companies and organizations that already conduct vendor ownership-related risk assessments and data analytics. DOD procurement policy officials said that another strategy to address opaque ownership structures would be to require contractors to report additional ownership information, such as beneficial-ownership information, when registering to do business with the federal government in SAM. However, the officials also noted that, previously, both public-sector organizations and private companies have resisted requirements to provide additional ownership information, due in part to the difficulty of defining ownership. Additionally, regulatory trends within government contracting have generally focused on easing the burden to do business with the government. New requirements to provide additional information may be viewed as an additional burden. A selected group of companies that contracted with DOD in the last 5 years provided us with mixed views on the potential burden of providing additional ownership information. Most small-business contractors we contacted told us that an additional beneficial-ownership reporting requirement would pose little to no further burden on them. In contrast, both of the large, publicly traded companies that similarly contracted with DOD expressed concerns about the complexity and difficulty of reporting their beneficial ownership. One large company noted that beneficial ownership would need to be more narrowly defined for it to determine the resulting regulatory burden. DOD Has Opportunities to Systematically Assess Risks Related to Contractor Ownership as It Develops a Fraud Risk Assessment across the Department DOD Has Begun to Develop a Department-Wide Fraud Risk Assessment DOD has taken steps to establish a department-wide fraud risk management program designed to identify and assess fraud risks. According to DOD's Fraud Risk Management Pilot Program Instructions, in 2017 DOD began efforts to design, implement, and operate an internal control system that addresses fraud risks and to comply with requirements established by FRDAA. As mentioned earlier, FRDAA created requirements for agencies to establish financial and administrative controls for managing fraud risks. FRDAA also requires agencies to report their progress identifying risks and vulnerabilities to fraud affecting payroll, beneficiary payments, grants, purchase and travel cards, and large contracts. As part of this implementation process, and to test the development of its fraud risk management program, DOD conducted a fraud risk management pilot program in 2018 by selecting four components to identify fraud risks, assess controls they have in place to mitigate these risks, and develop mitigation plans, as appropriate.
According to DOD, the pilot program was designed to assist DOD and its components in the development of a department-wide fraud risk management program by identifying and assessing fraud risks in a manner that is aligned with the leading practices within GAO's Fraud Risk Framework. To prepare for this pilot program, in 2017, the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)) conducted a survey requesting that 66 DOD components determine the extent and maturity of control activities currently in place related to the prevention, detection, and response to fraud. The survey asked components to provide, among other things, information on any antifraud programs, key fraud risks identified, and processes for identifying, responding to, and monitoring risks. Responses from the 41 components that replied were scored to determine their fraud program maturity. According to DOD's Fraud Risk Management Pilot Program Instructions, the results of this survey were also used to identify potential vulnerabilities from the FRDAA requirements and guide the development of DOD's pilot program. DOD officials told us that before the recent development of their fraud risk management pilot program, the department did not have a process for assessing fraud risks department-wide. Also, as part of the pilot program, OUSD(C) and the components identified seven fraud schemes that affect large contracts, five of which we discuss above as having the potential to involve risks posed by contractor ownership. Specifically, the pilot program identified fraud schemes involving service-disabled veteran–owned businesses, inflated prices charged by contractors for the services rendered, bid submission by the same two or three offerors on multiple contract opportunities, inclusion of one or more contractors as subcontractors on a bid rigger's proposal, and counterfeit parts. As discussed previously in this report, opaque ownership structures can play a role in carrying out these types of fraud schemes. DOD completed the pilot program in 2018, and in March 2019 began expanding the fraud risk management program department-wide by requesting that DOD components identify fraud risks and the controls in place to mitigate these risks by July 2019. As with the pilot program, the components were requested to identify and assess fraud risks to meet requirements established by FRDAA and allow DOD to identify fraud risks and vulnerabilities facing the department. DOD Has Not Systematically Assessed Risks Related to Contractor Ownership While DOD has taken some steps to identify and potentially address fraud and other risks posed by contractor ownership, it has not conducted a department-wide assessment of these risks or selected them as a risk area for assessment in its development of a fraud risk management program. DOD procurement policy officials told us that contractor ownership and financing structures have not historically been considered by the department. DOD procurement policy officials expressed the need for a strategic assessment of contractor ownership risks at the Office of the Secretary of Defense (OSD) level to deal with the wide range of potential threats that exist. Still, getting support at the senior OSD level to consider the risks posed by contractor ownership and dedicate resources to mitigating these risks is a challenge, according to these officials.
The challenge exists because senior DOD officials may not be aware of the potential magnitude or frequency of risks posed by contractor ownership issues, including the extent to which risks cross multiple areas throughout the department. Additionally, DOD procurement policy officials told us that contracting officers do not have anyone within the department to contact for assistance in determining ownership during the procurement process and that there is no dedicated entity within the department that deals with contractor ownership issues. Federal internal control standards call for agency management officials to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks, including changes to risks, and consider factors such as absent or ineffective controls that provide an opportunity to commit fraud. In a complementary fashion, the Assess component of GAO's Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and to identify and assess risks to determine a fraud risk profile, as described in figure 10 below. According to the Fraud Risk Framework, a fraud risk profile documents the findings from a fraud risk assessment and can help agencies decide how to allocate resources to respond to residual fraud risks. The Assess component also indicates that relevant stakeholders, including those with responsibilities for specific control activities and with knowledge of emerging fraud risks, should be involved in the assessment process. This could include a variety of internal and external stakeholders, such as general counsel, contractors, or other external entities with knowledge about emerging fraud risks or responsibilities for specific control activities. For example, the DOD Office of Inspector General and its work on emerging risks involving contractor ownership may inform the fraud risk assessment process and help managers to identify fraud risks. Additionally, an assessment of ownership risks could include relevant DOD officials responsible for assessing and responding to national security risks, such as those responsible for assessing supply-chain risks in national security system procurements, vetting vendors to ensure DOD avoids contracting with the enemy, and determining whether contractor facilities can be cleared to access classified materials. Including relevant stakeholders would allow DOD to leverage the knowledge and experience of such officials and more comprehensively identify risks related to contractor ownership. Further, it would allow DOD to better understand the extent to which risks cross multiple areas throughout the department. At a fundamental level, assessing risks arising from contractor ownership would allow DOD to take a strategic, risk-based approach to identifying and managing these risks. In addition, a risk assessment would help DOD better understand the magnitude and prevalence of these risks, including the effects these risks have from both a fraud and national security perspective, and whether certain types of procurements are more vulnerable to contractor ownership risks.
Further, conducting a department-wide assessment of risks posed by contractor ownership would assist the department in its evaluation of whether its existing control activities are sufficient and designed to effectively respond to these risks or whether additional control activities are needed. For example, it would allow DOD to better determine how contractor ownership information should be used and verified, and whether additional ownership information should be collected. In accordance with leading practices, DOD would then be positioned to design and implement specific control activities to prevent and detect contractor ownership-related fraud and make informed decisions on how best to use its resources. Conclusions DOD is the largest contracting agency in the federal government in terms of contract dollars obligated and number of contracts awarded. The scope and scale of this activity make DOD procurement inherently susceptible to fraud. Our various analyses and discussions with procurement officials from across the department identified risks posed by contractors with opaque ownership that involve various types of procurements. DOD has taken some steps that could address some risks posed by contractor ownership in the procurement process. It has the opportunity to include these risks as part of its department-wide fraud risk assessment at a strategic level. Assessing risks related to contractor ownership, as a fundamental first step, would help DOD better determine whether certain types of procurements are more vulnerable to this type of risk. Further, it would help DOD determine whether additional policies and procedures are needed to articulate how officials should use and verify the ownership information it collects, or to require additional ownership information. We recognize that collecting additional ownership information, including beneficial-ownership information, could pose compliance burdens for contractors, and regulatory trends have generally focused on easing the burden to do business. Additionally, verifying contractor ownership can be challenging and time-consuming. Nevertheless, having a thorough assessment of contractor-ownership risks will better position DOD to make informed decisions on how best to use its resources and help ensure that the department's fraud risk management program is organized and targeted to manage risks in a prioritized manner. Lastly, involving relevant stakeholders with knowledge of emerging risks could help inform other types of risk assessments across the department, including national security concerns. Doing so will contribute to the effective implementation of leading fraud risk management practices when considering the existing and emerging risks to the department. Recommendation for Executive Action The Office of the Under Secretary of Defense (Comptroller) (OUSD(C)) should include an assessment of risks related to contractor ownership as part of its ongoing efforts to plan and conduct a department-wide fraud risk assessment. As part of this assessment, consistent with leading practices, DOD should involve relevant stakeholders with knowledge of emerging risks and use this information to help inform other types of risk assessments across the department, including for national security concerns. (Recommendation 1) Agency Comments We provided a draft of the sensitive version of this report to DOD and GSA for comment.
In commenting on a draft of the sensitive version of this report, DOD concurred with our recommendation and provided additional written comments outlining current and planned efforts in response to our recommendation. These written comments were deemed sensitive by DOD and have been omitted from this report. In an email, GSA stated that it did not have any comments. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Administrator of GSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report is a public version of a sensitive report that we issued on September 12, 2019, with the objectives of (1) identifying types of fraud and other risks, if any, that contractors with opaque ownership could pose to the Department of Defense (DOD) in the procurement process and (2) assessing whether DOD has taken steps to address risks posed by contractor ownership in the procurement process. The sensitive report included the results of data analysis we conducted to identify offerors who might disguise their ownership to create the appearance of competition. DOD deemed some of the details from this analysis to be sensitive information that must be protected from public disclosure. This report also omits sensitive information about ongoing investigations, certain internal controls and vulnerabilities, and actions taken to address some of these vulnerabilities. Although the information provided in this report is more limited, it addresses the same overall objectives as the sensitive report and uses the same methodology. To address our first objective, we researched information on closed cases investigated by the Defense Criminal Investigative Organizations or prosecuted by the Department of Justice (DOJ) from calendar years 2012 through 2018. These cases were identified by researching press releases from the websites of the DOJ Office of Public Affairs, Offices of the U.S. Attorney, DOD Office of Inspector General, and Defense Criminal Investigative Organizations. We also researched legal databases and news articles involving DOD contractors to identify federal court cases and federal agency decisions. We reviewed GAO bid-protest decisions to identify cases in which a contractor may have failed to disclose foreign ownership or concealed beneficial-owner information to obtain contracts that it was not eligible to receive. We interviewed investigators from the Defense Criminal Investigative Organizations and DOD contracting offices to supplement our research. For each case identified, we reviewed the associated federal court filings or DOJ press releases to determine the outcome of the case and how contractor ownership was used or concealed to carry out the offense.
To identify additional types of risks that may not have been identified through our case-study research, we interviewed officials from the General Services Administration (GSA) and officials from across DOD, including the Office of Inspector General, Defense Criminal Investigative Organizations, Defense Pricing and Contracting, the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)), the Office of the Chief Information Officer, Defense Intelligence Agency (DIA), Defense Security Service (DSS), Defense Logistics Agency (DLA), Defense Contract Management Agency (DCMA), and Defense Contract Audit Agency, and relevant procurement policy officials from the Departments of the Army, Navy, and Air Force. We examined known risks identified through our case-study research and interviews with DOD officials; however, these risks are not necessarily representative of the extent or types of risks that exist. There may be additional fraud or other risks and cases related to contractor ownership that are presently undiscovered and are not identified in our report. Additionally, we further examined the risk that contractors could be disguising their ownership to create the appearance of competition on a contract to inflate prices by analyzing bid response data from GSA's Federal Business Opportunities (FBO) website and registration data in GSA's System for Award Management (SAM). Specifically, we analyzed responses to approximately 2,700 solicitations submitted for fiscal years 2015 through 2017 to identify indications of potentially related offerors bidding on the same solicitation. We selected this date range because fiscal year 2015 was the first year in which the Federal Acquisition Regulation (FAR) required offerors to report their ownership and fiscal year 2017 was the most recent complete year of data at the time of our analysis. To determine whether offerors were potentially related, we analyzed the data for instances in which different offerors shared certain information. Offerors sharing information does not definitively prove that the offerors are related or share ownership; however, it is an indicator that these offerors may not be independent of each other. For offerors we identified as potentially related, we researched company websites and third-party data sources to determine whether we could find other indicators of a relationship. Further, we provided a list of the potentially related offerors we identified to the relevant DOD contracting office to determine whether the offeror disclosed any relationships to other offerors or whether the contracting officer was otherwise aware of the relationship with another offeror. The results of our analysis are limited to the approximately 2,700 solicitations we reviewed and are not generalizable to other DOD solicitations. To assess the reliability of the data used in our analysis, we performed electronic testing to determine the validity of specific data elements in the FBO bid module and other datasets. We also reviewed documentation related to these databases, compared the data to published sources and source documentation maintained in the DOD contracting files, and interviewed GSA officials responsible for these databases. We determined that the data were sufficiently reliable for the purposes of analyzing potential ownership relationships. To address our second objective, we reviewed federal laws, the FAR, DOD regulations, directives, instructions, policies, procedures, and training documents.
We also reviewed OUSD(C) fraud assessment templates and preliminary results from DOD's fraud risk management pilot program. We interviewed procurement policy officials from GSA, Defense Pricing and Contracting, DLA, and the Departments of the Army, Navy, and Air Force as well as officials from the Office of the Chief Information Officer, OUSD(C), DIA, DSS, DCMA, the Defense Contract Audit Agency, the Joint Staff Logistics Directorate, the Defense Industrial Policy office, members of DOD's Procurement Fraud Working Group, and the Naval Contracting Council to discuss how DOD has addressed risks. We also interviewed officials from the Defense Acquisition University to determine how, if at all, DOD trained contracting officials to consider risks posed by contractor ownership. To assess these efforts, we compared these documents and the information from our interviews to federal internal control standards and the leading practices outlined in GAO's Framework for Managing Fraud Risks in Federal Programs. To gain the perspectives of contractors on whether a requirement to report beneficial-ownership information when doing business with DOD would impose a burden on companies, we researched and contacted several government contractors' associations. The contractors' associations we contacted included associations for large, medium, and small businesses working in a variety of industries doing business with the government. We received responses to our inquiries from three associations. Officials from these three associations forwarded our inquiries to their members, and we received responses from 16 members. These 16 members were from a range of business sizes and industries. The perspectives gained from our queries are limited to the contractors from whom we received a response and are not generalizable to all contractors. We conducted this performance audit from August 2017 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with DOD from September 2019 to November 2019 to prepare this version of the original sensitive report for public release. This public version was also prepared in accordance with these standards. Appendix II: Summary of GAO Review of Cases Adjudicated or Settled from Calendar Years 2012 through 2018 The table below summarizes the information we reviewed involving Department of Defense (DOD) contractors or subcontractors that provided false information about ownership or corporate structure to allegedly commit fraud. We identified cases involving contractors that posed financial and nonfinancial risks to DOD (see app. I for additional details on the methodology used). Financial risks we identified involved DOD contractors using opaque ownership structures to fraudulently inflate prices on DOD contracts. We also identified subcontractors that misrepresented ownership or shared common ownership with a contractor for the purpose of obtaining awards or overcharging the government.
Nonfinancial risks we identified involved contractors bidding on and obtaining contracts that they were not eligible to receive, including contracts set aside for small businesses owned by service-disabled veterans or socially and economically disadvantaged individuals. We also identified cases involving ineligible foreign manufacturers creating domestic shell companies to obtain government contracts. As discussed in our report, DOD has not assessed risks posed by contractor ownership; therefore, the magnitude and prevalence of these risks are not known. There may be additional risks and cases related to contractor ownership that are not identified below. The 32 cases below were adjudicated or settled from calendar years 2012 through 2018. As shown in the table below, we used public court records and Department of Justice and DOD press releases to identify the type of fraud and the calendar years in which the cases were adjudicated or settled, a summary of how the contractor's ownership was disguised or obfuscated to carry out the fraud schemes, the dollar amount awarded or received, to the extent available in each case, and the government agencies affected by the fraud. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following staff members made key contributions to this report: Tonita Gillich (Assistant Director); Tracy Abdo (Analyst-in-Charge); Marissa Esthimer; Colin Fallon; Mollie Lemon; Maria McMullen; Madeline Messick; Dustin Milne; Lauren Ostrander; Daniel Purdy; Daniel Silva; Sabrina Streagle; and Shana Wallace. Others who contributed to this report include Steven Campbell, Suellen Foth, and Pamela Snedden.
Why GAO Did This Study DOD generally accounts for about two-thirds of federal contracting activity. Some companies doing business with DOD may have an opaque ownership structure that conceals other entities or individuals who own, control, or financially benefit from the company. Opaque ownership could be used to facilitate fraud and other unlawful activity. The House Armed Services Committee report on the National Defense Authorization Act for fiscal year 2018 included a provision for GAO to examine the risks posed by contractors with opaque ownership and DOD's processes for identifying ownership. This report identifies types of fraud and other risks that opaque contractor ownership poses to DOD in the procurement process and assesses whether DOD has taken steps to address those risks. GAO reviewed applicable laws and regulations and interviewed DOD officials, including procurement staff and criminal investigators. GAO researched cases from 2012 through 2018 in which contractors may have concealed or failed to disclose ownership information. GAO compared DOD's efforts to leading practices in GAO's Fraud Risk Framework. This is a public version of a sensitive report that GAO issued in September 2019. Information that DOD deemed sensitive involving ongoing investigations and certain internal controls and vulnerabilities has been omitted. What GAO Found The Department of Defense (DOD) faces several types of financial and nonfinancial fraud and national security risks posed by contractors with opaque ownership. These risks, identified through GAO's review of 32 adjudicated cases, include price inflation through multiple companies owned by the same entity to falsely create the appearance of competition, contractors receiving contracts they were not eligible to receive, and a foreign manufacturer receiving sensitive information or producing faulty equipment through a U.S.-based company. For example, one case involved an ineligible foreign manufacturer that illegally exported sensitive military data and provided defective and nonconforming parts that led to the grounding of at least 47 fighter aircraft, as illustrated below. DOD has taken some steps that could address some risks related to contractor ownership in the procurement process but has not yet assessed these risks across the department. DOD, in coordination with other agencies, revised the Federal Acquisition Regulation in 2014 to require contractors to self-report some ownership information. DOD has taken steps to identify and use ownership information—for example, as part of its supply-chain risk analysis when acquiring critical components. DOD has also begun a department-wide fraud risk management program, but it has neither assessed risks of contractor ownership across the department nor identified risks posed by contractor ownership as a specific area for assessment. Assessing risks arising from contractor ownership would allow DOD to take a strategic approach to identifying and managing these risks, make informed decisions on how to best use its resources, and evaluate its existing control activities to ensure they effectively respond to these risks. What GAO Recommends GAO recommends that DOD assess risks related to contractor ownership as part of DOD's ongoing efforts to assess fraud risk. DOD should use this information to inform other types of risk assessments, including national security concerns. DOD concurred with GAO's recommendation.
Background The federal government's civilian real-property holdings include thousands of leased office buildings and warehouses across the country that cost billions of dollars annually to rent, operate, and maintain. GSA's Public Building Service acquires space on behalf of the federal government through new construction and leasing and acts as a caretaker for federal properties across the country. The type and amount of space for each lease varies based on a particular agency's need, and GSA categorizes leases by value depending on factors such as square footage and location. As of fiscal year 2018, the Public Building Service held or leased space in 8,681 buildings or other assets and maintained an inventory of more than 370 million square feet of workspace for 1.1 million federal employees, plus support contractors. The federal leasing process contains several stages, and brokers can be involved in many parts of this process (see fig. 1) as a way to supplement the work of GSA's leasing staff. For example, in the "requirements development" phase, GSA can task brokers with drafting project milestones and working with federal agencies that are seeking building space to provide a complete requirements package. In the "lease acquisition" phase, brokers can conduct market research on rental rates, negotiate rates and terms of the lease, and prepare final contract forms. For such work, brokers can earn a commission based on a percentage of the aggregate lease value. However, pursuant to the Federal Acquisition Regulation, brokers are not allowed to complete some activities, as contractors cannot be used for the performance of inherently governmental functions. Consequently, brokers cannot complete all of the leasing tasks required to execute a federal government lease. For example, according to GSA officials, a broker cannot sign a lease contract on behalf of the federal government with a property owner since that action is considered an inherently governmental function. The broker may prepare the final lease contract, but GSA's contracting officials are responsible for signing the lease. Even when a broker is involved in the leasing process, GSA officials oversee and approve the broker's activities. Prior to 2015, GSA had implemented various changes to how it used brokers to assist with its leasing program. Before 1997, GSA's in-house staff completed all leasing acquisition work, but starting in the late 1990s, downsizing initiatives at GSA reduced the number of staff and therefore its in-house capacity to acquire leases. In 1997, GSA began to increase its use of brokers by signing regional contracts for broker services and paying brokers with appropriated funds. By 2003, brokers were completing approximately 20 percent of GSA's leasing work. In 2003, GSA analyzed the advantages, disadvantages, and costs of different types of contracting options for using the brokers, including having them negotiate leases on a nationwide basis, as compared to designated geographic zones or local areas. Based on that analysis, GSA concluded that contracting for brokers to negotiate leases nationally represented the best option available and formalized the program as the National Broker Contract program. In 2004, under this program, GSA awarded nationwide contracts to four commercial real-estate brokerage firms, moving from a regional to a national approach.
In 2010, GSA established the second iteration of the broker program (called the National Broker Contract 2), which maintained a similar nationwide structure with four national contracts to brokers. We have previously found that GSA has been unable to demonstrate cost savings with its broker program. For example, in 2007, we found that GSA was unable to quantify savings from the program and recommended that GSA develop processes for doing so. In response to our recommendation, GSA conducted a comparative analysis of prior agency contracts and broker contracts; this analysis demonstrated program cost savings. However, GSA's subsequent efforts to demonstrate continued cost savings were less conclusive. For example, in 2012, when GSA attempted to compare rental rates negotiated by brokers with those negotiated by in-house staff, the agency found little difference between the two and noted that the data were insufficient to conduct a meaningful comparison. In 2013, we found that GSA had not linked its goals and metrics for evaluating the broker program to the anticipated cost savings in rental rates. As a result, GSA had no means of evaluating and reporting on this aspect of the program, and the value of the broker program in terms of cost savings continued to be unclear. We recommended that GSA link program goals to anticipated cost savings and develop and implement a means of evaluating and reporting program results. In response, GSA developed a metric for measuring the efficacy of utilizing brokers to assist with lease workloads and a performance report that included information on financial savings and productivity, among other things. We found limitations with these efforts, however, as discussed in the second section of this report. GSA Has Changed Its Broker Program to Allow Greater Flexibility and Has Prioritized Using Brokers for High-Value Leases GSA Has Increased the Number of Brokers and the Flexibility for Using Them GSA has made changes to the broker program to allow more brokers to participate and to increase GSA's flexibility in its use of brokers. In 2015, GSA changed the program's name to the GSA Leasing Support Services program (GLS). Under this version of the program, GSA moved from using four brokers on a nationwide basis to designating brokers within four geographical zones. GSA awarded contracts to two or three brokers for each zone (see table 1). Thus, each GLS contract covers a zone rather than the entire country, as was previously done under the National Broker Contract. Currently, there are six GLS brokers, and each broker can serve up to two zones. Of the six GLS brokers, five participated in the National Broker Contract programs. According to GSA officials, modifying the program to operate by zone provided a greater opportunity to involve more brokers, increase competition and local market specialization, and strengthen relations among brokers and GSA regional offices. In addition, awarding contracts by zones rather than the entire country has allowed small businesses to participate as brokers, and GSA selected two small-business brokers as prime contractors: Carpenter Robbins Commercial Real Estate, Inc., and Public Properties LLC. Multiple GSA regional offices oversee and monitor brokers within each zone, except for the National Capital region, which is its own zone. In early 2020, GSA plans to announce the brokers that will be involved in the fourth iteration of the program.
In this iteration, called GLS Plus, the zones and number of brokers within each zone will remain the same. In addition to establishing the zones, GSA has also allowed its regional staff more flexibility in deciding how to use brokers. During the past two iterations of the National Broker Contract, brokers had to be involved during the entire leasing process. In the GLS program, regional GSA officials choose broker services for specific parts of the leasing process based on the needs of the region. For example, several regional officials said they could now request brokers to perform market research or negotiate a lease, while GSA staff performs other tasks to complete a lease. Officials in three of the six regional offices we interviewed said this change provided additional flexibility in how GSA involves the brokers in the leasing process and helped balance the workload of GSA staff. In GLS Plus, GSA will request that brokers provide additional post-award services such as evaluating pricing for proposed renovations and monitoring on-site construction progress for the leased facility. Brokers Are Used Primarily for High-Value Leases In the GLS program, about 64 percent of the brokers' workload was high-value leases. GSA officials told us they typically task brokers to negotiate these high-to-moderate-value leases because brokers are paid through commissions as a percentage of the lease's value. Since brokers earn more money with high-value leases, they have a greater incentive to participate in the program. Consistent with what GSA officials said, the agency's leasing data showed that leases involving brokers tended to have larger square footage and higher rents than leases that did not involve brokers, as shown in table 2. According to GSA's leasing data from October 2005 to July 2019, the agency used brokers in about 37 percent of all leases. Available data did not clearly demonstrate the extent to which brokers negotiated lower lease rates than GSA's in-house staff for similar types of properties. Although there are differences in rental rates and other outcomes of leases involving brokers compared to those that do not, it is difficult to determine whether these differences are due to having brokers involved in the process as opposed to the characteristics of the leases themselves. Various factors affect rental rates in federal leases, such as local market areas, type of facility, square footage, and unique requirements, among other issues. According to the Public Buildings Service's Commissioner, brokers are more successful at negotiating lower lease rates relative to the market than GSA in-house staff, and using brokers provides savings to the government. GSA officials said they believe this result is in part because brokers negotiate what are called "commission credits"—a percentage of the total commission that goes back to the federal tenant agency in the form of a reduction in rent—which can result in lower costs for federal tenant agencies. In contrast, several lessors (property owners) said that when GSA uses brokers to negotiate leases, broker commissions have to be paid by the lessor and that this cost is ultimately passed on to GSA's federal-agency-tenant clients. Furthermore, three real estate economists we interviewed indicated that real-estate sale prices and rental rates are driven primarily by competitive market forces and thus would not be heavily influenced by broker negotiation.
These economists were not aware of any research indicating that brokers could affect commercial real estate rental rates. Broker Leases Include Commissions to the Broker and Credits to Tenant Agencies As previously noted, GSA typically tasks brokers to negotiate high-to-moderate-value leases. A broker-negotiated GSA lease includes a total commission negotiated between the lessor and the broker that represents a percentage of the aggregate lease value. This total commission comprises the standard commission paid to the broker and commission credits given back to the federal tenant agency. In the GLS program, the total commission sometimes includes a "best value" commission that a broker may earn on top of the standard commission. The total commission includes the following three components: Standard Commissions. The standard commission a broker earns is normally a percentage of the total lease value. Our analysis showed that brokers have earned about $390 million in standard commissions since fiscal year 2006 (see table 3). For the GLS program, brokers had earned just over $35 million as of July 2019. At the time of our review, the program was ongoing, and brokers were still completing leases. Best Value Commission. Under the GLS, in addition to the standard commission that a broker always earns, the broker can be paid an additional commission, called the "best value commission," by negotiating a lease rate below an established market rate target and earning high evaluation ratings from GSA. Specifically, the best value commission was expected to incentivize brokers to negotiate lower rental rates. This best value commission is paid out of the commission credit the tenant agency would otherwise receive and does not increase the total cost of the commission. As of July 2019, brokers had collected about $3.5 million in best value commissions during the GLS program. GSA plans to eliminate the best value commission in the new iteration of its broker program, GLS Plus. Officials said determining whether brokers met the best value criteria was burdensome for regional officials and that brokers prefer a steady volume of future government leases as an incentive. Similarly, two real estate economists we interviewed said that the best value commission was unnecessary to incentivize brokers to seek the best rates for their GSA client, and that the prospect of additional future business negotiating government leases was a sufficient incentive. Commission credits. The commission credit is a percentage of the total commission that goes back to the federal tenant agency in the form of a reduction in rent. As part of the total commission, brokers have negotiated over $340 million in commission credits. GSA estimates that its future GLS Plus program will generate $129 million in commission credits throughout the duration of the program. Lessors and real estate economists we interviewed highlighted various issues about GSA's commission structure, including commissions paid to the broker and commission credits paid back to the tenant agency. The interviewees had different perspectives on whether GSA's broker program and the current commission structure are beneficial to the federal government. Some questioned whether the use of brokers saves the federal government money. As previously noted, according to GSA officials, lessors, through the commission, pay the brokers, which is customary in commercial real estate. A simplified illustration of this commission arithmetic appears below.
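To make the commission arithmetic concrete, the following sketch walks through a hypothetical broker-negotiated lease. Every figure in it (the lease value, the commission rate, the broker's share, and the best value amount) is invented for illustration; actual rates and splits are negotiated lease by lease and are not specified in this report.

```python
# A minimal sketch of the commission components described above.
# All dollar figures and rates below are hypothetical, not GSA data.

aggregate_lease_value = 10_000_000   # hypothetical 10-year lease value, in dollars
total_commission_rate = 0.04         # hypothetical rate negotiated between lessor and broker

total_commission = aggregate_lease_value * total_commission_rate    # $400,000

# The broker keeps a standard commission; the remainder flows back to the
# federal tenant agency as a commission credit (a reduction in rent).
standard_commission = total_commission * 0.5     # hypothetical 50/50 split: $200,000
commission_credit = total_commission - standard_commission          # $200,000

# Under GLS, a "best value" commission, when earned, was paid out of the
# commission credit, so it did not increase the total commission.
best_value_commission = 25_000                   # hypothetical amount
credit_to_tenant = commission_credit - best_value_commission

print(f"Broker receives:    ${standard_commission + best_value_commission:,.0f}")
print(f"Tenant rent credit: ${credit_to_tenant:,.0f}")
print(f"Total commission:   ${total_commission:,.0f}")  # unchanged by the best value payment
```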
Although GSA does incur some costs from appropriated funds because GSA officials oversee the work of brokers, GSA officials noted that GSA does not currently use its own appropriated funds to compensate brokers for services performed as part of the broker program. However, four lessors that we interviewed said that broker commission costs are passed through to federal tenants in their leases. These lessors questioned the benefits of using brokers for federal leases. In contrast, two real estate economists we interviewed said that GSA could potentially be missing cost-saving opportunities when brokers are not used because rental rates are generally set by competitive market forces and because GSA's in-house staff may not negotiate commission credits. GSA officials, however, disagreed with this statement, saying in-house staff generally seek to receive credit or concessions for leases they negotiate since there is no commission to be paid to a broker. Another real estate economist we interviewed indicated that paying brokers on a fixed-price basis, rather than a commission basis, could result in lower costs to the government because this type of payment structure could involve brokers bidding for lease acquisition assignments in fixed-price terms only. This real estate economist also said that this approach could potentially address past concerns involving GSA's commission structure. GSA Faces Limits in Assessing Value of Its Broker Program GSA Has Established Various Goals for Its Broker Program Over the years and with different iterations of the program, GSA has established various goals for the broker program; most of these goals pertain to cost savings. During our review, GSA officials also said that the main purpose of the program is to serve as a workforce multiplier for the regional offices—providing needed personnel to complete leases that GSA does not have enough staff to complete on its own. Our review of GSA documents and interviews with GSA staff identified various program goals, as shown in table 4. For GLS Plus, the fourth iteration of the broker program, which GSA plans to start in mid-2020, the proposed goals include achieving taxpayer savings, improving the customer experience, and leveraging broker expertise. GSA officials also said that maximized productivity would be a goal of the program. GSA Relies on Data to Measure Cost Savings That Some Stakeholders Said Is Inaccurate As previously discussed, one of the main goals of the broker program is to avoid costs and save the taxpayer money. In November 2019, GSA headquarters officials said that they demonstrate cost savings of the broker program through its Lease Cost Avoidance Plan, which aggregates cost savings from several efforts, including negotiating leases below market rates, reducing rented square footage, and leasing vacant space. A metric within the Lease Cost Avoidance Plan that seeks to show whether leases are negotiated below market rates is called Lease Cost Relative to Market, which is a comparison of the negotiated rental rate to the target market rate. According to this metric, as reported by GSA, over the last 3 years, brokers have negotiated 303 leasing deals, 60 percent of which were below the market rate (17.8 percent below the market rate, on average), which helped GSA avoid $676 million in costs. In addition, GSA found that brokers negotiated better rental rates than GSA in-house staff, on average.
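As a rough sketch of how a percent-below-market comparison of this kind might be computed, consider the toy calculation below. The lease records, rates, and the way cost avoidance is aggregated are all assumptions made for illustration; GSA's actual Lease Cost Relative to Market methodology is not published in this report.

```python
# Toy version of a "lease cost relative to market" comparison.
# Rates are dollars per square foot per year; all records are hypothetical.
leases = [
    {"negotiated_rate": 28.50, "target_market_rate": 30.00, "sq_ft": 50_000, "term_years": 10},
    {"negotiated_rate": 41.00, "target_market_rate": 39.00, "sq_ft": 20_000, "term_years": 5},
    {"negotiated_rate": 22.75, "target_market_rate": 25.00, "sq_ft": 80_000, "term_years": 15},
]

below_market = [l for l in leases if l["negotiated_rate"] < l["target_market_rate"]]
share_below = len(below_market) / len(leases)

# One plausible way to express "cost avoided": the rate savings applied
# over the full term of each below-market lease.
cost_avoided = sum(
    (l["target_market_rate"] - l["negotiated_rate"]) * l["sq_ft"] * l["term_years"]
    for l in below_market
)

print(f"{share_below:.0%} of leases were below the target market rate")
print(f"Illustrative cost avoided: ${cost_avoided:,.0f}")
```

Note that a calculation of this form is only as good as its target market rates, which is why the reliability of the "Bullseye" reports discussed below matters so much.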
GSA reported, for example, that in fiscal year 2018, 56 percent of leases negotiated by brokers were below the market rate, compared with 38 percent of leases negotiated by GSA in-house staff. As discussed previously, however, there are various factors, including the type of lease, that may account for these differences. This metric is calculated primarily using market lease rates that GSA determines using a tool it developed called the "Bullseye" report. To develop the report, GSA gathers available market data from commercial real estate databases and compiles these data to identify local information, analysis, and insight regarding the local real estate submarket. According to GSA guidance, the success of the GLS program is dependent on the brokers' negotiating competitive lease rates through full utilization of the Bullseye report and standardized negotiation objectives. The guidance further states that the Bullseye report should be utilized by GSA regional offices as a tool to make informed leasing decisions on behalf of the U.S. government and can provide the necessary backup documentation to aid leasing personnel in their negotiations with an offeror. While GSA headquarters officials noted that this tool is adequate for this use, other GSA officials and brokers had concerns about whether the Bullseye report accurately reflects market rates and conditions. GSA regional officials we interviewed had mixed views on the accuracy of the Bullseye report. For example, several officials questioned the accuracy of the Bullseye report or noted its limitations. In addition, four of the six brokers found the Bullseye report to be rarely or only sometimes accurate. As a result, brokers told us that they found it difficult to negotiate a rental rate at or below the target Bullseye rate. In addition, two lessors we interviewed agreed that the gap between the Bullseye report and local market rates potentially affected negotiations with GSA. Furthermore, brokers publicly questioned the accuracy of Bullseye reports in written responses to GSA's draft solicitation for the 2020 GLS Plus program. They also suggested that the new broker program should include an adjudication process for revisiting Bullseye target rates. Selected GSA regional officials and brokers in our review identified several factors that may affect the accuracy of the Bullseye reports: Geography. According to GSA officials, the Bullseye report includes market rates from over 85 major markets in the U.S. However, GSA regional officials and brokers we interviewed said that the Bullseye report provides limited submarket rental rates for specific areas or neighborhoods within large metropolitan areas. This can be problematic because there can be significant rental differences among different areas within a given market. For example, in response to GSA's draft solicitation for the new broker program, brokers stated that they found the Bullseye target rates to be an obstacle in the rapidly moving West Coast urban markets, and there can be significant discrepancies between Bullseye rates and actual market rates. One selected GSA regional office in our review provided examples of the Bullseye target rate being below the market rate in several instances. For example, the average asking rent for office space in San Diego, CA, was 36 percent higher than the Bullseye rate. Federal requirements. According to GSA regional officials and brokers, the Bullseye report does not take into account the unique building requirements for federal leases.
For example, GSA officials and brokers reported that the Bullseye report develops a target rental rate based on certain classes of buildings (A, B, and C). Although the government generally accepts A and B class buildings, C buildings are generally unacceptable for federal leases. Brokers we interviewed said including these C building rates could lower the market rates identified by the Bullseye market report for certain areas. GSA officials said they are not able to remove the C building rates from the Bullseye report because the data are purchased from a private-sector data source that includes various building rates from a local area. In addition, brokers said the Bullseye report does not take into account the unique requirements of federal buildings. For example, federal law enforcement agencies require certain security measures, such as special entrances. Brokers reported that landlords may increase their pricing to account for these factors. Brokers also identified these issues in the draft solicitation for the new broker contract, noting that the Bullseye does not use comparable buildings that take into account the uniqueness of a specific space requirement. Lag time. Several brokers and GSA officials told us that federal leases generally take significantly longer than commercial leases due to the federal leasing acquisition process. As a result, GSA officials and brokers found that by the time a lease was awarded, which could be years later, the initial target market rate provided by the Bullseye report was outdated. GSA headquarters officials told us that if the Bullseye report is over a year old, an updated report should be requested, although this requirement is not mentioned in the 2016 Bullseye guidance memo. Officials from selected GSA regional offices gave varying accounts of whether those updates occurred. Furthermore, several brokers in our review told us that they found that the Bullseye report is not always updated after a year. One broker told us that there have been several instances when a lease is about to be awarded—which can be 1 to 2 years after the initial Bullseye report was generated—and the tenant agency is not willing to accept the rental rates negotiated in the lease, or GSA's leasing staff is hesitant to execute the lease due to differences between the Bullseye rate and the actual lease contract rate. This can cause significant delays or result in the project being canceled altogether. Concerns about the reliability of the Bullseye report call into question whether the Lease Cost Relative to Market metric can accurately demonstrate how brokers' efforts lead to cost savings, either through achieving rental rates below market or better rates than GSA in-house staff. Even though GSA provided us cost-savings data in November 2019 based on this metric, at other times during our review, GSA officials described limitations and questioned the efficacy of using this metric. Specifically, in April 2019, GSA headquarters officials told us that GSA had stopped using this metric because GSA found it unreliable. For example, GSA found the comparison was not indicative of broker effectiveness or ability to negotiate low rental rates. At that time, GSA officials cautioned against using the Lease Cost Relative to Market data for comparative purposes, such as comparing broker performance to in-house GSA staff performance.
The officials said it is impossible to assess the financial information of a lease transaction and evaluate a specific procurement method—using brokers or not—without talking directly to the GSA in-house staff responsible for overseeing the procurement. Furthermore, GSA officials told us in April 2019 that leases negotiated by brokers were not comparable to leases negotiated by in-house staff because they work on different types of leases. In December 2019, however, GSA officials told us that GSA does still track this metric, uses it for GSA's Lease Cost Avoidance Plan, and believes brokers can achieve better deals for the government than in-house staff. Nonetheless, GSA officials told us that they have not assessed the reliability of, or made any changes to, how they calculate the Bullseye report. According to Standards for Internal Control in the Federal Government, management should use quality information to achieve the entity's objectives and to inform decision-making. Until GSA assesses the reliability of the information used to calculate reported cost savings for the broker program, it is hindered in its ability to fully assess the effectiveness of the program. GSA Lacks Measures of Brokers' Effectiveness As a Workforce Multiplier As noted above, throughout the various iterations of the program, GSA has identified various goals for the broker program. During this review, a key goal consistently stated by GSA officials we interviewed was the use of the broker program as a workforce multiplier—providing additional people that enable GSA to complete leasing work it would otherwise be unable to complete. The effectiveness of the broker as a workforce multiplier is significant to GSA because its leasing staff has decreased by over 50 percent since the late 1990s, from over 800 personnel to fewer than 400 in 2019. Consequently, GSA staff rely on brokers to deliver leased space to federal agencies. GSA officials told us that a broker may not accomplish a lease faster or cheaper than GSA staff but that the agency does not currently have the personnel to complete its leasing work. GSA's Strategic Plan 2018-2022 also states that GSA will use brokers where appropriate to improve efficiency in awarding leases. Although GSA has set target goals for utilizing brokers and tracks the number of leases regional officials assign to brokers, we found that GSA had limited ability to track how using brokers to augment GSA's leasing workforce achieves results for GSA's leasing efforts. For example, GSA has increased its broker utilization targets in recent years, as described in figure 2, requiring regional offices to award more lease projects to brokers. Moreover, GSA tracks performance relative to these targets, and regional officials in our review told us that they are evaluated based on the number of leases they task to brokers. Additionally, in April 2019, GSA developed a model to project, on average, the number of hours a broker saves GSA's lease-contracting officer and project manager. The model estimated that a broker saved the lease-contracting officer and project manager roughly 175 and 125 hours, respectively, per project over a 3-year period. GSA then multiplied the hourly salary of GSA leasing personnel by the potential number of hours saved to generate its reported personnel savings of $3 million per year (a simplified version of this calculation is sketched below). However, tracking these outputs alone does not provide GSA with a means to measure the effectiveness of the broker program as a workforce multiplier.
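A stripped-down version of that output calculation, with an invented project count and an invented loaded hourly cost (GSA's actual salary inputs are not given in this report), looks like this:

```python
# Hypothetical reconstruction of the hours-saved output measure.
projects_per_year = 100                 # invented volume of broker-assisted projects
hours_saved_contracting_officer = 175   # per project, per GSA's model
hours_saved_project_manager = 125       # per project, per GSA's model
hourly_cost = 70                        # invented loaded hourly cost of GSA leasing staff

annual_personnel_savings = (
    projects_per_year
    * (hours_saved_contracting_officer + hours_saved_project_manager)
    * hourly_cost
)
print(f"Estimated personnel savings: ${annual_personnel_savings:,.0f} per year")

# Note: this is an output measure (staff hours converted to dollars). It says
# nothing about outcomes, e.g., whether brokers complete additional leases or
# help replace expiring leases on time.
```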
An output measure tracks the direct product or activity delivered by a program, while an outcome measure tracks the progress the program is making toward achieving its goal. Tracking the number of hours a broker saves for GSA officials provides limited information to help GSA understand the overall benefits of the broker program as a workforce multiplier. For example, this measure does not demonstrate whether brokers are more productive than in-house staff or whether they are completing leases more efficiently, such as by completing an additional number of leases on an annual basis. According to GSA officials, the principal way they measure broker program outcomes is through its Lease Cost Avoidance Plan, which, as we previously discussed, aggregates cost savings from a number of GSA leasing efforts, including the broker program. The plan identifies realized cost avoidance through various metrics such as leases negotiated below market costs and reductions in rental square footage and vacant space. However, aside from the negotiated rental rates, GSA does not currently have specific metrics that allow it to distinguish the particular role brokers play in achieving those results. For example, GSA officials said that the more leases that can be replaced by using brokers, the more GSA can tackle its expiring lease inventory and right-size leases with rental square foot reductions. Specifically, GSA officials said that brokers contributed to a 2.5 percent square footage reduction in fiscal years 2018 and 2019. However, since this metric applies to the leasing program in general and is not specific to the brokers, GSA is unable to demonstrate the extent to which such reduction is attributable to the use of brokers. GSA officials also told us that using brokers allows GSA to replace more leases on time and thus avoid extending leases, which is more costly and can lead to agencies renting space under less favorable terms. GSA measures this through its lease replacement rate, which tracks the percentage of expiring leases that are replaced in a timely manner. For example, GSA reported that in fiscal year 2019, it replaced 61 percent of its lease inventory, which represented $481 million of its $791 million lease inventory. However, while GSA tracks the number of lease extensions brokers have worked on, GSA is unable to demonstrate the extent to which the use of brokers helps GSA avoid lease extensions and holdovers. Furthermore, similar to the Lease Cost Avoidance Plan, this metric applies to the leasing program in general and is not specific to the brokers. As a result, GSA has limited information on the extent to which brokers contributed to leasing program outcomes. GPRA, as amended, creates a framework for articulating unified goals and outcome measures that can provide federal agencies with a clear direction for successful implementation of program activities and improve the efficiency and accountability of agencies' efforts. We have previously reported that the GPRA framework can serve as a leading practice at other organizational levels, such as component agencies, offices, programs, and projects. GPRA calls for outcome-based metrics that are linked to goals, which allow a program to track the progress an organization is making toward achieving its intended outcome.
Because GSA lacks outcome-based metrics that demonstrate the brokers' role in achieving the program's goal of being a workforce multiplier, GSA is hindered in its ability to distinguish the role brokers played in its reported program results. Furthermore, having such metrics could help GSA make better decisions about the balance of brokers versus in-house leasing staff, since GSA received $34 million for fiscal year 2020 to hire an additional 34 GSA lease-contracting officers and specialists. GSA officials said they plan to complete this hiring in 2020. Conclusions GSA has developed a program that allows the agency to utilize expertise and personnel from leading commercial real-estate brokers to help it complete thousands of federal leases. GSA has stated cost savings and workforce goals for the broker program but lacks the information necessary to assess whether the program is achieving its intended results. If GSA envisions that the use of brokers is to save money, then having quality, reliable data and information is critical to demonstrating this result. If using brokers to augment GSA's workforce is also a goal, then having outcome-based metrics would allow GSA to show whether it is achieving that goal. This information is especially critical as the program has changed over time and could provide GSA insight on what has been successful in the past. This information would also inform GSA's decision-making as it launches another version of its broker program and uses millions of dollars in appropriated funds to increase the agency's leasing personnel. Recommendations We are making the following two recommendations to the Administrator of GSA: GSA should assess and address the reliability of the information used to calculate reported cost savings for the broker program. (Recommendation 1) GSA should develop outcome-based metrics to evaluate the effectiveness of using brokers to supplement the GSA's leasing workforce. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this product to GSA for review and comment. GSA provided written comments, which are summarized below and reproduced in appendix I. GSA said it did not agree with our main conclusions and findings because it believed our report did not acknowledge brokers' demonstrated benefits. We noted throughout the report that brokers play an important role in helping GSA achieve various leasing-related goals. Our position is that the lack of quality data and outcome-based metrics inhibits GSA's ability to demonstrate the brokers' specific effect in achieving GSA's goals as compared to other factors. With regard to the first recommendation about data used to calculate reported cost savings from the broker program, GSA said it concurred with the recommendation and is making changes to its data systems that it believes will improve its data on brokers. GSA said it did not agree with the second recommendation as it was originally worded about having outcome-based measures to evaluate the effectiveness of using brokers. In providing technical comments on our report, GSA officials raised concerns that this recommendation gave the impression that GSA had no metrics to assess the brokers. The agency said that it has several outcome-based metrics in place that it believes can be correlated with the value of the brokers, including achieving cost savings, replacing leases on time, and reducing the need to hire more GSA staff.
In GSA's letter, it referenced these statistics, several of which we had included in our report as well. For example, our report discusses GSA's Lease Cost Relative to Market measure, which is a comparison of the negotiated rental rate to a target market rate. We also noted, however, that this metric is calculated primarily using data that GSA staff and other stakeholders we interviewed expressed concerns about as unreliable. These concerns resulted in our first recommendation. Further, other metrics, such as reducing square footage and replacing leases, that GSA pointed to relate to GSA's leasing efforts in general and are not designed in a way to distinguish the brokers' contributions specifically. Specifically, GSA officials said that brokers contributed to a 2.5 percent square footage reduction in fiscal years 2018 and 2019. This metric, however, applies to the overall leasing program, and GSA is unable to demonstrate the extent to which such reduction is attributable to the use of brokers. In addition, GSA does not have a means to measure the effectiveness of the broker program in supplementing its workforce to achieve these goals, a result that GSA staff in headquarters and regional offices consistently told us was the primary reason GSA uses brokers. Tracking the number of hours a broker saves for GSA officials provides limited information to help GSA understand the overall benefits of the broker program. Such information does not demonstrate whether brokers are more productive or efficient than in-house staff, such as whether brokers are completing an additional number of leases on an annual basis. Additional metrics focused on evaluating the outcomes of GSA's use of brokers would benefit the agency because it has lost over 50 percent of its leasing personnel since the 1990s. Furthermore, GSA received $34 million to hire additional agency lease-contracting officers and specialists in 2020. Consequently, it is imperative that GSA have information and data that could inform the right mix of brokers and GSA leasing personnel as the agency moves forward with its leasing work. In response to GSA's concerns and to make our recommendation more specific, we clarified the recommendation. Specifically, we focused it more narrowly on the need to evaluate the effectiveness of using brokers to supplement the GSA leasing workforce. We also made some additional changes to the draft to include more information about the metrics GSA uses and that it believes can be correlated to the use of brokers. We are sending copies of this report to the appropriate congressional committees, the Administrator of the General Services Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or RectanusL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the U.S. General Services Administration Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Lori Rectanus, (202) 512-2834 or RectanusL@gao.gov.
Staff Acknowledgments In addition to the individual named above, other key contributors to this report were: Andrew Huddleston (Assistant Director), Nelsie Alcoser (Analyst-in-Charge), Caitlin Cusati, Josh Ormond, Colleen Taylor, Jack Wang, Michelle Weathers, Crystal Wesco, and William Woods.
Why GAO Did This Study As the leasing agent for the federal government, GSA acquires space for federal agencies and currently manages over 8,000 leases. To help negotiate leases, GSA contracts with commercial real-estate brokerage firms. In previous reviews, GAO reported that GSA was unable to demonstrate cost savings and results from its use of brokers, and GAO made related recommendations. A statute included a provision for GAO to review GSA's broker program. This report examines: (1) how GSA's broker program has changed over time and (2) GSA's goals for the broker program and how GSA measures the program's results. GAO reviewed documentation from GSA's broker program and GSA's available data on leases assigned to brokers from October 2005 to July 2019. GAO interviewed officials from GSA headquarters and selected GSA regional offices that work with brokers, as well as other stakeholders, including representatives from the six brokers currently participating in the program. What GAO Found The General Services Administration (GSA) contracts with commercial real estate brokers to perform a variety of services needed to acquire and complete leases. GSA uses brokers to negotiate leases meeting certain thresholds in urban areas (see figure). GSA has made several changes to its broker program since 2015, including: changing how brokers can be assigned to leases, i.e., using brokers for specific geographical zones rather than on a nationwide basis; allowing greater flexibility in when and how brokers can be used during the leasing process; and changing the name from the National Broker Contract program to the GSA Leasing Support Services program. [Figure: Statistics for General Services Administration's (GSA) Leases That Involve Brokers Compared] For the broker program, GSA's goals include saving money and supplementing its leasing workforce; however, potentially inaccurate data and limited outcome-based metrics could affect GSA's ability to assess whether it is meeting these goals. According to GSA, in the last 3 years, brokers have negotiated 303 leases, 60 percent of which were below the market rate (17.8 percent below the market rate, on average), an outcome that, GSA says, helped it avoid $676 million in costs. However, selected GSA regional officials and brokers expressed concerns about the accuracy of the market reports used to calculate these cost savings. Additionally, while GSA has identified various outcome-based metrics related to its leasing program, these metrics do not indicate whether using brokers to supplement its leasing workforce has enabled GSA to complete leasing work it would have otherwise been unable to complete. For example, GSA sets targets for and tracks the number of leases assigned to brokers each year, but this measure is not an indicator of the effectiveness of using brokers. Quality information, along with additional reliable outcome-based measures, is important for GSA to define success for its 2020 broker program, which creates new contracts and expands services performed by brokers. What GAO Recommends GAO is making two recommendations that GSA should: (1) assess and address the reliability of the information used to calculate reported cost savings and (2) develop outcome-based metrics to evaluate the effectiveness of using brokers to supplement the GSA leasing workforce. GSA concurred with the first recommendation but did not concur with the second. GAO continues to believe that GSA needs metrics to assess the brokers' role as a workforce supplement.
Background In April 2016, IRS released its most recent tax gap estimate, stating that taxpayers should have paid an average of about $2.5 trillion per year in federal taxes for tax years 2008 to 2010. Of this amount, IRS estimated that taxpayers voluntarily and timely paid about 81.7 percent, or $2.04 trillion, leaving $458 billion in unpaid taxes per year, as shown in figure 1. The tax gap estimate is an aggregate estimate of the five types of taxes that IRS administers—individual income, corporation income, employment, estate, and excise taxes. For each tax type, IRS attempts to estimate the tax gap based on three types of noncompliance: (1) underreporting of tax liabilities on timely-filed tax returns; (2) underpayment of taxes due from timely-filed returns; and (3) nonfiling, when a taxpayer fails to file a required tax return altogether or on time. Underreporting of tax liabilities accounted for most of the tax gap estimate for tax years 2008 to 2010, making up 84 percent of the entire estimated gross tax gap, as shown in figure 2. Individual income taxes made up the largest portion ($264 billion) of underreporting. Underreporting of business income accounted for nearly half ($125 billion) of that amount, as shown in table 1. Business income underreporting includes income from sole proprietors, which accounted for the largest share ($78 billion) of individual income tax underreporting. IRS uses various approaches to estimate the different components of the tax gap. A primary source of information IRS uses is its National Research Program (NRP) study of individual tax returns. Through NRP, IRS examines a stratified, random sample of tax returns, and uses statistical modeling to produce estimates of noncompliance for the population of individual income tax return filers. Other areas of the tax gap are estimated using payment data or other statistical models. In 2016, IRS completed examinations for an NRP study on employment tax returns filed from tax years 2008 to 2010. IRS employees reported that they plan to start analyzing the results by June 2019. However, IRS has not provided plans for how it will use the results to update the current state of the employment tax gap estimate, as we previously recommended. The tax gap includes unintentional errors as well as intentional evasion, such as intentionally underreporting income, intentionally overreporting expenses, and engaging in abusive tax shelters or frivolous tax schemes. As we have previously reported, completely closing the tax gap is not feasible, as it would entail more intrusive enforcement and more burdensome recordkeeping or reporting than the public is willing to accept, and more resources than IRS is able to commit. However, even modest reductions would yield significant financial benefits and help improve the government's fiscal position. Tax noncompliance, even when unintentional, can discourage compliant taxpayers and undermine the integrity of the tax system and the public's confidence in it. For example, consider two groups of taxpayers with similar tax situations—those who pay the full amount of tax due and those who do not. Those who do not pay taxes are not meeting their obligation to fund government services, which, in effect, shifts the fiscal burden to those who do pay. Further, IRS devotes resources to attempt to collect taxes due from noncompliant taxpayers—resources that could be used for other purposes.
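The headline figures above follow a simple identity: the gross tax gap equals the estimated true liability times one minus the voluntary compliance rate. A quick arithmetic check of the reported numbers (the calculation below uses only figures cited in this statement):

```python
# Consistency check of IRS's headline tax gap figures (tax years 2008-2010).
true_liability = 2.50e12            # about $2.5 trillion owed per year, on average
voluntary_compliance_rate = 0.817   # share paid voluntarily and on time

paid_voluntarily = true_liability * voluntary_compliance_rate   # ~$2.04 trillion
gross_tax_gap = true_liability - paid_voluntarily               # ~$458 billion

print(f"Paid voluntarily and on time: ${paid_voluntarily / 1e12:.2f} trillion")
print(f"Gross tax gap: ${gross_tax_gap / 1e9:.0f} billion")
```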
In addition, noncompliance can create an unfair competitive advantage among businesses because those that do not pay tax debts are avoiding costs that tax-compliant businesses are incurring. For example, our past investigations identified instances in which federal contractors with tax debts won awards based on price differentials over tax-compliant contractors. Key Factors Contributing to the Tax Gap Include Limited Third-Party Information Reporting, Resource Trade-offs, and Complexities in the Tax Code Limited Third-Party Information Reporting Our past work has found that three important factors contributing to the tax gap are the extent to which income is reported to IRS by third parties, IRS's resource trade-offs, and tax code complexity. As we have previously reported, the extent to which individual taxpayers accurately report their income is closely aligned with the amount of income that third parties report to them and to IRS. For example, according to 2008–2010 IRS data, taxpayers misreported more than half of the types of income for which there is little or no third-party information reporting, such as business income (see figure 3). In contrast, when employers both withheld taxes from, and reported information on, wages and salaries to employees and IRS (through Form W-2, Wage and Tax Statement), taxpayers misreported only 1 percent of such income. Similarly, taxpayers misreported less than 10 percent of investment income that banks and other financial institutions reported to account holders and IRS (through Forms 1099). For items subject to substantial third-party information reporting, IRS is able to use automated processes to address noncompliance. The automated underreporter program, through which IRS matches amounts reported on tax returns with amounts reported on information returns submitted by third parties, is one such process. This computer matching program allows IRS to identify discrepancies between tax returns and information returns and propose automatic changes to taxpayers (a simplified sketch of this matching idea appears below). For items with little to no third-party information reporting, IRS must rely on more resource-intensive methods, such as correspondence or face-to-face examinations, to address noncompliance. While these examinations may be started by reviewing specific tax return line items, they may also be expanded to cover other areas of the tax returns if there are indications of misreporting in areas of the return not previously identified. However, it is harder for IRS to detect noncompliance in areas with little third-party information reporting. IRS Resource Trade-offs IRS's budget declined by about $2.6 billion (18.8 percent) from fiscal years 2011 through 2019, and IRS's budget for fiscal year 2019 is less than its fiscal year 2000 budget, after adjusting for inflation (see figure 4). Since fiscal year 2011, IRS staffing has fallen from 95,544 full-time equivalent employees to an estimated 75,676 in fiscal year 2019, a 20.8 percent reduction. At the same time, IRS faces increasing responsibilities, such as implementing relevant aspects of Public Law 115-97, which included significant changes to corporate and individual tax law. IRS also faces ever-evolving and significant challenges protecting taxpayer information, preventing identity theft and fraud, and modernizing an aging technology infrastructure.
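To return to the automated underreporter idea described above: at its core it is a record-matching computation. The sketch below is a toy version under assumed inputs; the taxpayer records, the tolerance threshold, and the field names are all hypothetical, and this is not IRS's actual system logic.

```python
# Toy sketch of document matching: compare income a taxpayer reported on a
# return with income third parties reported (W-2s, 1099s). All data invented.

third_party_totals = {     # taxpayer ID -> income reported by third parties
    "TP-001": 85_000,      # e.g., W-2 wages plus Form 1099 interest
    "TP-002": 52_300,
}
return_income = {          # taxpayer ID -> income reported on the tax return
    "TP-001": 85_000,
    "TP-002": 47_000,
}

TOLERANCE = 100            # hypothetical threshold for flagging a discrepancy

for tp_id, reported in return_income.items():
    gap = third_party_totals.get(tp_id, 0) - reported
    if gap > TOLERANCE:
        print(f"{tp_id}: return appears to underreport third-party income by ${gap:,}")
```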
We previously reported that, according to IRS officials, available staff has been a key factor in IRS decisions to scale back a number of program activities, such as examining tax returns. Our analysis of IRS data shows the rate of individual returns audited declined between fiscal years 2011 and 2018 (see figure 5). Reducing examinations can reduce revenues collected through such enforcement action and may indirectly reduce voluntary compliance. Tax Code Complexity The federal tax system contains complex rules that may be necessary to appropriately target tax policy goals, such as providing benefits to specific groups of taxpayers. However, this complexity imposes a wide range of recordkeeping, planning, computing, and filing requirements upon taxpayers. For example, taxpayers who receive income from rents, self-employment, and other sources may be required to make complicated calculations and keep detailed records. This complexity can lead to errors and underpaid or overpaid taxes. Complexity, and the lack of transparency that it can create, can also exacerbate doubts about the tax system's integrity. Tax expenditures—tax credits, deductions, exclusions, exemptions, deferrals, and preferential tax rates estimated by the Department of the Treasury to reduce tax revenue by about $1.38 trillion in fiscal year 2018—can add to tax code complexity. In part, this is because taxpayers must learn about, determine their eligibility for, and choose between tax expenditures that may have similar purposes. For example, as we reported in 2012, about 14 percent of filers in 2009 (1.5 million of almost 11 million eligible returns) did not claim an education credit or deduction for which they appeared eligible. The complexity involved with tax expenditures may be acceptable if they achieve their intended purposes. However, in many cases, their effectiveness is questionable or unknown. With some exceptions, tax expenditures generally are not subject to reauthorization and the annual congressional budget processes. We have recommended greater scrutiny of tax expenditures since 1994, as periodic reviews could help determine how well specific tax expenditures achieve their goals, and how their benefits and costs (including complexity) compare to those of other programs with similar goals. Such actions would help facilitate oversight and accountability of tax expenditures more in line with the performance management and reporting requirements of other federal programs. Paid tax return preparers and tax software developers help taxpayers navigate the complexities of the tax code. However, some paid preparers may introduce their own mistakes. For example, in a limited study in 2014, we found that seven of 19 preparers who completed returns for our undercover investigators made errors with substantial tax consequences, while only two preparers calculated the correct refund amount. Likewise, using NRP data, which are statistically representative, we estimated that 60 percent of returns prepared by preparers contained errors. Multiple Strategies Are Needed to Reduce the Tax Gap IRS's overall approach to reducing the tax gap consists of improving services to taxpayers and enhancing enforcement of the tax laws. In spite of these efforts, the percentage of taxes that taxpayers pay voluntarily and on time has remained relatively constant over the past three decades.
Our past work has demonstrated that no single approach will fully and cost-effectively address noncompliance since the problem has multiple causes and spans different types of taxes and taxpayers. In light of these challenges, we have made numerous recommendations to IRS that have not yet been implemented, as well as matters for congressional consideration. For example, in our most recent high-risk update, we highlighted various actions IRS should take to improve enforcement of tax laws and reduce the tax gap. Strategy for using compliance data. Developing and documenting a strategy that outlines how IRS will use data to update compliance strategies could help address the tax gap. For example, a strategy that outlines how IRS plans to use NRP data to update compliance programs and approaches would help IRS determine resource trade-offs and more fully leverage the investment it makes in compliance research, while providing Congress with a better understanding of the merits of the research it is being asked to fund. Voluntary compliance goal. A long-term, quantitative goal for improving voluntary compliance may provide IRS with a concrete target the agency can use in fulfilling its mission. Without a quantitative goal, it will be more difficult for IRS to determine the success of its strategies, adjust its approach when necessary, and remain focused on results, especially since factors that affect compliance change over time. Analyzing employment tax NRP study results. Developing and documenting plans to assess its NRP employment tax study results would help IRS (1) identify areas of noncompliance, (2) devise actions to address such noncompliance, and (3) update its employment tax gap estimate. Without completed analysis of the NRP employment tax study results, IRS risks using outdated data to make decisions about compliance and areas of the tax gap to pursue. Leveraging the Return Review Program. IRS's Return Review Program (RRP) is a tool to detect and select potentially fraudulent returns to prevent the issuance of invalid refunds. Evaluating the costs and benefits of expanding RRP to analyze individual returns not claiming refunds could support other enforcement activities by streamlining the detection and treatment of other types of noncompliance and fraud. Given that the tax gap has been a persistent issue, reducing it will also require targeted legislative actions, such as those we highlighted in our 2019 high-risk update. Additional third-party information reporting. Expanding third-party information reporting to IRS could increase voluntary tax compliance. For example, reporting could be required for certain payments that rental real estate owners make to service providers, such as contractors who perform repairs on their rental properties, and for payments that businesses make to corporations for services. Enhanced electronic filing. Requiring additional taxpayers to electronically file tax and information returns could help IRS improve compliance in a resource-efficient way. For example, expanding the mandate for corporations to electronically file their tax returns could help IRS reduce return processing costs, select the most productive tax returns to examine, and examine fewer compliant taxpayers. Math error authority.
Providing IRS with authority—with appropriate safeguards—to correct math errors and to correct errors in cases where information provided by a taxpayer does not match information in government databases, among other things, could help IRS resolve straightforward errors and avoid burdensome audits and taxpayer penalties. Paid preparer regulation. Providing IRS with the authority to regulate paid tax return preparers could improve the accuracy of the tax returns they prepare. Chairman Neal, Ranking Member Brady, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgements If you or your staff have any questions about this testimony, please contact James R. McTigue, Jr. at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Jeff Arkin, Assistant Director; Robyn Trotter, Analyst-in-Charge; A.J. Stephens; and Alicia White. Other staff who made key contributions to the reports cited in the testimony are identified in the source products. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The tax gap—the difference between tax amounts that taxpayers should have paid and what they actually paid voluntarily and on time—has been a persistent problem for decades. The tax gap estimate is an aggregate estimate of the five types of taxes that IRS administers—individual income, corporation income, employment, estate, and excise taxes. For each tax type, IRS attempts to estimate the tax gap based on three types of noncompliance: (1) underreporting of tax liabilities on timely filed tax returns; (2) underpayment of taxes due from timely filed returns; and (3) nonfiling, when a taxpayer fails to file a required tax return on time or altogether. This testimony discusses factors contributing to the tax gap and strategies to reduce it. This testimony is based on prior GAO reports on the tax gap and enforcement of tax laws, including those with open recommendations or matters for congressional consideration that could help reduce the tax gap. Enforcement of tax laws has been on GAO's High Risk List since its inception in 1990, and GAO has made various recommendations to IRS and suggestions to Congress to reduce the tax gap that have resulted in improvements. For example, GAO recommended that IRS consider comparing individuals' tax returns with the information educational institutions report to verify taxpayers' education tax benefits claims and suggested that Congress require brokers to report to both taxpayers and IRS the adjusted cost of the securities sold by taxpayers. These actions resulted in billions of dollars in additional revenue. What GAO Found The Internal Revenue Service's (IRS) latest tax gap estimate (2016) found that taxpayers voluntarily and timely paid about 81.7 percent of owed taxes for tax years 2008-2010, leaving an annual gross tax gap of $458 billion. IRS estimated a net tax gap—after late payments and enforcement actions—of $406 billion. GAO's work has found that three important factors contribute to the tax gap. Limited third-party information reporting. The extent to which individual taxpayers accurately report their income is closely aligned with whether third parties (e.g., employers) report income (e.g., wages) to them and to IRS. IRS resource trade-offs. IRS's budget and staffing levels have fallen over the past decade, and IRS faces increasing responsibilities, such as implementing Public Law 115-97—commonly known as the Tax Cuts and Jobs Act—which involved significant changes to tax law. Tax code complexity. The federal tax system contains complex rules that may be necessary to appropriately target tax policy goals; however, this can engender errors and lead to underpaid taxes. GAO's work has demonstrated that no single approach will fully and cost-effectively address noncompliance since the problem has multiple causes and spans different types of taxes and taxpayers. In light of these challenges, GAO has made numerous recommendations to IRS—some of which have not yet been implemented—such as developing and documenting a strategy that outlines how IRS will use data to update compliance approaches to help address the tax gap. Reducing the tax gap will also require targeted legislative actions. For example, expanding third-party information reporting could increase voluntary compliance, and providing IRS with the authority to regulate paid tax return preparers could improve the accuracy of the tax returns they prepare.
Background Generally, agencies dispose of their excess property through GSA's government-wide property disposal process. See figure 1. Disposal is facilitated by GSA's disposal system, known as GSAXcess. Once an agency has determined that it no longer has an internal agency need for its property during agency internal screening, it generally declares and reports the property as excess. Subsequently, the agency places information on the property in GSAXcess, and then other federal agencies can screen, request, and, if approved by GSA, obtain the excess property for their own use or can then provide it to an authorized non-federal recipient free of charge, minus transportation costs. If no federal agency (for its own use or use by its eligible non-federal recipient) requests the excess property from GSAXcess, it then becomes surplus to the federal government, and a State Agency for Surplus Property can request it and provide it to eligible non-federal entities in their state, such as local governments and non-profits. Property not claimed by a State Agency for Surplus Property can then be sold to the general public, typically through a GSA auction or an approved sales center. Finally, unsold property may be abandoned and destroyed by the reporting agency. (A simplified sketch of this disposal sequence appears below.) Agencies can provide property to non-federal recipients in various ways. Some agencies, such as USDA and DOE, have been granted their own independent authorities that allow them to provide their unneeded or excess property to eligible non-federal recipients, such as public entities and colleges or universities. Eligible recipients are determined by program requirements. Other agencies, such as DOL, predominantly provide excess property to non-federal recipients through a grant, contract, or cooperative agreement. For our three selected agencies, we focused on how USDA and DOE provided property to non-federal recipients using their independent authorities and how DOL provided property to non-federal recipients through contracts. Table 1 describes these agencies' programs that provide property to eligible non-federal recipients. More detail on the independent authorities used by USDA and DOE can be found in appendix II. Additional information about excess property previously provided by DOL through cooperative agreements to apprenticeship programs can be found in appendix III. Agencies with independent authorities and programs differ in when they are able to provide property to non-federal recipients. Such agencies can allow eligible non-federal recipients, as determined by their agency's independent authority, to screen for and request unneeded property during agency internal screening; in other words, before the property is declared excess and available to other agencies and entities. For example, USDA is authorized to provide certain equipment to its contractors or recipients when doing so would further agricultural research or teaching objectives. Currently, USDA allows eligible non-federal recipients to screen for all USDA unneeded property through a USDA internal module in GSAXcess at the same time as its sub-agencies, and before the property is made available to federal agencies in GSAXcess. If there is no demand for the property by an eligible non-federal recipient during internal screening, it is then made available in GSAXcess, where other agencies screen for and request property for their own use or for use by associated non-federal recipients.
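The sequence just described can be summarized as a simple pipeline: property that finds no taker at one stage falls through to the next. The sketch below restates that flow in code; the stage names paraphrase the process and are not GSAXcess terminology or system logic.

```python
from typing import Optional

# Simplified model of the government-wide disposal sequence described above.
DISPOSAL_STAGES = [
    "agency internal screening",     # agency checks for its own continued need
    "excess (federal screening)",    # other federal agencies screen in GSAXcess
    "surplus (state screening)",     # State Agencies for Surplus Property may claim
    "public sale",                   # GSA auction or approved sales center
    "abandonment or destruction",    # unsold property may be abandoned/destroyed
]

def next_stage(current: str) -> Optional[str]:
    """Stage the property falls through to if unclaimed at the current stage."""
    i = DISPOSAL_STAGES.index(current)
    return DISPOSAL_STAGES[i + 1] if i + 1 < len(DISPOSAL_STAGES) else None

# Walk an unclaimed item through the whole pipeline.
stage: Optional[str] = DISPOSAL_STAGES[0]
while stage is not None:
    print(stage)
    stage = next_stage(stage)
```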
Regardless of how agencies provide property to non-federal recipients, they are required to annually report to GSA on property they provided. GSA provides agencies with guidance to assist with their reporting responsibilities, including: GSA Bulletin Federal Management Regulation B-27: defines terms and provides agencies with guidance on using GSA's Personal Property reporting tool. GSA Personal Property Reporting Tool (reporting tool): a template used by federal agencies to report excess property provided to non-federal recipients. The reporting tool has pre-determined drop-down menu items for agencies to select from when reporting property provided to non-federal recipients, such as the authority used. Technical Assistance and Guidance: GSA officials told us that they provide training, technical assistance, and guidance through webinars, email, and phone when agencies seek additional information on reporting requirements. GSA publishes the information reported by agencies in its annual Non-Federal Recipient Report, which includes information such as the agency, non-federal recipient, authority used, and the original acquisition cost of the property. Selected Agencies Established Processes for Providing Property to Eligible Non-Federal Recipients but Lacked Insight into Property Use Each Agency Established Regulations or Guidance to Govern the Process of Providing Property to Eligible Non-Federal Recipients USDA, DOE, and DOL established agency regulations or guidance for managing the disposition of property during the internal-screening process and once it has been declared excess, including providing property to non-federal recipients, as described below. USDA has three separate Federal Excess Personal Property Program handbooks, specific to the sub-agencies within USDA that manage property provided to non-federal recipients under the Federal Agriculture Improvement and Reform (FAIR) Act, the Forest Service, and the National Institute of Food and Agriculture's (NIFA) Federal Excess Personal Property Programs. These handbooks describe the process through which eligible non-federal recipients can screen for (i.e., search for and select) unneeded and excess property. Specifically, USDA makes property available to non-federal recipients for each of these programs during internal screening, at the same time that other USDA sub-agencies can screen the property. DOE officials and guidance explained how DOE offices should dispose of federal excess personal property, including when eligible non-federal recipients can screen for unneeded and excess property. DOE makes property available to non-federal recipients after internal agency screening, once it is determined the property is not needed within DOE. For DOE's Economic Development Property Program, DOE makes property available to the eligible Community Reuse Organization by word of mouth or through a DOE excess email listing. For the Math and Science Equipment Gift Program, the recipient is made aware of property by word of mouth or as a result of a subcontract that has ended with a university. For the Laboratory Equipment Donation Program, DOE extracts energy-related property from within the Energy Asset Disposal System and allows eligible non-federal recipients to screen for that property on an external website. During this screening period by non-federal recipients, if property is requested and the request is approved by DOE, DOE then transfers the property directly to the non-federal recipient.
DOL policy explains how Job Corps contractors may directly access GSAXcess to obtain excess property. Specifically, it explains how contractors can screen and obtain excess property when it is made available to all federal agencies and other eligible non-federal recipients, generally on a first-come, first-served basis.

Additionally, for USDA and DOE, if there is no demand for unneeded property among eligible non-federal recipients, the property is then declared excess, reported to GSA, and made available in GSAXcess to all other federal agencies and eligible non-federal entities.

Officials reported to us that, through the various agency-specific programs at our selected agencies, they provided property with an original acquisition value of between $0.4 million and $33 million to non-federal recipients in fiscal year 2017, most of it through the Forest Service's Federal Excess Personal Property Program (see table 2).

Each Agency Generally Did Not Know How Recipients Were Using Property

The three agencies we reviewed assigned various offices the responsibility for monitoring property provided to non-federal recipients. The program officials in charge of monitoring are to ensure, among other things, that non-federal recipients use the property within a reasonable period of time and for its intended purpose, according to agency regulations and program requirements. See figure 2. Once property is provided to a non-federal recipient, some agencies retain title, or ownership, of the property, while others pass ownership to the recipient. For agencies disposing of property using the GSA-regulated disposal process, GSA regulations require agencies to, among other things: (1) ensure the use of excess personal property acquired for use by the non-federal recipient is authorized and complies with applicable federal regulations and agency guidelines, (2) review and approve transfer documents once property is requested by the non-federal recipient, and (3) ensure the non-federal recipient does not place the property into storage (i.e., stockpile it) and uses the property within a reasonable time frame. Requirements in the authorizing legislation govern USDA and DOE disposal when these agencies use their independent authorities.

While monitoring responsibilities were assigned, these selected agencies reported, and we found, that property provided to non-federal recipients was sometimes disposed of prematurely, not used at all, or not used within the required time frames. For example:

Office of Property and Fleet Management officials responsible for property provided under the FAIR Act said they conducted an unscheduled property compliance check at a non-federal recipient location that revealed that the recipient (i.e., a school) improperly sold property before USDA's 1-year requirement to use the property was met. As a result, the school was put on probation and was required to send inventory reports to USDA on a regular basis.

An official from a state forestry department we spoke to reported having obtained a large vehicle that was not used. Furthermore, this official told us that, due to a lack of indoor storage space, the vehicle was stored outside, exposed to the elements, and its condition deteriorated over time.
Agricultural Research Service officials responsible for property provided under the NIFA Federal Excess Personal Property Program said they revoked the participation of a non-federal recipient (i.e., a college) that was unable to provide information on how or whether property was being used.

Several Laboratory Equipment Donation Program recipients reported instances where they did not report required information at the end of the first year of use, as program requirements direct. One recipient told us that it never used several pieces of equipment it received because they were in poor condition, and it put them in storage rather than disposing of the property.

Whether these instances are widespread or uncommon is unknown, due to a lack of consistent monitoring at USDA, DOE, and DOL to determine how and whether the property provided to non-federal recipients was used.

USDA's guidance from the Federal Excess Personal Property Program handbook for the FAIR Act specifies that regular audits and reviews of participating institutions are required to ensure property is being used in support of research, educational, technical, and scientific activities for related programs. Specifically, USDA requires property obtained by an institution to be placed into use for the purpose for which it was acquired within 1 year of receipt and to be used for 1 year thereafter. However, USDA Office of Property and Fleet Management officials told us that, due to a limited travel budget and limited staff to conduct monitoring, they relied on informal "spot checks" to monitor property provided to non-federal recipients under the FAIR Act.

DOE's Office of Asset Management said it had discontinued monitoring any excess property provided by the Economic Development Property program to non-federal recipients. According to DOE Office of Asset Management officials, they mistakenly believed the Economic Development Property authority had expired and thus that they were relieved of their monitoring responsibilities for the property provided to non-federal recipients. Officials determined during the course of our review that the authority had not expired, but they stated that DOE regulations currently do not reference Economic Development Property. Officials stated that they did not know when they had last monitored the program and were not informed of its activities, even though between fiscal years 2013 and 2017, DOE reported to GSA's Non-Federal Recipient Report that the program provided over $154 million in property to non-federal recipients. According to DOE Office of Asset Management officials, they were unaware of the DOE sites that reported these data to GSA. In addition, DOE has previously acknowledged monitoring concerns with the program. Office of Asset Management officials told us they are determining how use of this authority will continue in the future. As of December 2019, DOE's Office of Asset Management had not issued any new guidance or clarifications on the program, or a time frame for when such guidance or clarification might be issued.

DOE's Office of Science told us it had not consistently monitored property provided to Laboratory Equipment Donation Program recipients to ensure that required information was reported at the end of the first year, which is a requirement of the program. According to three Laboratory Equipment Donation Program recipients we spoke with, they had never provided information to DOE, and DOE had not requested information on property they received.
According to Office of Science officials, they had not regularly contacted Laboratory Equipment Donation Program recipients because the process for doing so had been manual, and therefore was unsustainable and led to poor record-keeping. In March 2019, Office of Science officials established a new platform that will generate automatic email notifications to non-federal recipients of Laboratory Equipment Donation Program property within 11 months of receipt. DOE officials told us that the new system started receiving applications in June 2019, and thus DOE will begin the automated notifications no later than May 2020.

Within DOL, the National Property Officer for DOL's Job Corps Program retired in December 2018, and the position has not been officially filled. In September 2019, DOL officials told us that the National Property Officer's responsibilities—which include periodically reviewing policies, procedures, and excess property provided to Job Corps centers—are temporarily being handled by another employee, in addition to that employee's other responsibilities. They do not expect to hire a full-time National Property Officer before the end of calendar year 2019. It is unclear to what extent monitoring activities have been conducted within the National Office in the absence of a full-time National Property Officer. We identified discrepancies between the data provided to us by Job Corps Program officials on the excess property provided to Job Corps centers and the data maintained by the Job Corps centers we visited. For example, we identified items that had been provided to Job Corps centers that were not tracked in DOL's internal property-management system. According to DOL officials, property under a certain dollar threshold is not tracked internally, a practice that might account for the discrepancies. However, we identified several items obtained by Job Corps centers that were over the dollar threshold set by DOL. For example, one Job Corps center we visited obtained two walk-through metal detectors that exceeded the dollar threshold but are missing from DOL's Job Corps Program data.

Offices within our selected agencies also did not fully carry out their oversight responsibilities. According to federal standards for internal control, management should evaluate performance and hold individuals accountable for their internal control responsibilities, as well as internally communicate the necessary quality information to achieve the entity's objectives. Specifically, effective oversight and communication with key stakeholders are essential to ensuring that management is held accountable for carrying out its internal control responsibilities and meeting agency objectives. However, we found that the selected agencies did not take steps, such as communicating information, to ensure that the non-federal recipient programs were carried out in accordance with their property management regulations or program requirements, for various reasons:

At USDA, Office of Property and Fleet Management officials acknowledged they have not consistently provided oversight of personal property across USDA because it was not considered a priority within the agency to do so. For example, until USDA established an inventory-compliance metric, sub-agencies did not regularly conduct required property inventories, and Office of Property and Fleet Management officials lacked the ability to require them to do so.
As another example, officials said they requested that an office within USDA reconcile its non-federal recipient reporting data and make changes to the report to be provided to GSA. However, the office did not respond to their request, and the Office of Property and Fleet Management did not have the ability to enforce corrective actions. These experiences signaled to the Office of Property and Fleet Management that this area was not an agency priority and limited its ability to conduct oversight. However, USDA Office of Property and Fleet Management officials conceded that more consistent and robust agency-wide oversight of property provided to non-federal recipients would give them a better understanding of the effectiveness of their property-management controls.

At DOE, communication problems have interfered with oversight. The Office of Asset Management is responsible for communicating information and providing guidance on the agency's property management regulations to ensure that program offices are carrying out their property programs in accordance with those regulations. However, according to Office of Science officials, they were unaware that the Laboratory Equipment Donation Program was included in DOE's property management regulations, though they had seen manuals about the program referenced in other DOE guidance. In addition, according to Office of Science officials, in the absence of information about the Economic Development Program in DOE regulations, they were using DOE guidance that reflected DOE policy to provide property to non-federal recipients. However, the guidance used by the Office of Science was discontinued in 2011 and, as mentioned above, is currently under review, according to Office of Asset Management officials. Office of Asset Management officials stated that not having official guidance that can be communicated to the sites about the use of this program is problematic, and they said they recognized the need for improved guidance and communication between the offices going forward.

In addition to these issues, we have reported in the past that managing property in general has been a low priority for federal agencies. Consistent with that report, officials from our three selected agencies stated that it was not always cost-effective to prioritize the monitoring and oversight of property programs, for various reasons. Some also reported that, given limited resources, they prioritized high-risk or high-dollar-value property still in the federal government's possession over low-risk or low-dollar-value property within or divested from federal agency possession. We recognize that higher-value property still being used may require more robust monitoring. However, as described above, there are good reasons to pay attention to whether the property provided to non-federal recipients, such as schools and state foresters, is being used according to regulations and guidance—not least because it collectively represents millions of dollars in federal resources. As we described above, our three selected agencies alone provided about $76 million in property to non-federal recipients in fiscal year 2017. Furthermore, agencies may consider property low value because they are no longer using it, but if that property, for example an old fire truck, keeps a federal or non-federal entity from purchasing expensive new parts, then it is not clear that the value of the property is actually low.
Finally, no matter the value of the property, agencies without effective oversight of the authorities and programs they are responsible for cannot be assured that they are adhering to federal regulations and meeting program requirements, including whether property is being used as intended or to its fullest extent.

Benefits of Property Were Reported by Agencies and Non-Federal Recipients but Effect on Government Is Unclear due to Lack of Reliable Data

Programs Reportedly Benefit Selected Agencies and Non-Federal Recipients but May Reduce Other Agencies' Access to Property

Officials at the three agencies we reviewed told us that providing unneeded or excess property to non-federal recipients was cost-effective for them or the federal government. For example, DOE officials reported that being able to dispose of property during internal screening helped them dispose of property more quickly than they would be able to through GSAXcess and also reduced warehousing costs. USDA officials told us that being able to provide property to non-federal recipients potentially saves USDA warehouse costs, and there are also likely additional savings since many of their non-federal recipients also obtain excess property from other federal agencies. DOL officials told us they save on contracting costs, as the Job Corps centers are able to obtain federal property for free rather than having to purchase similar property, the costs of which could be built into contracts with federal agencies and paid for with federal funds.

Officials at our selected agencies told us that distributing unneeded and excess property to non-federal recipients also enhances their mission. For example, a USDA official told us a goal of the Federal Excess Property Program under NIFA—as managed by the Agricultural Research Service—was to provide property to non-federal recipients to establish relationships between USDA and state agricultural schools and programs. The official told us there is also increased value to USDA from the partnerships in the program, including an increase in agricultural experimental work and cooperative educational programs that assist USDA. DOE Office of Science officials told us that providing scientific equipment through the Laboratory Equipment Donation Program encourages colleges and universities to develop energy-related programs. In addition, officials told us the program encourages future scientists to consider working for DOE. DOL officials told us that providing property to Job Corps center contractors helps DOL provide job training for at-risk youth.

All 17 non-federal recipients we spoke with told us that federal property received from the selected agencies was beneficial for their program or department as well. For example, one DOE Laboratory Equipment Donation Program recipient told us that the equipment received was used to furnish a teaching laboratory, which the recipient would not otherwise have been able to purchase due to a limited budget. A state forester told us that property received from the Forest Service's Federal Excess Property Program (such as fire trucks, gloves, and electronics) has had a real positive effect on rural fire departments because they would otherwise have been unable to purchase these items due to limited budgets. Officials from a DOL Job Corps center told us that the property they obtained as excess from GSAXcess is a lifeline for their operations, as they were able to obtain a large amount of dorm and kitchen equipment.
See figure 3 below for examples of equipment obtained by non-federal recipients.

While the selected agencies and non-federal recipients report benefits, the agency-specific disposal programs and agreements used at our selected agencies and other agencies may not benefit all federal agencies or even all non-federal recipients. As we describe in more detail later in this report, GSA does not have reliable data on the scope of property provided to non-federal recipients across the federal government. However, based on our discussions with GSA officials and other stakeholders, as well as our review of a 2003 property utilization and donation study, when agencies use their independent authority, in some instances other stakeholders may not be eligible to acquire the property.

First, non-federal recipients can obtain property at multiple points in the disposal process, a factor that could mean potential recipients get several chances to obtain property. For example, when agencies, such as USDA and DOE, provide unneeded property to non-federal recipients, the property does not enter GSAXcess. Additionally, other federal agencies and State Agencies for Surplus Property may not be eligible to obtain unneeded property. According to the GSA property utilization and donation study, the increase in laws providing agencies with independent authority to give property to non-federal recipients has reduced the remaining pool of assets that would have otherwise entered the government-wide property disposal cycle. Additionally, when property does enter GSAXcess, an agency may obtain the property and provide it to a non-federal recipient. While GSA officials said they prioritize giving the property to a federal agency that plans to use it for its own needs over a federal agency that plans to provide it to a non-federal recipient, GSA officials said they are not always aware of how federal agencies plan to use the property. In this respect, a federal agency may acquire excess property for use by a non-federal recipient instead of another federal agency acquiring the property for its own use. GSA officials also told us that they did not have data on the amount of property that is provided to non-federal recipients at the various points of the disposal process. Thus, it is unknown how often non-federal recipients obtain excess property from a federal agency, and whether or how often other recipients that may want excess and surplus property are missing out. Figure 4 illustrates the reduction in property that can occur when non-federal recipients obtain property at various points in the disposal cycle.

Second, because of the decentralized nature of disposal, some non-federal recipients could benefit more than others. For example, a rural fire department eligible to receive property under the USDA Forest Service's Federal Excess Property Program could potentially obtain property (1) during USDA internal screening, (2) from USDA as excess, or (3) through its State Agency for Surplus Property once the property is deemed surplus to the federal government. Officials from four of the five State Agencies for Surplus Property told us that they have some recipients that are eligible to receive property through multiple points in the disposal process. In contrast, other non-federal entities, such as non-profit groups, may only be able to obtain property through their State Agency for Surplus Property because they are not eligible to receive property under a federal agency-specific program.
As a result, these non-federal entities may have less property available to them and would have to pay a fee to the State Agency for Surplus Property to obtain the property. In addition, DOE and USDA officials said they do not advertise their agency-specific property programs, so a smaller pool of eligible recipients may be competing for and benefiting from the property, ahead of those that are unaware of those programs. For example, one Laboratory Equipment Donation Program recipient told us he became aware of the program through a previous mentor and would not otherwise have known about the program because it is not advertised.

Government-Wide Data on How Federal Agencies Provided Property to Non-Federal Recipients Were Unreliable for Reporting Purposes

GSA's reporting tool and accompanying bulletin are unclear, which has resulted in inconsistent data on the number of non-federal recipients obtaining property. As the reporting tool and bulletin serve as the primary means of ensuring consistent information is collected on non-federal recipients that are provided property, it is important that they accurately convey the information agencies should report. However, we found that the following three issues made the data unreliable for reporting the amount of property provided to non-federal recipients through authorities and agency-specific programs.

Wrong Disposal Authority and Program Reported

We found that agencies incorrectly reported the authorities and programs used to provide excess property to non-federal recipients, making it difficult to understand how many agencies are providing property to non-federal recipients or what authority they are using to do so. Our analysis of the non-federal recipient reports found that during fiscal years 2013 to 2017, 16 agencies reported providing property to a non-federal recipient through various types of authorities, including agency-specific authorities. However, one of our selected agencies reported using another agency's independent authority or program to provide excess property to a non-federal recipient. Specifically, in fiscal year 2016, we found five instances where DOL reported using a DOD independent authority. DOL and GSA officials told us these instances were likely the result of data entry errors.

We also found that agencies reported information incorrectly under their own programs. The full extent of such errors is unclear due to the inconsistency and incompleteness of the data; however, we found clear examples of reporting errors that agency officials confirmed. As previously discussed, DOE reported providing $154 million in unneeded property to non-federal recipients through the Economic Development Property program, but DOE officials stated that they do not know if the data were accurate or complete, in part because the officials were not aware the Economic Development Property program was still in effect and thus were not conducting any oversight at the time. DOE officials told us that they are taking steps to clarify when the Economic Development Property program should be used in reporting and anticipate that correct reporting will take place in fiscal year 2020 once clarification is complete.

These errors occur because GSA's reporting tool is limited. Specifically, the tool allows those who are inputting the information to select authorities and programs that are not specific to their agencies, rather than limiting the drop-down menu to the selections that are actually appropriate.
GSA Office of Government-wide Policy officials told us that they provide a definition sheet and offer training to each agency on how to enter data, but they are not sure if agencies are using their guidance. Even if those inputting data did refer to the sheet, GSA officials told us that since these are agency-specific programs, they are not aware of all the ways in which agencies are able to provide property to non-federal recipients, and the reporting tool may not reflect all the current authorities and programs used. A DOE official told us that the categories are not mutually exclusive, a situation that is confusing and can lead to inconsistent reporting even among offices within DOE. This data input issue makes it difficult to understand how many agencies are providing property to non-federal recipients under these independent authorities.

Lack of Clarity on Whether to Report Loaned Property

We found that agencies inconsistently reported loaned property provided to non-federal recipients, resulting in inaccurate government-wide data on the amount of loaned property. Our analysis of the data found that, among all reporting agencies, only DOE reported loaned property ($104 million) provided to non-federal recipients between fiscal years 2013 and 2017 in GSA's Non-Federal Recipient Report. According to DOE's property guidance, all excess property, including loaned property, furnished to non-federal recipients should be reported. Conversely, USDA and DOL officials told us that they did not believe loaned property had to be reported, because title, or government ownership, of that property remained with the federal government.

Based on GSA's guidance and our interviews with GSA officials, it is unclear whether loaned property should be reported by agencies. GSA's guidance states that excess property furnished in any manner whatsoever, including loaned property, should be reported. The reporting tool seems to support the guidance, as it included loaned property in the drop-down menu from which agencies could select the mechanism used to provide property. However, GSA's guidance does not specify the circumstances in which loaned property should be reported and how it may differ from property loaned under an agency-specific program. For example, we found that USDA reported providing property to non-federal recipients under its agency-specific Federal Excess Property Programs when title, or ownership, remained with the federal government, but did not report providing any loaned property outside of its agency-specific programs. GSA officials stated that there might be confusion among some agencies about whether excess property loaned to non-federal recipients needs to be reported when ownership remains with the federal government. Given that only one agency reported loaned property outside of agency-specific programs, GSA guidance may not clearly specify whether and how loaned property should be reported.

Property Provided to Non-Federal Recipients Was Underreported

We found inconsistencies in how property obtained by agencies in GSAXcess on behalf of non-federal recipients was reported, leading to underreporting of property provided to non-federal recipients. For example, we found that DOL was not reporting property obtained in GSAXcess for its Job Corps centers because it believed that, since this property was obtained from GSAXcess, GSA should be reporting these transactions. GSA officials told us DOL is responsible for reporting this information.
According to GSA's bulletin, agencies are required to report all of their transactions involving excess property provided to non-federal recipients but do not need to report items sold, transferred, or donated by GSA on their behalf as part of the disposal process. Thus, there may be confusion among agencies about whether property obtained in GSAXcess should be reported by the agency or by GSA. As a result, there could be undercounting of property provided to non-federal recipients, as neither DOL nor GSA is reporting the property.

GSA Office of Government-wide Policy officials told us that they realize data reporting can be improved but do not have concrete plans in place to do so. For example, GSA officials told us they have identified changes to the reporting tool to make it more user-friendly and to address some of the features that lead to reporting errors. GSA officials provided us with documentation listing some changes they would like to make to the reporting tool, including incorporating a range of data checks that would trigger caution or error messages for inappropriate data entries and generating agency system reminders to ensure data are turned in by each agency. GSA officials told us they made some of these changes to the fiscal year 2019 reporting tool. These changes represent a potential step in the right direction. However, GSA has not established a plan with time frames to implement further changes. Moreover, based on the documentation provided to us, it is unclear whether the proposed changes will address some of the limitations we identified, including (1) agencies' reporting of property under another agency's program in the reporting tool, (2) whether loaned property should be reported by agencies, and (3) what property GSA is reporting on behalf of agencies.

According to federal standards for internal control, it is important for management to periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity's objectives or addressing related risks. As we have shown, each of these limitations obscures data that would be helpful in understanding whether and to what extent property is provided to non-federal recipients at a cost to the federal government. Until GSA addresses the limitations of the reporting tool and bulletin, it is not clear that the Non-Federal Recipient Report data will be consistent moving forward. Moreover, due to limited data, the implications of providing property to non-federal recipients ahead of other recipients, such as federal agencies and State Agencies for Surplus Property, are unknown. Without taking action to update the reporting tool and bulletin to address the issues we found, it is unclear to what extent GSA will be able to improve the data collected in the Non-Federal Recipient Report.

Conclusions

By using GSA's government-wide disposal process as well as independent agency authorities, agencies have an opportunity to be good stewards of government property by allowing others to reuse federal property in lieu of purchasing new property. While there are benefits to allowing agencies to provide property to non-federal recipients before others receive it, there are also potential implications. In the past, we have observed a government-wide lack of attention to the management of property other than real property, and we continued to find such a lack of attention in this review.
A full assessment of whether these efforts are achieving the intended effects is impeded by a lack of oversight, monitoring, and accurate data about what types and amounts of property are provided to non-federal recipients. Until USDA, DOE, and DOL direct their offices to fulfill their oversight responsibilities, there may be an ongoing lack of accountability for managing such programs. Furthermore, a lack of effective monitoring will continue to undermine any assurances to agencies and Congress that this property is being used in a timely manner, as intended, or to its fullest extent. Finally, given the large amount of property managed and disposed of by the federal government each year, the lack of reliable data makes it difficult to understand the overall scope of property provided to non-federal recipients and the implications for the government-wide disposal process.

Recommendations for Executive Action

We are making seven recommendations: two to USDA, two to DOE, one to DOL, and two to GSA.

The Secretary of Agriculture should direct the Office of Property and Fleet Management to consistently monitor property provided to non-federal recipients within 1 year of receipt, and to ensure property is being used for its intended purpose 1 year after initial monitoring. (Recommendation 1)

The Secretary of Energy should direct the Office of Asset Management to resume monitoring the Economic Development Property program, including property provided to non-federal recipients. (Recommendation 2)

The Secretary of Labor should direct the Employment and Training Administration to take steps, such as reconciling data between Job Corps centers and the Job Corps National Office, to ensure that the entities responsible for overseeing and monitoring the Job Corps Program have accurate data on the excess property provided to non-federal recipients. (Recommendation 3)

The Secretary of Agriculture should direct the Office of Property and Fleet Management to establish clear processes to oversee property programs, including excess property provided to non-federal recipients, across the agency. (Recommendation 4)

The Secretary of Energy should direct the Office of Asset Management to update its regulations and guidance on programs that provide property to non-federal recipients to ensure regulations are current, and to establish a process to regularly communicate information about non-federal recipient programs to DOE program offices. (Recommendation 5)

The GSA Administrator should direct the Office of Government-wide Policy to revise the Personal Property Reporting Tool by updating the authorities agencies can select. (Recommendation 6)

The GSA Administrator should direct the Office of Government-wide Policy to document the circumstances in which excess property loaned to non-federal recipients should be reported and what property GSA is reporting on behalf of agencies, for example, by updating GSA guidance. (Recommendation 7)

Agency Comments

We provided a draft of this report to USDA, DOE, DOL, and GSA for comment. Three agencies provided comments, which are reprinted in appendixes IV through VI and summarized below. USDA informed us by email that it had no comments and concurred with the recommendations. DOE also provided technical comments, which we incorporated as appropriate.
In its written comments, DOE agreed with our recommendations and stated that the Office of Asset Management will update the annual property reporting requirements for Economic Development Property and will also update DOE's internal policies and provide property information on DOE's internal informational website.

In its written comments, DOL's Employment and Training Administration agreed with our recommendation and stated that it will take steps to improve the accuracy of data on excess property provided to Job Corps contractors and has recently taken actions to improve the monitoring and oversight of Job Corps property. For example, the Employment and Training Administration stated it is working closely with DOL's Office of the Assistant Secretary for Administration and Management to develop a new process for GSAXcess review and will formalize property reporting requirements, processes, and roles and responsibilities in the next update to its property management guidance.

In its written comments, GSA agreed with our recommendations and stated that it already added relevant authorities to the Personal Property Reporting Tool in July 2019. In addition, GSA stated it will continue to contact agencies to ensure that all relevant authorities are included in the reporting tool and will evaluate technical updates to the reporting tool to ensure that agencies select an appropriate authority when reporting. Also, GSA stated it will communicate with agencies to clarify any confusion regarding reporting requirements for loaned property and is committed to reviewing and updating relevant regulations and guidance, particularly in terms of reporting property that agencies obtain via GSAXcess.

We are sending copies of this report to the appropriate congressional committees, the GSA Administrator, Secretary of Agriculture, Secretary of Energy, Secretary of Labor, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact Lori Rectanus at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

Our review focused on how federal agencies provide, manage, and report on property provided to non-federal recipients. Our objectives were to examine (1) how selected agencies manage unneeded and excess property provided to non-federal recipients, and (2) what is known about the benefits, effects, and reported data of providing property to non-federal recipients. To address both objectives, we reviewed applicable federal statutes and regulations pertaining to property disposal, including General Services Administration (GSA) property management regulations, and agencies' independent authorities for providing property to non-federal recipients. We also reviewed GSA bulletins, briefings, and a 2003 GSA property utilization and donation study to understand the effects and requirements for providing and reporting property to non-federal recipients.
To assess how selected agencies manage unneeded and excess property provided to non-federal recipients, we selected three agencies—the United States Department of Agriculture (USDA), the Department of Energy (DOE), and the Department of Labor (DOL)—and reviewed documentation from and interviewed officials at each. We selected these agencies using information from GSA's government-wide Non-Federal Recipient Report, which provides data on excess property provided to non-federal recipients by agency, and reports from GSA's centralized property database (GSAXcess) on overall property disposed of and obtained by federal agencies from fiscal year 2013 to 2017. After reviewing those reports, we selected agencies based on (1) the amount of property provided to non-federal recipients in terms of original acquisition cost, (2) the amount of property obtained through GSAXcess in terms of original acquisition cost, (3) the number of independent authorities the agency reported using to provide property to a non-federal recipient, and (4) the amount of property provided to non-federal recipients through a grant, contract, or cooperative agreement. We selected these agencies based on these factors because we were looking for agencies that provided a large amount of property to non-federal recipients through their independent authorities and programs, as well as an agency that provided less property through independent authorities and programs and more through grants, contracts, or cooperative agreements.

We reviewed each selected agency's policies and program guidance describing disposal processes, including processes for providing unneeded and excess property to non-federal recipients, and compared the processes to relevant federal internal control standards on oversight and monitoring. We interviewed agency property management officials as well as agency program officials responsible for managing property provided to non-federal recipients through agency programs, including DOE's Laboratory Equipment Donation Program, Economic Development Property program, and Math and Science Equipment Gift Program, and three USDA Federal Excess Personal Property programs (the Federal Agriculture Improvement and Reform (FAIR) Act program, the Forest Service Federal Excess Property Program, and the National Institute of Food and Agriculture Federal Excess Property Program), to gain a high-level understanding of the impetus for the agency-specific disposal programs and how those programs were managed. For DOL, officials told us that they currently provide property to non-federal recipients through contracts with DOL Job Corps centers and had previously provided property through cooperative agreements and memorandums of understanding with apprenticeship programs, but these agreements were canceled in 2016. Thus, we interviewed agency officials knowledgeable about excess property obtained through GSAXcess and provided through contracts to Job Corps centers to understand how DOL provided property to non-federal recipients. More detail on the independent authorities used by agencies can be found in appendix II, and additional information about excess property DOL previously provided to apprenticeship programs can be found in appendix III.
To assess what is known about the benefits, effects, and reported data on providing property to non-federal recipients, we interviewed officials from State Agencies for Surplus Property in Arizona, California, Georgia, Illinois, and Texas to obtain their views on the GSA property disposal process. We selected these states because their State Agency for Surplus Property was a top 20 recipient of surplus property in terms of original acquisition value during a given year from fiscal year 2014 to fiscal year 2017, according to data provided by GSA on surplus property donation. We also interviewed and obtained documentation from 17 non-federal recipients in those five states to understand how they used unneeded and excess property provided by USDA's Forest Service Federal Excess Property Program, DOE's Laboratory Equipment Donation Program, and DOL's Job Corps Program and how monitoring of federal property occurred. We selected these non-federal recipients because they obtained property from these three agencies through their independent authorities or agency programs. Information we obtained from these non-federal recipients is not generalizable to all non-federal recipients of excess property. In addition, we interviewed knowledgeable officials from GSA's Office of Government-Wide Policy and Office of Personal Property Management. See table 1 for a list of federal agencies, non-federal recipients, and other stakeholders interviewed.

We also analyzed and summarized Non-Federal Recipient Report data from fiscal year 2013 to 2017 to understand the scope of excess property that agencies provided to non-federal recipients through various programs and agreements. We used these years because these were the most current data available to us at the time we started our review. To assess the reliability of the Non-Federal Recipient Report data, we (1) performed electronic testing for obvious errors in accuracy and completeness; (2) reviewed GSA's agency guidance on reporting requirements; and (3) interviewed officials at our selected agencies to discuss identified data errors. We found that information in the database was not sufficiently reliable for reporting the amount of property provided to non-federal recipients through independent authorities and programs. As discussed in the report, we used some of the data to provide illustrative examples of reporting errors and to develop recommendations for improving or establishing management controls to help ensure data quality.

We conducted this performance audit from June 2018 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Selected Agencies' Independent Authorities

Appendix III: Provision of Excess Property through Department of Labor's Employment and Training Administration

Background on Excess Property Provided to Apprenticeship Programs

For several decades, the U.S. Department of Labor's (DOL) Employment and Training Administration (ETA) has provided excess property to support apprenticeship training programs, according to DOL officials.
For about 15 years, the Office of Apprenticeship within ETA had agreements with two apprenticeship programs—the International Union of Operating Engineers (IUOE) and the International Training Institute for the Sheet Metal Workers and Air Conditioning Industry (ITI)—to support the training of apprentices in the fields of heavy equipment operation and maintenance and sheet metal fabrication and installation, respectively. According to the most recent agreements, DOL's objective was to increase the number of women and minorities in apprenticeships. According to IUOE staff, these agreements supported equipment needs and hands-on training hours at 63 of 64 apprenticeship programs that provide training for construction-industry jobs, and according to ITI staff, property was obtained by its 150 training centers.

How Property Has Been Provided for Apprenticeship Program Use

Under GSA regulations, federal agencies, including DOL, can provide excess property to their grantees, contractors, and cooperatives. DOL executed cooperative agreements and memorandums of understanding with IUOE and ITI to provide excess property to support their apprenticeship training programs. According to DOL officials, the cooperative agreements and memorandums of understanding served as the legal instruments that laid out the relationship between ETA and the apprenticeship programs and the terms and conditions for obtaining excess property. The most recent memorandums of understanding between ETA and the apprenticeship programs were signed in August 2015 and were set to expire on December 31, 2020.

IUOE and ITI representatives were provided access to view and request federal excess property in GSAXcess, the General Services Administration's (GSA) government-wide, web-based system for facilitating the disposal of excess property. As authorized by DOL, IUOE and ITI representatives could screen property at the same time as other federal agencies. Once property was requested, the request would be reviewed and approved by DOL officials, certifying that the property fulfilled a mission need for the particular site requesting it. If GSA allocated the property to DOL, the federal agency disposing of the property would transfer it directly to the training program or school that requested it; the particular training program or school was required to pay any associated transportation costs. Once the property was transferred, the training program or school was responsible for maintaining the property, which remained under the ownership of DOL, and IUOE and ITI were responsible for annually inventorying and certifying the property in their possession. When the property was no longer needed, it could be transferred to another site that needed the equipment or disposed of by DOL by listing the property in GSAXcess.

Benefits and Challenges for Provision of Property to Apprenticeship Programs

There are no available data on the types or amount of property that has historically been provided for apprenticeship program use. There are, however, data on what property is currently held by IUOE and ITI. According to DOL, as of September 2019, IUOE had over 2,500 pieces of construction equipment and vehicles that it obtained from GSAXcess between 1979 and 2017, while ITI had over 2,000 pieces of property acquired from GSAXcess between 1999 and 2013.
According to IUOE and ITI staff, this property was useful to the sites that received it because it provided training hours to apprentices and led to cost savings, but staff cited challenges in disposing of property when it was no longer needed.

Training hours: According to IUOE staff, the equipment they obtained, while often dated, provided invaluable opportunities for apprentices to receive training hours on equipment they might not otherwise obtain. For example, according to IUOE staff, a training center in Michigan obtained a used crane that could cost $1 million to purchase new, and uses it at a dedicated area onsite to support various types of disaster response training activities. According to ITI staff, the property obtained by their schools included hand saws, drills, computers, and furniture.

Cost savings: DOL officials and apprenticeship program staff said that the ability to obtain equipment in this fashion led to cost savings. For example, according to IUOE staff, the property obtained through GSAXcess was a key element in fulfilling equipment needs for their programs, particularly for smaller programs that did not have as many resources. However, these sites have other options to obtain equipment, such as from the original equipment manufacturer or on the market. In addition, according to IUOE staff at the Casa Grande Training Center in Arizona, equipment obtained by the site was primarily heavy equipment and rolling stock used to train apprentices, and it saved the center money because the center did not have to purchase new equipment. See figure 5 for an example of excess equipment obtained. ITI staff stated that the property they obtained to support the training of apprentices in their schools allowed the schools to spend funds on other program areas, rather than equipment.

Out-of-date equipment: Many IUOE sites continue to use the equipment they obtained, but it is not all in working condition. For example, Casa Grande has some equipment that is no longer in working order, and the site does not want to invest money in repairing equipment it can no longer use. According to ITI staff, they have not obtained excess property from GSAXcess since 2013 and have been unable to dispose of property received under prior agreements with DOL that is no longer needed. For example, staff estimated that about 90 percent of the equipment they obtained is now obsolete (over 2,000 items), and they would like to dispose of it. At a school in Miami, Florida, ITI had to purchase additional storage to store obsolete property, and classrooms were filled with obsolete computers. ITI schools currently fulfill their equipment needs through loans from ITI headquarters or through purchasing their own equipment.

Recent Changes to the Apprenticeship Program and Potential Effects

In 2016, DOL determined that it would no longer provide equipment to apprenticeship programs due to legal and policy concerns and, according to DOL officials, dissolved the agreements with IUOE and ITI in October 2016. In August 2017, DOL sent letters to IUOE and ITI stating that DOL would no longer continue to furnish excess property to non-federal entities. In canceling these agreements, the department said it no longer wanted to retain ownership of the equipment, nor did it have a mechanism to allow IUOE and ITI to retain the property. However, DOL recently received independent authority to provide property to the apprenticeship programs.
Specifically, in its fiscal year 2018 appropriations, DOL received independent statutory authority to provide up to $2 million in excess property to apprenticeship programs, for purposes of training apprentices in those programs, through grants, cooperative agreements, contracts, or other arrangements. DOL did not provide excess property to these programs during fiscal year 2018. In its fiscal year 2019 appropriations, DOL was again authorized to provide up to $2 million in excess property. According to DOL officials, they planned to use the authority to transfer ownership of property already in IUOE's and ITI's possession that the programs would like to keep in support of their apprenticeship training programs. In April and May 2019, DOL officials sent letters to IUOE and ITI requesting that the apprenticeship programs take steps to verify the property currently in their possession. In addition, IUOE and ITI were required to identify property for which they would like to obtain ownership from DOL and were provided instructions for applying fair market value to this property. In September 2019, DOL approved the transfer of ownership of 96 items that IUOE wished to retain, with a fair market value of about $1.7 million, and 75 items that ITI wished to retain, with a fair market value of about $216,000, for a total of about $1.9 million in the aggregate. For property that IUOE and ITI did not want to keep, including the obsolete items discussed above, DOL is in the process of disposing of it using GSAXcess, according to DOL officials. DOL officials told us that DOL does not plan to transfer any additional property to apprenticeship training programs in the future because the authority provided in the fiscal year 2019 appropriations expired at the end of the fiscal year.

Appendix IV: Comments from the Department of Energy

Appendix V: Comments from the Department of Labor

Appendix VI: Comments from the General Services Administration

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, the following staff made key contributions to this report: Aisha Cabrer; Lacey Coppage; Nancy Lueke (Assistant Director); Joshua Ormond; Nitin Rao (Analyst-in-Charge); Amy Rosewarne; Kelly Rubin; Atiya Siddiqi; and Crystal Wesco.
Why GAO Did This Study

The federal government owns and manages over a trillion dollars' worth of property that is not real property, such as vehicles, computers, and office furniture. Federal agencies generally get rid of excess property through GSA's disposal process, which allows entities, such as other federal agencies, to obtain that property if they want it. Some agencies have independent authorities that allow them to provide property to non-federal recipients, such as universities, before or during the GSA disposal process. GAO was asked to review how federal agencies provide property to non-federal recipients. This report examines (1) how selected agencies manage unneeded and excess property provided to non-federal recipients and (2) what is known about benefits, effects, and data on property provided to these recipients. GAO analyzed GSA non-federal recipient reports from fiscal years 2013 to 2017, the most current available at the start of the review, and selected three agencies—USDA, DOE, and DOL—to obtain variety in the methods used to provide property to non-federal recipients. GAO reviewed relevant processes and interviewed officials from GSA, selected agencies, and non-federal recipients.

What GAO Found

GAO found that the U.S. Department of Agriculture (USDA), Department of Energy (DOE), and Department of Labor (DOL) established processes for providing property to non-federal recipients but had limited insight into how these recipients used this property. Officials told GAO that some of the property was disposed of prematurely or not used at all. Such outcomes are inconsistent with agency policy. Whether these instances are widespread or uncommon is unknown due to a lack of consistent monitoring and oversight. For example, DOE officials said they were not monitoring property provided by one of their programs because they thought the authorization had expired. Without consistent monitoring or oversight, agencies cannot be assured that property is being used as required or achieving intended objectives.

Selected agencies identified benefits of providing unneeded and excess property to non-federal recipients, but the larger effect of these efforts is unclear due to a lack of reliable reported data. Agency officials said providing property to these recipients saves costs and enhances their missions. However, other sources, including a General Services Administration (GSA) study, reported that using these authorities has reduced the amount of property that would otherwise be available to federal agencies or other recipients. While data on property provided to non-federal recipients are key to understanding the effects of the program, GAO found the government-wide data on property provided to non-federal recipients were unreliable. For example, GAO found that agencies reported incorrect authorities for transactions and underreported excess property provided to such recipients. GSA's current reporting tool and guidance are unclear on how agencies should report these items, and GSA does not have definite plans for what changes it will make to address these government-wide data issues. Until these changes are made, it will be difficult to understand the scope of property provided to non-federal recipients and to assess the effects on the federal government's disposal process, such as whether federal agencies and other recipients may be missing opportunities to obtain property.
What GAO Recommends
GAO is making seven recommendations, including one to DOL and two apiece to USDA, DOE, and GSA, to improve oversight, monitoring, and data quality for property provided to non-federal recipients. All four agencies agreed with the recommendations.
Background
U.S. Assistance to Central America Has Supported Three Objectives—Prosperity, Governance, and Security
The United States historically has maintained close ties to Central America and played a role in the region's political and economic development because of geographic proximity and common interests. The United States has provided assistance to the governments of Central America, including those of the Northern Triangle, under multiple initiatives over many years. In 2008, the United States began a multiyear assistance package to Central America under the Mérida Initiative to help address violence and criminal activity, especially from drug trafficking and other criminal organizations. In 2010, U.S. assistance continued under the Central America Regional Security Initiative (CARSI). CARSI was a collaborative partnership between the United States and Central American partner countries, including El Salvador, Guatemala, and Honduras, designed to improve citizen security within the region by taking a broad approach to security beyond traditional counternarcotics activities. Multiple U.S. agencies implemented projects in Central America, particularly in the Northern Triangle, to support and complement these initiatives. These projects focused on, among other things, improving law enforcement and criminal justice, promoting the rule of law and human rights, preventing youth violence in violence-prone areas, enhancing customs and border control, and encouraging economic and social development. Introduced in 2014 and updated in 2017, the Strategy is the latest U.S. government initiative in the region. The Strategy notes that prior U.S. assistance did not yield sustained, broad improvements in social or economic conditions; the Strategy thus intends to take a comprehensive, integrated, whole-of-government approach that aligns the activities and resources required to achieve systemic and lasting improvements. Under this approach, the Strategy promotes three mutually reinforcing objectives—prosperity, governance, and security. These three objectives seek to address challenges facing Central American countries, including the three Northern Triangle countries. For example:
Prosperity Challenges: Northern Triangle countries have had high rates of poverty, low per capita income, and a lack of employment opportunities. The World Bank reported that, in 2014, over half of the population of Guatemala lived below the poverty line and, in 2017, almost one-third of the population of El Salvador and more than half of the population of Honduras lived below the poverty line. The World Bank also reported that El Salvador, Guatemala, and Honduras had among the lowest per capita incomes in Latin America in 2017. In addition, more than 27 percent of the population aged 15 to 24 in each of the Northern Triangle countries was neither employed nor in education or training in 2016, according to the World Bank.
Governance Challenges: Northern Triangle countries have experienced widespread corruption, weak government institutions, and poor adherence to the rule of law. According to the 2018 Transparency International Corruption Perception Index, which ranks 180 countries by their perceived levels of public sector corruption, the Northern Triangle countries ranked in the bottom half. In addition, in 2018, Guatemala and Honduras ranked in the lowest 15 percent of countries in the World Justice Project's Rule of Law Index, which measures countries' adherence to the rule of law.
Security Challenges: Northern Triangle countries have had weak security structures, high rates of crime and gang activity, and a lack of legitimate employment opportunities for youth susceptible to being drawn into criminal activity. While Northern Triangle countries experienced a decline in homicide rates from 2014 to 2017, the average homicide rate for El Salvador, Guatemala, and Honduras remains much higher than the averages for Latin America and the Caribbean in recent years and five to 12 times higher than the 10-year average for the United States. In addition, the percentage of people in the Northern Triangle who reported feeling safe walking in their neighborhoods at night was about 50 percent in 2017.
Agencies reported implementing various assistance projects in the Northern Triangle to support the prosperity, governance, and security objectives from fiscal year 2013 through fiscal year 2018. We found that these projects generally correspond to 18 sectors that align with the three objectives of the current Strategy. Figure 1 shows the alignment of the 18 sectors with the objectives of the Strategy, including the six sectors we selected for an in-depth review. Table 1 shows the definitions for each of the 18 sectors we identified and the three objectives of the Strategy.
Multiple Agencies Provide Assistance to Central America
Multiple agencies implemented assistance projects in the Northern Triangle to support the prosperity, governance, and security objectives from fiscal years 2013 through 2018. State, USAID, DOD, and USDA were the primary agencies that implemented such projects in the Northern Triangle during this period. In particular, State and USAID manage foreign assistance to support the Strategy's objectives and play key roles in monitoring and evaluating this assistance. According to agency officials, State's Bureau of Western Hemisphere Affairs (WHA) is responsible for managing the implementation of the Strategy's objectives among agencies. For example, WHA manages regular coordination meetings with USAID and State's Bureau of International Narcotics and Law Enforcement Affairs (INL) as well as larger coordination meetings with other relevant agencies, including DOD and USDA, according to officials. In addition, WHA gathers information across agencies on a quarterly basis to produce and disseminate cables that discuss progress and challenges related to the Strategy's objectives. WHA also collaborated with USAID to develop a plan to monitor and evaluate U.S. assistance and report results.
Agencies Allocated about $2.4 Billion from Fiscal Years 2013 through 2018 to Implement Hundreds of Projects to Support Prosperity, Governance, and Security in the Northern Triangle
State, USAID, DOD, and USDA Allocated about $2.4 Billion to Fund Projects to Support Objectives in the Northern Triangle from Fiscal Years 2013 through 2018
Based on our review of agency funding data, we found that State, USAID, DOD, and USDA allocated about $2.4 billion in assistance to the Northern Triangle to support projects related to prosperity, governance, and security objectives from fiscal years 2013 through 2018. USAID reported the largest amount of allocations with approximately $1.44 billion, while State reported $464 million, and USDA and DOD each reported less than $235 million. For fiscal years 2013 through 2018, the four agencies reported allocating the largest amount of funding for projects in Guatemala, followed by Honduras and El Salvador.
Specifically, the agencies reported allocating approximately $1.07 billion, or 45 percent of total allocations, to fund projects in Guatemala; approximately $749 million, or 32 percent, to fund projects in Honduras; and approximately $496 million, or about 21 percent, to fund projects in El Salvador. Some agencies also reported allocations for multi-country projects implemented in two or more countries, including at least one Northern Triangle country. For example, USAID funded a regional initiative to improve clean energy investment and reduce overall energy consumption throughout many Central American countries. The agencies reported allocating approximately $53 million, or about 2 percent of the total, for multi-country assistance projects implemented exclusively in two or three Northern Triangle countries. See table 2 for reported amounts of allocated funding by country and agency from fiscal years 2013 through 2018.
State, USAID, DOD, and USDA Implemented At Least 370 Projects to Support Objectives in the Northern Triangle from Fiscal Years 2013 through 2018
State, USAID, DOD, and USDA implemented at least 370 technical assistance projects in the Northern Triangle to support prosperity, governance, and security objectives from fiscal years 2013 through 2018. The total number of projects that we report is lower than the actual number of projects implemented because some agencies and bureaus could not report data at the project level. Specifically, DOD and INL reported some broader assistance data that encompassed two or more projects, and officials told us they were unable to disaggregate these data at the project level. Among the four agencies, USAID implemented the largest number of projects in the Northern Triangle during our time frame. Specifically, USAID reported that it implemented 218 projects, or 59 percent of the projects reported across the four agencies. State reported that it implemented 124 projects, or about one-third of the projects. DOD and USDA each reported 14 projects to support prosperity, governance, and security, or about 4 percent each of the total projects. Collectively, the agencies reported they implemented the largest number of projects in Guatemala (126), followed by Honduras (106) and El Salvador (86). Agencies reported they implemented 52 multi-country projects that included at least one Northern Triangle country. See table 3 for the number of projects reported by country and agency.
Agency officials typically reported implementing similar types of projects in each of the Northern Triangle countries, although there were some differences in the number of projects implemented for each objective and sector based on each country's needs (see fig. 2). For example, officials told us that agencies implemented fewer agricultural development projects in El Salvador because its agriculture industry is small relative to Guatemala's and Honduras's and the majority of its population lives in urban rather than rural, agricultural areas. Instead, agency officials in El Salvador said agencies focused their prosperity assistance on projects in the economic growth sector that targeted more prominent business areas such as technology or manufacturing. For example, USAID supported a youth training center in El Salvador where students develop computer skills to work in information technology fields (see fig. 3).
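The country shares reported above follow directly from the reported totals. A minimal sketch of the arithmetic, assuming the approximate allocation and project figures cited in this section (the data structures are illustrative stand-ins, not the agencies' actual records):

```python
# Illustrative check of the allocation and project shares cited above.
# The totals are the approximate figures reported in this section; the
# dictionaries are hypothetical stand-ins, not the agencies' actual data.

allocations_millions = {
    "Guatemala": 1070,
    "Honduras": 749,
    "El Salvador": 496,
    "Multi-country": 53,
}

project_counts = {
    "Guatemala": 126,
    "Honduras": 106,
    "El Salvador": 86,
    "Multi-country": 52,
}

def percentage_shares(counts):
    """Return each category's rounded share of the total, in percent."""
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}, total

alloc_shares, alloc_total = percentage_shares(allocations_millions)
proj_shares, proj_total = percentage_shares(project_counts)

print(f"Total allocations: ~${alloc_total / 1000:.1f} billion")  # ~$2.4 billion
print(alloc_shares)  # {'Guatemala': 45, 'Honduras': 32, 'El Salvador': 21, 'Multi-country': 2}
print(f"Total projects: {proj_total}")  # 370
print(proj_shares)
```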
Some agencies funded projects that supported multiple sectors and objectives, while others focused on a specific sector supporting one of the three objectives (see fig. 4). For example, USAID and State supported all three objectives by implementing projects in a variety of sectors. However, USDA supported only the prosperity objective by implementing projects primarily in the agricultural development sector, and DOD supported the security objective by implementing projects primarily in the professionalize the military and develop defense capabilities sector. We also identified some specific assistance projects that supported more than one of the three objectives. For example, some of USAID's workforce development projects targeted at-risk youth, which supported both the prosperity and security objectives. Other USAID projects worked with government officials in the Northern Triangle to improve health, environment, or economic growth, which supported both the prosperity and governance objectives. In addition, State's rule of law projects, which trained police and other personnel in the judicial sector, supported both the governance and the security objectives. Below is an overview of the agencies' general roles and responsibilities for supporting the three objectives:
Prosperity: USAID, State, and USDA implemented projects supporting the prosperity objective. USAID implemented projects to help populations meet basic needs, help businesses access markets for goods and services, build a skilled workforce, and enhance health systems and education institutions. For example, one USAID economic growth project in El Salvador provided assistance to small enterprises through university-affiliated training centers where representatives of firms received training and advice to improve their business practices. State also implemented projects to help businesses and entrepreneurs develop their capabilities. For example, State implemented a multi-country project to provide training to small and medium businesses on e-commerce platforms to access new markets and increase sales. USDA and USAID both implemented projects intended to help farmers improve agricultural management practices and increase their access to markets and capital. For example, a USAID agricultural development project in Honduras provided training to local farmers to increase their household incomes, strengthen access to food markets, and diversify their crops (see fig. 5). A USDA project provided schools in Honduras with food assistance, infrastructure improvements, and training to support school feeding, and sought to improve educational outcomes (see fig. 6).
Governance: USAID and State were the primary agencies supporting the governance objective. USAID projects provided technical assistance to governments to increase accountability, transparency, revenue collection, and provision of basic services. For example, a USAID project in Guatemala provided technical assistance to municipal governments to improve their financial management and increase the quality of government-provided services such as water and sanitation systems. State and USAID also supported this objective through projects to strengthen justice institutions, combat corruption, improve democratic processes, and advocate for the protection of human rights.
For example, we visited a morgue in Honduras where USAID and INL collaborated to provide forensic training and equipment and improve evidence collection and analysis capabilities, to better prosecute crimes (see fig. 7).
Security: State, USAID, and DOD implemented projects to support the security objective. USAID and INL projects supported community based activities to prevent violence by supporting community youth centers, strengthening community policing, and implementing workforce development projects for at-risk youth. For example, a USAID project in Honduras provided technical training, mentorship, and job placement support for at-risk youth. INL also provided training and equipment to law enforcement to improve its capabilities and reputation in communities and to better identify and prevent crime, violence, and gang activity. For example, we visited the International Law Enforcement Academy in El Salvador, where U.S. assistance provides a variety of training courses to Central American and South American police, judges, and prosecutors, to increase capacity and coordination among law enforcement officials (see fig. 8). In addition, State funded, and DOD funded and implemented, projects to train and equip Northern Triangle militaries. DOD officials in Honduras, for example, told us they provide a range of training to Honduran military leaders at U.S. military schools. See appendix II for a summary of U.S. assistance projects in the Northern Triangle for our six selected sectors.
Agencies Reported That Projects Implemented from Fiscal Years 2013 through 2018 Achieved Mixed Results
State, USAID, DOD, and USDA reported mixed results, primarily focused on outputs, for the 190 projects in the six sectors we reviewed. While some projects in these sectors achieved the targets that agency officials established, others did not. We reviewed a variety of performance-related documents for the 190 projects that aligned with our six selected sectors—economic growth, agricultural development, good government service, justice reform, community based violence prevention, and professionalize the military and develop defense capabilities. Specifically, we reviewed State and USAID's Performance Plan and Reports (PPR) for fiscal years 2013 through 2018 for each Northern Triangle country, and State's International Narcotics Control Strategy Reports (INCSR) for fiscal years 2013 through 2018. We also reviewed State and USAID's Progress Report for the Strategy for fiscal years 2018 and 2019, and State's quarterly country cables reporting on agencies' progress in implementing projects that support the Strategy's objectives in each of the Northern Triangle countries for available quarters of fiscal years 2016 through 2018. In addition, we reviewed implementer progress reports for a sample of 19 projects to obtain more detailed information on project-specific outputs and outcomes, as well as all available evaluations related to the six sectors completed from fiscal years 2013 through 2018. Examples of results for projects related to each of the six sectors include the following.
Economic Growth: USAID implemented projects to help workers improve their access to employment and to help firms improve their business practices and access markets. According to the PPRs we reviewed, USAID achieved 81 of 123 (66 percent) of its targets for performance indicators related to the economic growth sector for fiscal years 2013 through 2018. In addition, all nine evaluations in the sector reported generally positive project results.
For example, according to the PPR, USAID assisted 176 firms to invest in improved technologies and 329 firms to improve their management practices in Guatemala in fiscal year 2017, exceeding the targets of 141 and 310, respectively. In addition, 5,067 individuals completed workforce development programs with U.S. assistance in the Northern Triangle countries in fiscal year 2018, according to the PPRs. USAID reported that 1,376 individuals completed workforce development programs in Guatemala, which exceeded the target of 1,000. However, USAID reported that 3,040 individuals in El Salvador and 651 individuals in Honduras completed such programs, which did not meet the fiscal year targets of 7,300 and 5,000, respectively. According to an evaluation of a USAID project in El Salvador that focused on providing training to individuals to improve their job opportunities, 3,585 individuals completed the training, which was 175 fewer than expected due, in part, to the project's focus on training individuals for existing jobs and the scarcity of job opportunities for some individuals who completed the training.
Agricultural Development: USAID and USDA implemented projects that provided assistance to apply improved agricultural technologies or management practices and to increase agricultural productivity and food security. According to the PPRs we reviewed, USAID achieved 58 of 86 (67 percent) of its targets for performance indicators related to this sector for fiscal years 2013 through 2018, and six of eight evaluations of agricultural development projects generally reported positive project results. For example, USAID reported in the PPR that 35,245 individuals in Honduras received short-term training with U.S. government support on agricultural productivity or food security in fiscal year 2018, exceeding the fiscal year target of 32,500, but 40,492 individuals received such training in Guatemala, which did not meet the target of 52,417. According to an implementer progress report, as of March 2017, an ongoing USDA school feeding project in Honduras had helped to construct and rehabilitate kitchens and food storage facilities at five of the 30 schools targeted by the project in 2017. An evaluation of a USDA project in El Salvador reported that the project issued 307 agricultural loans to improve agricultural production, which did not meet the target of 345 loans due, in part, to a delay in implementing the project.
Good Government Service: USAID implemented projects to help create accountable and effective government institutions through improved provision of government services, increased citizen oversight, and greater ethics and transparency. According to the PPRs we reviewed, USAID achieved 22 of 30 (73 percent) of its targets for performance indicators related to this sector for fiscal years 2013 through 2018. Some of the projects achieved mixed results, according to an evaluation of projects in this sector. For example, USAID reported in the PPRs that it exceeded fiscal year 2018 targets by providing assistance to 94 local governments in Honduras to improve public service and by training over 2,600 individuals in Guatemala in fiscal management to strengthen local government and foster decentralization. USAID also met its fiscal year 2018 target in Honduras, where 81 public policies were introduced, adopted, repealed, changed, or implemented with citizen input.
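The target-achievement rates quoted from the PPRs convert to the percentages shown by simple division and rounding. A minimal sketch, using the sector counts cited above (the mapping is illustrative, not drawn from the PPR data files themselves):

```python
# Illustrative computation of the PPR target-achievement rates cited in
# this section: indicator targets met out of targets set, per sector.

ppr_targets = {
    "Economic growth": (81, 123),
    "Agricultural development": (58, 86),
    "Good government service": (22, 30),
}

for sector, (met, total) in ppr_targets.items():
    print(f"{sector}: {met} of {total} targets met ({met / total:.0%})")

# Output:
# Economic growth: 81 of 123 targets met (66%)
# Agricultural development: 58 of 86 targets met (67%)
# Good government service: 22 of 30 targets met (73%)
```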
A USAID project in Guatemala designed to better manage public resources and government services reported in its fiscal year 2017 annual report that it helped 76 percent of the municipalities involved in the project increase their average monthly revenues following the project's financial management training. However, an evaluation of two USAID projects in Honduras found that one project did not meet 70 percent of its targets and struggled to successfully promote decentralization laws or increase municipal fiscal autonomy.
Justice Reform: USAID and State provided technical assistance and equipment to help improve the efficiency of the courts and forensic laboratories, and strengthen the capabilities of prosecutors and judges. According to the PPRs we reviewed, USAID achieved 27 of 41 (66 percent) of its targets for performance indicators related to this sector for fiscal years 2013 through 2018. For example, according to the PPRs, 2,298 government officials in El Salvador received anti-corruption training with U.S. assistance in fiscal year 2018, surpassing the fiscal year target of 1,845. However, according to the PPRs, 150 individuals affiliated with nongovernmental organizations received such anti-corruption training in Guatemala in fiscal year 2017, which was below the fiscal year target of 550. The Progress Report for the Strategy for fiscal year 2019 reported that USAID assisted 244 courts in Guatemala to improve their case management systems in fiscal year 2018, which surpassed the target of 220. The Progress Report for the Strategy also reported that State and USAID trained 12,557 justice system personnel, including prosecutors and criminal investigators, in the Northern Triangle in fiscal year 2018, which surpassed the target of 2,275. Although State did not report targets, it provided data in its annual INCSR on U.S.-supported training, including training more than 1,000 police and justice sector personnel in El Salvador in 2016 and 2017, and 262 students in criminal investigations in Honduras in 2013. An evaluation of a USAID project in Guatemala noted the project helped improve prosecution practices and court management, but the evaluation also noted that continuous support would be required to preserve and consolidate reforms.
Community Based Violence Prevention: USAID and State supported a number of efforts under the security objective to prevent violence in communities. According to the PPRs we reviewed, USAID achieved 7 of 18 (39 percent) of its targets for performance indicators related to this sector for fiscal years 2013 through 2018. For example, in El Salvador, 13 U.S. government-supported schools or other learning spaces met the criteria for the safe schools program in fiscal year 2018, surpassing the target of 10 schools. However, according to the PPR, in Honduras approximately 161,300 individuals participated in U.S.-funded gang prevention and education in fiscal year 2018, which did not meet the fiscal year target of 219,600. The Progress Report for the Strategy for fiscal year 2018 reported that State's Gang Resistance Education and Training Program (GREAT) reached tens of thousands of youth, and that hundreds of police officers received instructor certifications to deliver anti-gang and crime prevention training through the program in the Northern Triangle in fiscal year 2017. However, State did not report targets for the program for the fiscal year.
According to an implementer progress report, as of June 2018, an ongoing USAID project in Honduras that provides workforce development services for at-risk youth had enrolled 2,528 of the project's target of 6,500 youths for fiscal years 2017 and 2018. In addition, 440 of the project's target of 2,488 youths for those fiscal years had completed the workforce development services as of June 2018, according to the report.
Professionalize the Military and Develop Defense Capabilities: DOD and State supported efforts to professionalize the militaries of the Northern Triangle countries and develop their defense capabilities. While DOD and State reported positive output results for this sector, they also reported some limitations. According to the PPRs we reviewed, State achieved 48 of 71 (68 percent) of its targets for performance indicators for this sector for fiscal years 2013 through 2018. For example, in fiscal year 2018, State reported that 100 military personnel in Guatemala received technical or tactical training, which met the fiscal year target. State also reported that Guatemalan military personnel completed 12 exercises with U.S. or coalition personnel as a result of U.S. government assistance, which also met the target for fiscal year 2018. However, State reported that it supported the training of 44 full-time peacekeeping staff in El Salvador in fiscal year 2017, which did not meet the target of 155. In its monitoring progress reports from fiscal years 2013 to 2018, DOD reported that it provided international military education and training to over 2,000 military personnel in the Northern Triangle, although DOD did not report targets. DOD personnel also engaged directly with Central American military personnel to improve their professionalism. For example, in Guatemala, DOD helped to establish a defense budget system designed to increase transparency and accountability of funds within the Ministry of Defense. However, DOD has reported ongoing challenges regarding the professionalism of Northern Triangle militaries and noted that public trust in the militaries remains low.
Limited Information Is Available on Progress toward Prosperity, Governance, and Security
Based on our review of various performance-related documents, we found limited information on progress toward improving prosperity, governance, and security in the Northern Triangle. Specifically, agencies generally reported more information about progress toward prosperity than toward governance and security. Some of the evidence about governance and security may be limited because evaluations were conducted unevenly across agencies and sectors. In addition, project implementers did not consistently collect key information to assess progress toward the Strategy's objectives. Nevertheless, agency officials cited examples of important results from U.S. assistance as well as challenges to achieving progress toward the objectives. In addition, the Strategy's monitoring and evaluation plan is not comprehensive because, while the plan specifies that State and USAID should track evaluations of their projects, it does not include a plan for evaluations of projects conducted by agencies other than State or USAID.
Agencies Reported More Information on Progress toward Prosperity than toward Governance and Security for the Sectors We Reviewed
For the sectors we reviewed, agencies generally reported more information on progress toward prosperity for projects related to economic growth and agricultural development than toward governance and security. In addition, agencies generally reported positive information on progress toward prosperity for projects related to these sectors. For example, an evaluation of a USAID economic growth project in Guatemala reported the project supported 64 public-private partnerships that managed $39.1 million in investment, primarily from the business sector, for health, nutrition, and education activities to improve economic growth and development. In addition, USAID reported in the PPR that small and medium-sized firms assisted by its projects in El Salvador increased annual sales by approximately 40 percent in fiscal year 2016, which exceeded the target of 29 percent. In Guatemala, USAID also helped to increase the value of crop yields by about $62 million and reduced household poverty by about 12.6 percent through two projects that trained agricultural producers in farm management practices and helped them access markets, according to an evaluation. Finally, an evaluation of a USDA agricultural development project in El Salvador reported that it helped generate approximately 12,930 new jobs, significantly exceeding the project's goal of 900 jobs, in part through increased access to credit and credit-competency training. In general, however, little information was available from agency reports about progress toward the governance and security objectives. For example, an evaluation of a USAID project in good government service in Honduras that provided technical assistance to local governments to improve citizen satisfaction with services reported improvements in the quality of water and health services in most of the targeted municipalities, although the evaluation noted that the project had not developed appropriate indicators to measure results that were directly attributable to the project's activities. Despite these improvements, the evaluation reported that the services remained largely unable to satisfy citizen needs adequately, and there was little evidence that municipalities would have the capabilities or resources to continue to improve the services without donor assistance. The evaluation also noted that the project promoted citizen advocacy by providing training to citizen oversight committees and establishing well-attended town halls in rural municipalities. However, it found no evidence such efforts were effective because the organizations remained too weak to advocate effectively for improved accountability and service. Another evaluation of a USAID project to prevent community based violence in Honduras reported significant reductions in homicide rates, ranging from 42 percent to 68 percent, in four of the six targeted communities, but also noted that these outcomes might not be attributable to the project's activities. Although there were no evaluations of projects in the sector for professionalize the military and develop defense capabilities, DOD reported in its after-action reports that it trained dozens of personnel who subsequently held positions of prominence within Northern Triangle militaries.
The differences in results information for the three objectives are likely due, in part, to variations in the number of evaluations agencies conducted for their Northern Triangle projects. For example, we found that evaluations had been conducted unevenly across the agencies and six sectors we reviewed. Figure 9 shows the number of projects and completed evaluations of projects in the Northern Triangle that support the Strategy by agency and selected sector from fiscal years 2013 through 2018. From fiscal years 2013 through 2018, agencies completed 23 evaluations across the six sectors, which related to the 190 projects that agencies implemented in these sectors during this period. USAID completed 16 of these evaluations, with more than half of them in economic growth, although only 19 of the 116 projects USAID implemented in the sectors we reviewed related to economic growth. USDA completed six of these evaluations in agricultural development. State completed one evaluation in justice reform. DOD did not conduct any evaluations of its efforts to professionalize the military and develop defense capabilities in the Northern Triangle. In January 2017, DOD established agency-wide guidance for conducting assessment, monitoring, and evaluation of security cooperation programs and activities.
Project Implementers Did Not Consistently Collect Key Information to Assess Progress toward the Strategy's Objectives, but Officials Noted Improvements
We found that project implementers for State and USAID did not consistently collect key information to evaluate progress toward outcomes. Specifically, 12 of the 23 evaluations we reviewed from fiscal years 2013 through 2018 cited instances in which projects had not established measures or collected data to measure outcomes. Six of the 17 evaluations we reviewed for the economic growth and agricultural development sectors noted that implementers had not collected sufficient data to measure the projects' outcomes. For example, an evaluation of a USAID project that supported municipalities to mobilize financial resources for economic development noted that evaluators were unable to measure whether the project's activities improved the municipalities' competitiveness in providing services to businesses and investors. The evaluators could not perform this assessment because the project implementers did not consistently collect data to measure improvements in the local business climate. An evaluation of USAID projects in agricultural development in Guatemala noted that evaluators were unable to assess the total welfare impacts of the projects, such as changes in household incomes, because the projects had not collected information on household or farmer incomes from all sources with which to compare results following project activities. All four evaluations we reviewed in the good government service and justice reform sectors noted that the projects did not sufficiently establish or measure the projects' outcomes. For example, an evaluation of two USAID projects in Honduras for good government service found that one project did not incorporate indicators to measure outcomes. While the other project incorporated outcome indicators, the evaluation found most of these indicators to be poorly defined and inadequate to measure the project's results. An evaluation of a State project in justice reform in Honduras also found that project indicators were focused on outputs and not outcomes.
The evaluation also noted that the indicators were established after the project started and thus did not establish a true baseline or capture results from the beginning. As a result, evaluators reported that they lacked the data to evaluate key results. The two evaluations we reviewed of projects to prevent community based violence discussed deficiencies with progress indicators. For example, an evaluation of a project in Honduras that focused on reducing homicide rates noted that the implementing partner relied on the Honduran government to obtain data on homicides, although the government had limited capability to document and report such data. USAID officials noted that USAID and project implementers have made improvements to projects' monitoring and evaluation plans in response to evaluation findings. For example, project implementers have added outcome indicators, and USAID officials have provided technical assistance to implementers to help them design new methods for collecting data in response to evaluation findings and recommendations, according to USAID officials.
Agency Officials Described Progress and Challenges in Achieving Prosperity, Governance, and Security
Although our review of various performance-related documents related to the six sectors shows that limited information from evaluations is available on progress toward prosperity, governance, and security, agency officials described some important results from U.S. assistance in the Northern Triangle related to these sectors. For example, USDA officials noted that technical assistance and training helped to enhance crop research and water and soil conservation, which contributed to increased agricultural production. USAID officials noted that the technical assistance the agency has provided to small and medium-sized firms has helped them access markets and increase sales. State and USAID officials also described improvements in the use of forensic evidence through technical assistance and training provided to judges and prosecutors and enhanced court management, which contributed to timely criminal investigations and prosecutions. In addition, State officials explained that U.S. assistance, along with support from other donors and host governments, has contributed to positive results, including the passage of laws that prevent organized crime from donating to political campaigns, multiple anti-corruption investigations, and reductions in homicide rates through community based violence prevention projects. Furthermore, DOD officials noted that assistance in defense planning and management helped support oversight and accountability in the use of military funds and enhanced the capacity of security forces to respond to disaster relief and drug interdiction efforts. Agency officials also noted that from fiscal years 2013 through 2018 they achieved results toward enhanced prosperity, governance, and security for the 180 projects that corresponded to the 12 sectors outside of the scope of our review. In particular, USAID officials noted that environment sector projects increased incomes for thousands of individuals through improved management and conservation of natural resources, such as watershed management.
State officials also described important results from projects in the human rights sector, including strengthening the capacity of labor union networks to monitor and document hundreds of incidents of violence against union activists in Guatemala and Honduras and increasing the number of investigations into such incidents. In addition, State officials identified results in the police reform sector, including passage of police reform legislation, professionalization of police academies, and sharing of information among law enforcement. Agency officials we interviewed also cited examples of challenges to achieving progress toward prosperity, governance, and security. For example, USDA and USAID officials noted that drought and coffee rust—a fungal disease that harms coffee plants—reduced agricultural production in affected areas. USAID officials also pointed out that the health of the economy and labor markets affects the results of economic growth projects, particularly with regard to firms' sales and the placement of individuals in jobs following their completion of workforce development programs. In addition, State and USAID officials cited government officials' willingness to implement reforms as an important factor that affects the achievement of results across sectors. Furthermore, high turnover of civil service and military professionals affects the achievement and sustainability of results in various sectors, according to State, USAID, and DOD officials. Agency officials also explained that they have taken steps to modify projects to address such challenges. For example, USAID and USDA projects have provided technical assistance and training to farmers on how to prevent coffee rust and cultivate coffee varietals resistant to the disease.
Strategy's Monitoring and Evaluation Plan Is Not Comprehensive
In its coordinating role for the implementation of the Strategy, State has not created a comprehensive monitoring and evaluation plan that specifies an approach to evaluating progress across all agencies. Our prior work regarding effective foreign assistance strategies found that development of a monitoring and evaluation plan is a key element in assessing agencies' common goals and objectives and mutually reinforcing results. Additionally, we found that foreign assistance involves the collaborative efforts of multiple agencies, and strategies that consistently address agencies' roles and responsibilities and include interagency coordination mechanisms can guide effective collaboration among agencies and prevent fragmentation. In addition, Standards for Internal Control in the Federal Government indicates that managers should identify the information needed to achieve objectives and use such information to evaluate performance in achieving objectives. State, in coordination with USAID, has developed and updated a monitoring and evaluation plan for funds appropriated to them to implement the Strategy in response to direction contained in committee reports accompanying several State, Foreign Operations, and Related Programs appropriations acts. However, the plan that State and USAID developed for the Strategy, while consistent with the committee reports' direction, is not comprehensive. In particular, it does not incorporate all the relevant agencies, sectors, and activities that support the Strategy's objectives. The plan notes that State and USAID will monitor and evaluate foreign assistance supporting the Strategy.
While the plan specifies that State and USAID should track completed, ongoing, and planned evaluations of their projects supporting the Strategy's objectives, it does not include a plan for evaluations of projects conducted by agencies other than State or USAID, such as DOD and USDA. Additionally, the plan notes that each agency requires project monitoring, including progress indicators, baselines, targets, and expected outcomes of projects. The plan specifies that State will compile and report performance data, which will provide an important source of information to assess progress toward Strategy objectives. However, the plan does not specify how State and USAID would include reporting on many activities conducted by other agencies that support the Strategy's objectives. As a result, State officials noted the monitoring and evaluation plan does not include indicators for DOD and USDA activities that contribute to the objectives of the Strategy, with the exception of DOD activities funded through State. For example, State, in addition to determining the scope of security assistance and funding level for each recipient of International Military Education and Training (IMET) programs, also identifies annual IMET goals and objectives for each country. DOD administers IMET in coordination with State. State and USAID's monitoring and evaluation plan includes indicators to measure progress of these programs. DOD, however, conducts a number of other programs to professionalize the military that State and USAID have not included in the monitoring and evaluation plan. For example, DOD provides training to Northern Triangle militaries and Ministries of Defense that is outside of the IMET program, such as Defense Government Management and Training engagements. The Progress Report for the Strategy for fiscal year 2018 indicated that under the IMET program there were 13 U.S.-trained personnel in positions of prominence, or positions of military or government leadership, in the Northern Triangle in fiscal year 2017. DOD, though, in a separate report on these military training and education programs, noted there were over 100 U.S.-trained personnel in positions of prominence in the Northern Triangle in fiscal year 2017. In addition, the monitoring and evaluation plan does not include any of USDA's activities or activities related to the health sector that support the Strategy's objectives, despite the fact that USDA completed six evaluations of its agricultural development projects that could be used to inform an understanding of progress toward the Strategy's objectives. By not capturing information on DOD and USDA activities, State and USAID have limited ability to assess the progress made by all U.S. government agencies in the Northern Triangle. State officials stated that the monitoring and evaluation plan does not include DOD and USDA activities because the legislative direction for the plan did not require it. The Strategy, however, intends to take a comprehensive, integrated, and whole-of-government approach to engagement in Central America. DOD and USDA officials in headquarters and at the Missions in El Salvador, Guatemala, and Honduras told us that their activities also support the Strategy's objectives.
Given its coordinating role in the Strategy's implementation and in foreign policy objectives in general, State is well positioned to work collaboratively with officials from other agencies to develop a comprehensive approach to monitoring the impact of all activities across all sectors that directly support the Strategy's objectives. A comprehensive monitoring and evaluation plan that specifies an approach to evaluating progress across all agencies would help State and USAID to determine to what extent U.S. government activities in the Northern Triangle are achieving the Strategy's desired results.
Conclusions
The Northern Triangle, an area of strategic interest to the United States, faces high levels of poverty, weak governance, and widespread violence and insecurity. To respond to these challenges, the U.S. government has for many years provided assistance to the region. Multiple agencies have allocated billions of dollars to implement hundreds of projects that have provided technical assistance, equipment, and training to thousands of individuals and organizations. Agencies have reported mixed results from these projects, relative to targets set, yet little is known about progress on meeting broader objectives to improve prosperity, governance, and security in the region. Under the U.S. Strategy for Engagement in Central America, State and USAID developed a monitoring and evaluation plan, for their own projects, that is an important tool for assessing impact in the region. A more comprehensive approach to monitoring and evaluating projects that address the Strategy's objectives, one that includes all relevant agencies, sectors, and activities, would enable the U.S. government to have a better understanding of progress under the Strategy and how U.S. assistance is addressing the underlying challenges that confront El Salvador, Guatemala, and Honduras. Given State's coordinating role in the implementation of the Strategy among U.S. government agencies, including DOD and USDA, it is uniquely positioned to ensure that agencies collaborate effectively and that monitoring and evaluation are well coordinated and documented in a comprehensive plan.
Recommendation for Executive Action
The Secretary of State, working with the Administrator of the U.S. Agency for International Development, should collaborate with the Departments of Defense and Agriculture, and other departments as necessary, to develop a comprehensive approach to the monitoring and evaluation of projects that directly support the objectives of prosperity, governance, and security, and incorporate this approach into the Strategy's monitoring and evaluation plan.
Agency Comments and Our Evaluation
We provided a draft of this report to State, USAID, DOD, USDA, DOJ, and DHS. We received written comments from State, USAID, and DOD, which we reprinted in appendixes V through VII. We received technical comments from State, USAID, DOD, and DHS, which we incorporated as appropriate. USDA and DOJ informed us in writing that they had no comments. State and USAID did not concur with our recommendations, indicating that neither agency has the authority to direct DOD or USDA to design and implement programs. USAID indicated that while greater interagency coordination would be appropriate, it does not have the authority to direct DOD or USDA to monitor and evaluate their projects against objectives developed for the Strategy.
DOD noted that while some of its programs enable progress toward the Strategy's objectives, it is not appropriate for State to specify how to monitor and evaluate DOD-funded programs. State also asserted that our recommendation is not consistent with the explanatory statements accompanying the Department of State, Foreign Operations, and Related Programs Appropriations Act, which directs State and USAID to develop a monitoring and evaluation plan for the Strategy for programs funded by appropriations to them, but does not direct that the plan include monitoring and evaluation of programs funded by appropriations to DOD and USDA. We are not recommending that State and USAID direct DOD and USDA to monitor and evaluate projects, but rather that State collaborate with DOD and USDA to develop a more comprehensive approach to monitoring and evaluating projects that support the Strategy's objectives and that State document the results of this collaboration in the Strategy's monitoring and evaluation plan. We do not prescribe the format or content for how the Strategy's monitoring and evaluation plan might be updated. We have modified relevant sections of our report and our recommendation to make this clearer and eliminated the recommendation to USAID, since State coordinates implementation of the Strategy by the various agencies of the U.S. government. We found that DOD and USDA have designed and implemented programs that directly support the objectives of the Strategy. While we acknowledge that some coordination among agencies occurs in Washington and in the Northern Triangle, we found that such coordination does not formally extend to monitoring and evaluation. We agree with USAID's comment that interagency coordination on a comprehensive monitoring and evaluation plan for the Strategy would be appropriate. Consistent with USAID's comment, we believe that our recommendation encourages greater coordination among agencies, including DOD and USDA, by ensuring that comprehensive monitoring and evaluation efforts of the entire U.S. government are aligned with the monitoring and evaluation plan for the Strategy. Excluding DOD and USDA projects from the monitoring and evaluation plan for the Strategy could result in an incomplete or unclear understanding of the results of U.S. assistance in the Northern Triangle. Without a complete and clear understanding of the results across all agencies involved, agencies may miss important lessons about the types of assistance that are most effective in achieving U.S. objectives in this region, potentially limiting overall progress. Furthermore, while the explanatory statement accompanying Pub. L. No. 114-113 directs State, in coordination with USAID, to develop a monitoring and evaluation plan for funds appropriated to them, we are recommending that State, as coordinator for the implementation of the Strategy, work with the other agencies to develop a more comprehensive approach to monitoring and evaluating projects that support the Strategy's objectives. State should update the monitoring and evaluation plan that was created in response to the congressional direction to document the comprehensive approach to monitoring and evaluation.
State indicated that the credibility of our report was limited by the following five methodological issues: (1) our inclusion of projects implemented by DOD and USDA; (2) our inclusion of projects implemented with funds appropriated prior to fiscal year 2016; (3) our use of inconsistent reporting methods for funding allocations among the four State bureaus providing data and among State, USAID, DOD, and USDA; (4) our classification of program sectors, which was not consistent with the sub-objectives used by State and USAID as part of the Strategy; and (5) our exclusion of several "primary" sectors for our in-depth review, such as police professionalization, reducing violence at the local level, and reducing the influence of organized crime and gangs. We believe that our methodology enhanced the credibility and reliability of our report. Overall, we designed our objectives, scope, and methodology, as outlined in detail in appendix I, to provide a reasonably comprehensive review of the results of U.S. assistance to the Northern Triangle toward achieving key U.S. objectives. First, we chose to review all agencies that have allocated a significant amount of funding from their appropriations to implement projects in the Northern Triangle from fiscal year 2013 through fiscal year 2018 to support prosperity, governance, and security. DOD and USDA officials confirmed that DOD and USDA projects support these objectives and we believe that the inclusion of these agencies significantly enhanced the accuracy and completeness of our reporting on the results that have been achieved from U.S. assistance as well as the gaps in the current monitoring and evaluation approach and implications for State's ability to assess results comprehensively. Second, we believe our inclusion of projects implemented from fiscal years 2013 through 2018 provided a reasonable time frame for our review because it included projects that supported the objectives of improving prosperity, governance, and security—long-standing objectives that predated appropriations for the Strategy, and even the Strategy itself. Including projects implemented between fiscal years 2013 and 2018 increased our ability to report on the results of agencies' projects and their overall progress toward the Strategy's objectives because projects funded since fiscal year 2016 were in too early a stage of implementation to report meaningfully on such results. However, we considered, as appropriate, any results information we were able to obtain on such projects. Third, we acknowledge that the precision of our estimates for reporting on funding allocations was limited due to the inconsistent nature of reporting of financial data by different bureaus and agencies. However, taking into consideration qualifications noted throughout our report, we believe that our reporting of funding allocations provides a reliable description of how agencies used allocated funding to support prosperity, governance, and security objectives. Fourth, we believe that our classification of projects under different sectors we identified provides a detailed, comprehensive, and meaningful analysis of projects and related results. Because some of the sub-objectives developed by State and USAID, such as "reduce poverty," were very broad and did not lend themselves to an analysis of specific project sectors that supported the Strategy's objectives, we identified more specific sectors, including health, economic growth, and agricultural development.
State and USAID officials validated the accuracy of our definitions, and we revised them as appropriate, given input from agency officials. Fifth, our selection of six sectors for in-depth review of projects and results limits the generalizability of our findings to all sectors, which we note. Due to the large number of projects, sectors, and sub-objectives associated with U.S. assistance to the Northern Triangle, we determined that a case study approach was the most effective methodology for our review. We devised selection criteria to reflect a meaningful selection of projects across sectors, agencies, and countries. Moreover, two of the sectors we selected for in-depth review—community based violence prevention and justice reform—encompass several projects classified as relating to "reducing violence at the local level" and "reducing the influence of organized crime and gangs." Thus our report addresses results in these sectors. We omitted certain sectors, such as police professionalization, in part, because we had ongoing work related to this sector. We acknowledge limitations with this case study approach and do not attempt to generalize results beyond the sectors we reviewed. We believe that this methodological approach provides a reasonable basis for our overall conclusion that projects in the sectors we reviewed achieved mixed results. USAID also raised several methodological concerns, some of which were similar to those raised by State. In particular, USAID (1) questioned the validity of our analysis, since it was based on a case study of six of the 18 sectors we identified, and commented that we did not discuss the limitation of this approach; (2) questioned the validity of our use of monitoring information relating to the achievement of annual targets to analyze results; and (3) asserted that we focused on negative evaluation findings to assess results and did not mention or analyze planned and ongoing evaluations or programmatic changes made in response to monitoring and evaluation information. We believe our methodological approach provides a reliable basis for our findings and conclusions, and the concerns USAID raised do not limit the credibility of our report. First, we acknowledge the limitations of our case study approach and included statements throughout our report to make these limitations clear. Second, we believe that the use of data on the achievement of annual targets is a valid approach to assessing results, although the agencies collecting the data may also intend to use them in making decisions about ongoing projects. Furthermore, these data provided only one element of our analysis. We also analyzed State and USAID implementer progress reports, mid-point and final evaluations, and other performance reports, which provide a longer-term perspective on results. Collectively, we believe that this information provides meaningful insight into the successes and shortcomings of the projects in the sectors we reviewed. Third, we sought to present a balanced picture of results within the sectors we reviewed, highlighting both positive and negative outcomes described in the reviewed documents. We reviewed completed evaluations to provide insight into project results, but excluded ongoing and planned evaluations because conclusions about project results are not available until such evaluations are completed.
Similarly, our report acknowledges that agency officials described progress and challenges to achieving the prosperity, governance, and security objectives, as well as the steps taken to modify projects to address such challenges. However, such modifications fell outside the scope of our analysis of results, absent documentation of their specific impact on the achievement of objectives. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Department of State, the Administrator of the U.S. Agency for International Development, the Secretary of the Department of Defense, the Secretary of the Department of Agriculture, the Secretary of the Department of Homeland Security, the Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.
Appendix I: Objectives, Scope, and Methodology
This report examines (1) the projects that the U.S. government has implemented in the Northern Triangle from fiscal year 2013 through fiscal year 2018 to support prosperity, governance, and security, (2) what is known about the results of these projects, and (3) what is known about progress toward the U.S. Strategy for Central America's (Strategy) objectives. To determine the projects that the U.S. government has implemented in the Northern Triangle, we collected and analyzed agency project and funding data concerning foreign assistance projects supporting prosperity, governance, and security objectives from the U.S. Agency for International Development (USAID) and the Departments of State (State), Justice (DOJ), Homeland Security (DHS), Defense (DOD), and Agriculture (USDA). We focused our analysis on State, USAID, DOD, and USDA because they allocated the largest amounts of funding for the largest number of projects in the Northern Triangle from fiscal years 2013 through 2018. We included projects beginning in fiscal year 2013 to ensure we examined projects that had undergone sufficient implementation to assess results. We obtained the data and information from several bureaus at State that administer these projects and funds: International Narcotics and Law Enforcement Affairs; Western Hemisphere Affairs; Political-Military Affairs; and Democracy, Human Rights, and Labor. We also obtained data from DHS and DOJ concerning projects implemented through agreements with State, which we included under State's project and funding counts. Although agencies use different terms to describe their assistance, including programs, projects, and activities, we use the term "projects" to refer to assistance funded by the key agencies and implemented directly by the agencies or through awards made to implementing partners. In general, a project consists of a set of activities that are designed and executed over a set time frame to achieve a specific aim.
While agencies and bureaus typically provided us with project-level data, some agencies and bureaus were unable to report data at the project level, and instead provided us with data that combined multiple activities or awards to implementing partners to accomplish a broader aim. In addition, most agencies reported project and funding data by country, including separating funding data for multi-country projects that were implemented in two or more countries, including at least one Northern Triangle country. Some agencies were not able to report multi-country projects by country; we included these projects in the multi-country project category. Since most agencies and bureaus provided us with project-level data separated by country, we use the term "projects" to encompass all available data on agencies' assistance in each of the three countries. We analyzed agencies' data and information to identify the number of projects implemented by agency and country and the total funding agencies allocated for these projects from fiscal years 2013 through 2018. We excluded from our analysis those projects that encompassed solely administrative and monitoring and evaluation activities and costs that did not provide technical assistance, although we included the funds allocated for these projects in our analysis of funds allocated by each agency for projects that supported prosperity, governance, and security. We assessed the reliability of the data that agencies reported for these projects. We requested and reviewed information from agency officials regarding the underlying data systems and the checks and reviews used to generate the data and ensure its accuracy and reliability. We also conducted logical checks and analysis to confirm the accuracy of the data. When we found potential duplicate data and discrepancies, we contacted relevant agency officials in Washington, D.C., and obtained information from them necessary to resolve these data issues. As a result of these steps, we determined that the data were sufficiently reliable for the purposes of reporting the number of projects that supported prosperity, governance, and security in El Salvador, Guatemala, and Honduras and funding allocations for these projects from fiscal years 2013 through 2018. To select a subset of the projects to review, we reviewed agencies' project information as well as Strategy documents to categorize all projects into 18 different sectors of assistance that generally aligned with the current objectives of the Strategy. Specifically, we grouped similar projects by sector, such as economic growth, justice reform, and community based violence prevention, and aligned them according to the Strategy's three objectives of prosperity, governance, and security. We requested that officials from State, USAID, DOD, and USDA review our analysis to confirm our alignment of projects to the sectors and the three objectives. We incorporated revisions from agency officials as appropriate. We then selected a judgmental, nongeneralizable sample of six of the 18 sectors for an in-depth review of performance-related documentation for projects supporting each of the objectives. The six sectors selected included agricultural development, economic growth, good government service, justice reform, community based violence prevention, and professionalize the military and develop defense capabilities. We selected these six sectors to achieve variation by agency, funding allocation amount, and country, and to include projects supporting each of the three objectives.
Specifically, we selected the six sectors to include two sectors supporting each objective, a distribution of projects across the three Northern Triangle countries, and the largest amounts of allocated funding and number of projects. We excluded from our sample selection the migration and police reform sectors because of our ongoing work in those sectors concerning the Northern Triangle. To determine what is known about project results, we reviewed agency performance-related documents corresponding to the 190 projects implemented from fiscal years 2013 through 2018 in the six sectors we reviewed. Specifically, we examined State and USAID's Performance Plans and Reports for El Salvador, Guatemala, and Honduras for each of fiscal years 2013 through 2018; State's International Narcotics Control Strategy Reports for fiscal years 2013 through 2018; State and USAID's Progress Report for the U.S. Strategy for Central America's Plan for Monitoring and Evaluation for fiscal years 2018 and 2019; and State's quarterly country cables reporting on agencies' progress in implementing projects in support of prosperity, governance, and security objectives in each of the Northern Triangle countries for the available quarters of fiscal years 2016 through 2018. We also requested and reviewed all 23 evaluations completed from fiscal years 2013 through 2018 by State, USAID, and USDA related to the six selected sectors in each Northern Triangle country. In addition, we selected a nongeneralizable sample of 19 projects within the six selected sectors to gain more in-depth information and context about project implementation and results. For the nongeneralizable sample of projects, we reviewed performance-related documentation, including, among other things, implementing partners' quarterly, semi-annual, and annual progress reports, to examine project results. We selected the 19 projects based on a variety of criteria, including the types of project activities and the objectives they supported, as well as to obtain a range of funding allocation amounts, countries, and agencies. We excluded from our sample selection those projects that encompassed solely administrative and monitoring and evaluation activities and costs, and those that agencies reported as pilot projects not yet implemented. To examine what is known about progress toward the Strategy's objectives, we reviewed Strategy documents, including monitoring and evaluation plans, to assess whether they included key elements of effective strategies that we have identified as related to assessment of progress toward strategic goals. We developed these elements on the basis of prior work related to U.S. government strategies and interagency collaboration as well as prior work on addressing fragmentation, overlap, and duplication in the federal government. Our prior work suggests that strategic documents offer an opportunity to define the roles and responsibilities of the various stakeholders involved in achieving strategic goals and to provide information on how progress toward those goals will be measured. The Strategy documents were reviewed and rated by two analysts to determine the extent to which the planning and reporting procedures aligned with the key elements for foreign assistance strategies in situations where multiple agencies work together to deliver foreign assistance.
These elements related to (1) delineation of agencies' roles and responsibilities and coordination mechanisms; and (2) assessment of progress toward strategic goals, including identifying activities to achieve results, performance indicators, and monitoring and evaluation plans. Additionally, in assessing the monitoring and evaluation plan, we considered the Standards for Internal Control in the Federal Government, which specify that managers should identify the information needed to achieve objectives and use such information to evaluate performance in achieving objectives. To determine State and USAID's rationale for not including other agencies' activities that support the objectives of the Strategy, we met with State and USAID officials in Washington, D.C. We also reviewed relevant Strategy documents and congressional legislation, particularly Public Law 115-31, the Consolidated Appropriations Act, 2017, which State and USAID cited as the basis for the creation of the Strategy's results architecture and monitoring and evaluation plan. To support our work on all three objectives, we conducted fieldwork in El Salvador, Guatemala, and Honduras. During the fieldwork, we observed selected project activities and interviewed agency officials, implementing partners, and project beneficiaries about the project activities and results, and factors that affected project results. We also interviewed agency officials in Washington, D.C., from relevant State bureaus, USAID, DOD, and USDA's Foreign Agricultural Service, as well as officials of the U.S. Southern Command in Doral, Florida, about project activities and results, factors affecting results and actions to address those factors, and efforts to monitor and evaluate project results. We conducted this performance audit from December 2017 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Summary of U.S. Assistance to the Northern Triangle for Selected Sectors, Fiscal Years 2013 through 2018
This appendix provides a summary of information on U.S. Agency for International Development (USAID) and the Departments of State (State), Defense (DOD), and Agriculture (USDA) assistance projects in the three Northern Triangle countries—El Salvador, Guatemala, and Honduras—to support the prosperity, governance, and security objectives of the U.S. Strategy for Central America (Strategy) from fiscal years 2013 through 2018. We provide a summary of information for the following sectors we selected by country, agency, funding allocation amount, and objective of the Strategy. The sectors include economic growth, agricultural development, good government service, justice reform, community based violence prevention, and professionalize the military and develop defense capabilities. For each sector, we provide an overview and examples of projects, including project objectives, activities, and results that State, USAID, DOD, and USDA reported toward improving prosperity, governance, and security in the Northern Triangle. The information about each sector also includes the following data, selected to illustrate the scope of U.S. assistance in each sector and the underlying conditions that impact prosperity, governance, and security in the Northern Triangle:
Total number of projects: The total number of projects we identified that supported each sector in each country from fiscal years 2013 through 2018.
Approximate Reported Funding: An estimate of the total allocated funding reported for the projects in each sector.
Context Indicators: Data reported from various organizations relevant to each of the sectors, including the World Bank, and reported in State and USAID's Progress Report for the Strategy for fiscal years 2018 and 2019. We did not independently verify these reported data.
Economic growth projects are intended to assist populations living below the poverty line to meet basic needs, help businesses improve their business practices and access markets and investment, and promote workforce development. USAID and State implemented 26 economic growth projects in the Northern Triangle from fiscal years 2013 through 2018.
Selected Examples of Economic Growth Projects
Context Indicators: National Poverty Rate
A USAID project in El Salvador brought together industries and higher education institutions to develop educational programs and research. The project trained 100 researchers at universities on how to complete applied research studies on the economy. This training, along with 26 applied research studies funded by the project, allowed for collaborative research between academia and the private sector that had not previously existed in El Salvador. The project also upgraded or created 28 new degree programs to align with industry demands. The project awarded 900 scholarships to students enrolled in these degree programs. A USAID project in Honduras provided assistance to rural micro-enterprises to improve their access to markets and competitiveness. The project helped 2,270 of these enterprises adopt new inputs, technologies, and practices for a range of entrepreneurial activities, such as installing solar panels and cultivating organically grown coffee, according to an evaluation. It also helped micro-enterprises achieve certifications from trade and business associations to help them access new markets with higher quality standards to obtain better prices for products, such as high-quality chocolate. A USAID project in El Salvador encouraged public-private partnerships and provided funds to help municipalities mobilize financial resources for improving economic development. It also intended to help municipalities streamline their administrative procedures to improve the local business climate. The streamlined procedures reduced the time required to complete business processes and diminished the chances for bribery and other illegal practices, according to an evaluation. A USAID project in El Salvador targeted over 10,000 micro-enterprises and 20 local governments to strengthen the capacity of providers of business development services to help these micro-enterprises improve innovation and technology, access financing, and increase exports. According to an implementer progress report, the project provided trade capacity-building assistance to at least 369 micro-enterprises to help them export. It also trained at least 491 entrepreneurs and 14 business consultants to develop export opportunities. A USAID project in El Salvador offered assistance to help workers obtain employment. It provided training to more than 5,600 individuals, including at-risk youth and disabled persons, to improve their job placement opportunities, according to an evaluation.
The project also placed 4,886 participants in new or improved jobs. The evaluation also noted that the firms participating in the project reported that the project's methods reduced their recruiting and hiring costs and risks and contributed to a decrease in employee turnover.
A USAID project in Honduras installed irrigation systems to grow lettuce and other crops.
Agricultural development projects are intended to help farmers increase the quantity and quality of crops through training, research, and better access to capital. They also seek to help farmers gain access to markets and to address food security. USAID and USDA implemented 40 agricultural development projects in the Northern Triangle from fiscal years 2013 through 2018.
Selected Examples of Agricultural Development Projects
Context Indicators: Rural Population (Approximate percent of total population, 2017); Rural Poverty (Approximate percent of rural population, 2014)
A USDA project in El Salvador aimed to help farmers improve agricultural productivity and expand trade. The project provided training to more than 500 individuals, approximately 99 percent of whom reported using the lessons they learned to improve their farm management practices, according to an evaluation. In addition, approximately 97 percent of them reported that they made business decisions based on economic considerations or analysis following the training. The project provided 35,215 microfinance loans, valued at approximately $37.5 million. Approximately 82 percent of the beneficiaries reported an increase in agricultural production and approximately 88 percent reported an increase in business sales because of the loans, according to an evaluation. Although the evaluation noted that the loans had the potential to expand agricultural trade, the effects were mixed. A USDA project in Guatemala that provided school meals doubled the number of schools that reported having access to food in six municipalities and provided more than 40,400 school-age children with daily meals, according to an evaluation. The evaluation also reported that the reduction in hunger from the project contributed to a decline in absentee rates for students at the participating schools, from 20 percent before its implementation to 5 percent. The project also constructed or rehabilitated kitchens at 106 schools and provided utensils and equipment for preparing food. A USDA school feeding project in Honduras provided meals to more than 50,000 children in 1,047 schools. The project also conducted education campaigns using local media to inform the population about the importance of education and the steps for enrolling children in school. Following the project's implementation, school attendance for boys increased by approximately 6 percent and for girls by approximately 2 percent, according to an evaluation. USAID projects in Guatemala that aimed to help small farmers improve their farming practices and gain access to markets had mixed results. For example, an evaluation noted that per capita and household incomes in municipalities included in the projects fared worse than those in municipalities that were not included. However, municipalities included in the projects fared better in access to electricity and rates of home ownership.
Municipal watershed reforestation project in Guatemala supported by USAID.
Good government service projects are intended to increase the effectiveness, efficiency, accountability, and transparency of government services and institutions.
They do so by providing training and technical assistance to improve revenue collection and management, promote transparency and citizen oversight, and enhance the quality of government services. USAID funded 29 good government service projects from fiscal years 2013 through 2018.
Selected Examples of Good Government Service Projects
Context Indicators: Government effectiveness (est.) (Percentage points changed, 2013 to 2017)
A USAID project in Honduras sought to decentralize government services to better respond to citizen needs. An evaluation noted that the project helped draft stronger decentralization laws, but these were not passed due to a lack of political will. The evaluation also reported the project provided technical assistance and training to municipal governments on revenue collection, fiscal management, and financial software systems intended to help raise revenue. However, the evaluation also found that 39 percent of municipalities reported decreases in fiscal autonomy. The evaluation also cited resource constraints, data inconsistencies in income records, and concerns about the sustainability of the training. A USAID project in Guatemala sought to strengthen select municipalities to better manage public resources and deliver services in an efficient and transparent manner in order to foster development. According to the project's 2017 annual report, 76 percent of the target municipalities increased their average monthly revenues by 19 percent following finance management training. A USAID project in El Salvador aimed to improve government transparency and accountability. It did so by supporting citizen oversight and government compliance with regulations and standards related to transparency, professionalism, and ethics. According to a 2018 implementer monitoring report, the project met a majority of its expected performance goals. In addition, 11 of the targeted municipalities noted in their self-assessments an increased capacity to provide access to information and promote ethics in their institutions.
U.S.-provided forensic equipment at a criminal forensic lab in Honduras.
Justice reform projects are intended to provide training, equipment, and technical assistance to the justice system to decrease impunity, combat corruption, improve prosecution and forensic capacities, and increase the efficiency and management of courts. USAID and State implemented 42 projects in justice reform from fiscal years 2013 through 2018.
Selected Examples of Justice Reform Projects
Context Indicators: Percentage of the Population with Trust in the Courts (Percentage points change, 2014 and 2018)
A USAID project sought to promote transparency, accountability, and ethics and to increase civil society participation in government through technical assistance and training. An evaluation found that the project increased awareness of these topics and led to some improvements in laws and regulations, such as improving the legal framework for anti-corruption efforts. However, the project was unable to achieve the significant changes intended due to a lack of political will. A State project in Honduras implemented activities that sought to reduce violence and homicide by increasing access to justice, strengthening institutions and local organizations' capacity to deliver legal and support services for victims of violence and rehabilitation and reintegration services for prisoners.
A mid-term evaluation found that the project successfully convened stakeholders to discuss women and children's access to justice and carried out a campaign to disseminate information on human rights and access to justice. The evaluation also found the project helped maintain, but not increase, rehabilitation and reintegration services for prisoners.
El Salvadoran police meeting with youth in a police athletic league.
Community based violence prevention (CBVP) projects are intended to reduce the levels of crime and violence, including addressing some of the root causes of insecurity. USAID and State implemented 31 CBVP projects from fiscal years 2013 through 2018. These projects sought to support anti-gang education, employment opportunities for at-risk youth, and efforts to increase institutional capacity and citizen responsibility for crime prevention in municipalities plagued by violence.
Selected Examples of Community Based Violence Prevention Projects
Context Indicators: Percentage of the Population Who Feel Safe Walking in their Neighborhood at Night (Difference in percentage points, 2014 and 2017)
U.S. agencies supported efforts in Honduras aimed at increasing access to comprehensive, long-term social, education, and health services for high-risk populations. As part of these efforts, 242,029 individuals participated in U.S. government-funded gang prevention and education programs in Honduras in fiscal year 2017. USAID projects in Honduras worked with civil society organizations to provide violence prevention services with a focus on vulnerable populations. In fiscal year 2018, USAID reported that 202 people received U.S. government-funded gender-based violence services, including health, legal, and counseling services. A USAID project in Honduras sought to lower rates of homicide and other violent crime through alliances of communities and government institutions, especially the police. A mid-term evaluation of the project reported significant decreases in homicide rates, ranging from 42 percent to 68 percent, in three of the six communities where USAID targeted its assistance. A USAID project aimed to improve educational options for out-of-school youth by offering them alternatives to criminal and gang activity. An evaluation of the project reported that more than 90 percent of the more than 15,000 individuals who enrolled in school did not pass exams to demonstrate competency at the end of courses. The evaluation further noted that 30 percent of the youth did not remain in school, which likely resulted in a small fraction of them meeting the goal of increasing their income.
Honduran Special Forces demonstrate U.S. training.
Projects to professionalize the military are intended to increase the accountability, competency, and capabilities of militaries in the Northern Triangle. DOD and State implemented a number of these activities from fiscal years 2013 through 2018. The projects provided military equipment and training to military personnel and technical assistance to Ministry of Defense personnel.
Selected Examples of Professionalize the Military and Develop Defense Capabilities Projects
DOD helped Guatemala develop a National Defense Policy and a budgeting system for its Ministry of Defense that supports transparency and accountability.
Context Indicators: Total Number of U.S.-Trained Personnel at National Leadership Levels (Fiscal year 2018)
An After Action Report of a DOD Defense Governance workshop in Guatemala noted that DOD continued to support the Guatemalan Ministry of Defense to identify national policy and strategy priorities, determine capabilities, and develop a data-driven approach to solving problems and making decisions on resources. A DOD report noted that DOD training in El Salvador that focused on fighting corruption had improved relations between military and civilian institutions.
Appendix III: Evaluations Related to Selected Sectors of U.S. Assistance to the Northern Triangle, FY 2013 through 2018
Andrade Costa, Melissa, and Irene García Palud. Evaluation Report: Mid-term Evaluation of the Program, "Reducing Violence and Homicide Through Access to Justice in Chamalecón, Satelite, and Rivera Hernández Neighborhoods of San Pedro Sula, Honduras," August 2018.
USAID/El Salvador Monitoring, Evaluation and Learning Initiative. Final Performance Evaluation of the Higher Education for Economic Growth Activity, May 17, 2018.
DevTech Systems, Inc. Programa de Monitoreo y Evaluación: Evaluación final del Proyecto Cadenas de Valor Rurales (PCVR), August 2017.
Mendéz England and Associates. Evaluación de Desempeño de Medio Término de la Actividad de Educación para la Niñez y Juventud 2011-2017, August 2017.
Management Systems International, A Tetra Tech Company. Performance Evaluation of the Partnership for Growth in El Salvador, March 20, 2017 (Revised July 24, 2017).
Asociación de Desarrollo Organizacional Comunitaria (ADOC). Mid-term Evaluation of the Investment for Educational Development of the Highlands (IDEA) Project, Save the Children/USDA, 2016.
Advisem Services, Inc. Final Evaluation Report: Final Evaluation of FINCA's Food for Progress (FFPr) in El Salvador, November 30, 2016.
The Cadmus Group, Inc. Performance Evaluation of USAID/Honduras Proparque Program, June 2016.
Boston College School of Social Work. Final Evaluation Report: Food for Education (FFE) Project – USDA Catholic Relief Services (CRS) Honduras, April 2016.
Khanti, S.A. Project Concern International, Food for Education II, Mid-term Evaluation Final Report, December 2015.
Boston College School of Social Work. Mid-term Evaluation Report: Food for Education "Learning for Life" Guatemala, October 2015.
Social Impact, Inc. Honduras Convive! Mid-term Evaluation Report, July 10, 2015.
DevTech Systems, Inc. Final Evaluation of the USAID/Alianzas Project, December 12, 2014.
Optimal Solutions Group, LLC. Partnership for Growth: El Salvador–United States (2011-2015), Mid-term Evaluation Final Report, September 30, 2014.
DevTech Systems, Inc. Informe final: Evaluación del Proyecto Apoyo en Políticas y Regulaciones para el Crecimiento Económico de Guatemala (PRS), September 20, 2014.
Optimal Solutions Group, LLC. Final Report: Does Assistance to Farmers Translate into Community Welfare Improvements? Non-Experimental Program Evaluation of USAID Assistance to Smallholder Farmers in Guatemala, August 18, 2014.
Notre Dame Initiative for Global Development. Food for Education Mid-term Evaluation, July 2014.
Democracy International, Inc. Final Report: Mid-term Performance Evaluation of the Transparent Local Governance and Improved Service Delivery Project (USAID/NEXOS) and the Decentralized Enabling Environment Project (USAID/DEE), May 2014.
International Business and Technical Consultants, Inc. Evaluation Report: Final Performance Evaluation of the USAID Municipal Competitiveness Project in El Salvador, January 29, 2014.
Development Training Services, Inc. Report on the Mid-term Performance Evaluation of the USAID Transparency and Governance Project El Salvador, December 24, 2012.
Rivera Cira Consulting, Inc. USAID/Guatemala Final Performance Evaluation for the Project Against Violence and Impunity (PAVI), December 20, 2012.
Amex International and DevTech Systems, Inc. USAID/Guatemala Mid-term Performance Evaluations for Two Economic Growth Office Projects, October 25, 2012.
International Business and Technical Consultants, Inc. Performance Evaluation of the "Improving Access to Employment Program in El Salvador," October 17, 2012.
Appendix IV: U.S. Strategy for Central America Results Architecture
The Department of State (State) and the U.S. Agency for International Development (USAID) produced the results architecture for the U.S. Strategy for Central America (Strategy). The results architecture presents the desired end-state of the Strategy; the three primary objectives of prosperity, governance, and security; and sub-objectives that support each of the primary objectives. State and USAID defined the Strategy's mission as securing U.S. borders and protecting U.S. citizens by addressing the economic, governance, and security drivers of illegal immigration and illicit trafficking, and promoting private sector investment in Central America. The results architecture's overall objective is an economically integrated Central America that is fully democratic; provides economic opportunities to its people; enjoys more accountable, transparent, and effective public institutions; and ensures a safe environment for its citizens. The Strategy's prosperity objective is to work with Central American governments to improve the business environment, create jobs, enhance food security, expand energy security, and increase U.S. investment and trade. The Strategy's governance objective focuses on reducing impunity and corruption through the creation of more transparent, efficient governments that deliver services, including justice, effectively. The Strategy's security objective includes enhancing citizen security, re-establishing state presence and security in communities at risk, scaling up violence prevention and law enforcement activities in communities, and targeting individuals most susceptible to gang recruitment. Figure 10 depicts an overall summary of the Strategy's results architecture, which focuses on the objectives of prosperity, governance, and security.
Appendix V: Comments from the Department of State
GAO Comments
1. We are not recommending that State direct DOD and USDA to monitor and evaluate projects, but rather that State collaborate with DOD and USDA to develop a more comprehensive approach to monitoring and evaluating projects that support the Strategy's objectives and that State document the results of this collaboration in the Strategy's monitoring and evaluation plan. We do not prescribe the format or content for how the Strategy's monitoring and evaluation plan might be updated. We have modified relevant sections of our report and our recommendation to make this clearer and directed the recommendation to the Secretary of State, since State coordinates implementation of the Strategy by the various agencies of the U.S. government. We found that DOD and USDA have designed and implemented programs that directly support the objectives of the Strategy.
While we acknowledge that some coordination among agencies occurs in Washington and in the Northern Triangle, we found that such coordination does not formally extend to monitoring and evaluation. We believe that our recommendation encourages greater coordination among agencies, including DOD and USDA, by ensuring that monitoring and evaluation efforts by U.S. government agencies are consistent with the monitoring and evaluation plan for the Strategy. Excluding DOD and USDA projects from the monitoring and evaluation plan for the Strategy will continue to result in an incomplete or unclear understanding of the results of U.S. assistance in the Northern Triangle. Without a complete and clear understanding of the results across all agencies involved, agencies may miss important lessons about the types of assistance that are effective in achieving U.S. objectives in the region, potentially limiting overall progress.
2. While the explanatory statement accompanying Pub. L. No. 114-113 directs State, in coordination with USAID, to develop a monitoring and evaluation plan for funds appropriated to them, we are recommending that State, as coordinator for the implementation of the Strategy, work with the other agencies to develop a more comprehensive approach to monitoring and evaluating projects that support the Strategy's objectives, and that they use the monitoring and evaluation plan that they have already created in response to the congressional direction as a place to document the comprehensive approach to monitoring and evaluation.
3. We chose to review all agencies that have allocated a significant amount of funding from their appropriations to implement projects in support of prosperity, governance, and security objectives in the Northern Triangle. State, USAID, DOD, and USDA officials confirmed that DOD and USDA projects support the objectives of the Strategy, and we believe that the inclusion of these agencies enhanced the accuracy and completeness of our reporting on the results that have been achieved from U.S. assistance as well as the gaps in the current monitoring and evaluation approach and implications for State's ability to assess results comprehensively.
4. We believe our inclusion of projects implemented from fiscal years 2013 through 2018 provided a reasonable time frame for our review because it includes projects that supported the objectives of improving prosperity, governance, and security—long-standing objectives of U.S. assistance to the Northern Triangle that predated appropriations for the Strategy, and even the Strategy itself. Including projects implemented between fiscal years 2013 and 2018 increased our ability to report on the results of agencies' projects and their overall progress toward the Strategy's objectives because projects funded since fiscal year 2016 were in too early a stage of implementation to report meaningfully on such results. However, we considered, as appropriate, any results information we were able to obtain on such projects.
5. We acknowledge that the precision of our estimates for reporting on funding allocations was limited due to inconsistent reporting of financial data by different bureaus and agencies. However, taking into consideration qualifications noted throughout our report, we believe that our reporting of funding allocations provides a reliable description of how agencies used allocated funding from fiscal years 2013 through 2018 to support prosperity, governance, and security objectives in the Northern Triangle.
6. We believe that our classification of projects under the different sectors we identified enabled us to provide a more detailed, comprehensive, and meaningful analysis of projects and related results. Because some of the sub-objectives that State and USAID developed, such as "reduce poverty," were very broad and did not lend themselves to an analysis of specific project sectors that supported the Strategy's objectives, we identified more specific sectors, including health, economic growth, and agricultural development. State and USAID officials validated the accuracy of our definitions, and we revised them as appropriate, given input from agency officials.
7. Our selection of six sectors for in-depth review of projects and results limits the generalizability of our findings to all sectors, which we note. Due to the large number of projects, sectors, and sub-objectives associated with U.S. assistance to the Northern Triangle, we determined that a case study approach was the most effective methodology for our review. We devised selection criteria for our case study to reflect a meaningful selection of projects supporting each of the three objectives across a range of sectors, agencies, and countries. Moreover, two of the sectors we selected for in-depth review—community based violence prevention and justice reform—encompass several projects classified as relating to "reducing violence at the local level" and "reducing the influence of organized crime and gangs." Thus, our report addresses results in these sectors. We omitted projects relating to police professionalization, in part, because we had ongoing work related to this sector. We acknowledge limitations with this case study approach and do not attempt to generalize results beyond the sectors we reviewed, but we believe our methodological approach provided a reasonable basis for our overall conclusions.
Appendix VI: Comments from the U.S. Agency for International Development
GAO Comments
1. We eliminated the recommendation to USAID because State plays a coordinating role in the Strategy's implementation and is well positioned to work collaboratively with officials of other agencies, including DOD and USDA. We believe our recommendation to State, in which we recommend that they work with USAID, encourages greater coordination among agencies, including DOD and USDA, to ensure that their efforts are included in a comprehensive monitoring and evaluation plan for the Strategy.
2. We believe our inclusion of projects implemented from fiscal years 2013 through 2018 provided a reasonable time frame for our review because it included projects agencies implemented to support the long-standing objectives of prosperity, governance, and security in the Northern Triangle—objectives that the U.S. government has supported under various initiatives that predated the Strategy and appropriations for the Strategy. Furthermore, including projects implemented between fiscal years 2013 and 2018 increased our ability to report on the results of agencies' projects and their overall progress toward prosperity, governance, and security because projects funded since fiscal year 2016 were in too early a stage of implementation to report meaningfully on results. However, we considered, as appropriate, any results information we were able to obtain on such projects.
3. We requested and reviewed all USAID evaluations completed during the time frame for our review—from fiscal years 2013 through 2018, or October 2012 through September 2018—to gain insight into the results of projects supporting the long-standing U.S. assistance objectives of prosperity, governance, and security in the Northern Triangle. While we reviewed four evaluations that USAID completed at the beginning of fiscal year 2013, as shown in appendix III, three of these were mid-point evaluations of ongoing projects that continued implementation in fiscal years 2013 and 2014, during the time frame for our review. Although we reviewed one final evaluation of a project that had ended prior to the beginning of fiscal year 2013, the evaluation was a key aspect of the project's implementation and lessons learned, which provided information pertinent to future USAID programming in the areas of justice reform and security. Furthermore, while our report noted examples of actions that agencies took in response to challenges to achieving progress toward prosperity, governance, and security, analysis of actions taken in the design of specific projects based on the findings and recommendations of the evaluations we reviewed was outside the scope of our review.
4. We believe that our classification of projects under the different sectors we identified provides a detailed, comprehensive, and meaningful analysis of projects and related results. Because some of the sub-objectives developed by State and USAID, such as "reduce poverty," were very broad and did not lend themselves to an analysis of specific project sectors that supported the Strategy's objectives, we identified more specific sectors, including health, economic growth, and agricultural development. State and USAID validated the accuracy of our definitions, and we revised them as appropriate, given input from agency officials. We acknowledge that our selection of a judgmental sample of six sectors for in-depth review of projects and results limits the generalizability of our findings to all sectors, which we noted throughout our draft report. However, due to the large number of projects, sectors, and sub-objectives associated with U.S. assistance to the Northern Triangle and the extensive amount of documentation to obtain and analyze for each project, we determined that this case study approach was the most effective methodology for our review. We devised our selection criteria for our case study to reflect a meaningful selection of a significant number of projects across objectives, sectors, agencies, and countries. We do not believe that omitting some sectors from our in-depth review limited the credibility of the findings of our report.
5. We believe that the use of data on the achievement of annual targets is a valid approach to assessing project results, although the agencies collecting the data may also intend to use them in making decisions about the progress of ongoing projects. These data were only one element of our analysis. We also analyzed data and information from USAID implementer progress reports, mid-point and final evaluations, and other performance reports, which provided a longer-term perspective on results. Collectively, we believe that this information provided meaningful insight into the successes and shortcomings of the projects in the sectors we reviewed.
Our report acknowledges that agency officials described progress and challenges to achieving the prosperity, governance, and security objectives, as well as the steps taken to modify projects to address such challenges. However, such modifications fell outside the scope of our analysis of results, absent documentation of the specific impact of such modifications on the achievement of objectives.
6. We reviewed completed evaluations to provide insight into project results, but excluded ongoing and planned evaluations because conclusions about project results are not available until such evaluations are completed. Similarly, our draft report acknowledged that agency officials described progress and challenges to achieving the prosperity, governance, and security objectives, as well as the steps taken to modify projects to address such challenges. However, such modifications fell outside the scope of our analysis of results, absent documentation of their specific impact on the achievement of prosperity, governance, and security objectives.
Appendix VII: Comments from the Department of Defense
GAO Comment
1. We believe that the inclusion of DOD projects significantly enhanced the accuracy and completeness of our reporting on the projects that the U.S. government has implemented in the Northern Triangle from fiscal years 2013 through 2018, and the important lessons learned from these projects on progress toward the Strategy's objectives. State and DOD officials confirmed that DOD has designed and implemented projects from its appropriation that support the security objective of the Strategy in the Northern Triangle. Furthermore, we are not recommending that State and USAID specify how DOD monitors and evaluates such projects, but rather that State and USAID collaborate with DOD to specify a comprehensive approach to the monitoring and evaluation of projects across all agencies that directly support the Strategy's objectives. Excluding DOD projects from the monitoring and evaluation plan for the Strategy could result in an incomplete or unclear understanding of the results of U.S. assistance in the Northern Triangle. Without a complete and clear understanding of the results across all agencies involved, including DOD, agencies may miss important lessons learned about the types of assistance that are most effective in this region, potentially limiting overall progress.
Appendix VIII: GAO Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, James Michels (Assistant Director), Bradley Hunt (Analyst-in-Charge), Sophie Broach, Jon Fremont, Kayli Westling, Pedro Almoguera, Neil Doherty, Mark Dowling, Justin Fisher, Christopher Mulkins, Zamir Ruli, Aldo Salerno, and John Villecco made key contributions to this report.
Why GAO Did This Study
The United States has provided assistance to the Northern Triangle of Central America for many years to address poverty, weak governance, and insecurity. Introduced in 2014 and updated in 2017, the U.S. Strategy for Engagement in Central America (Strategy) supports the objectives of improving prosperity, governance, and security. State coordinates implementation of the Strategy's objectives among agencies. This report examines: (1) the projects the U.S. government has implemented from fiscal years 2013 through 2018 to support the Strategy's objectives in the Northern Triangle, (2) what is known about project results, and (3) what is known about progress toward the objectives. GAO reviewed results for a subset of 190 projects in a nongeneralizable sample of six sectors selected based on funding, country, and objective; analyzed Strategy documents and key elements of effective strategies; interviewed officials; and conducted fieldwork in the Northern Triangle.
What GAO Found
To support their prosperity, governance, and security objectives, the Departments of State (State), Defense (DOD), Agriculture (USDA), and the U.S. Agency for International Development (USAID) allocated about $2.4 billion from fiscal years 2013 through 2018 for 370 projects in the Northern Triangle—El Salvador, Guatemala, and Honduras. USAID and State implemented most of these projects, with some supporting more than one sector and objective. For example, USAID implemented projects to address poverty, while State trained prosecutors and police to address governance and security needs. State, USAID, and other agencies reported mixed results for the 190 projects in the six sectors GAO reviewed. For example, in fiscal year 2018, USAID assisted 1,376 individuals in workforce development programs in Guatemala, exceeding the target of 1,000, while it assisted 651 individuals in Honduras, falling short of the target of 5,000. State and USAID trained 12,557 justice system personnel in the Northern Triangle, exceeding the target of 2,275. USDA rehabilitated school kitchens in Honduras as part of its school feeding program. DOD helped Guatemala establish a budget system to increase accountability for military funds, but DOD reported persistently low public trust in Northern Triangle militaries. Limited information is available about how U.S. assistance improved prosperity, governance, and security in the Northern Triangle. Agencies generally reported more information about progress toward prosperity than toward governance and security, in part because evaluations were conducted unevenly across agencies and sectors. In addition, project implementers did not consistently collect key information needed to evaluate progress, but officials noted improvements. Nevertheless, agency officials described examples of progress through technical assistance, and noted challenges, such as drought. GAO has reported that development of a monitoring and evaluation plan is key to assessing agencies' common goals and objectives, and mutually reinforcing results. While State has a monitoring and evaluation plan for the Strategy, the plan does not include activities by DOD and USDA that support the Strategy's objectives and thus does not establish a comprehensive approach to assessing progress.
What GAO Recommends
GAO recommends that State collaborate with DOD and USDA to develop a comprehensive approach to monitoring and evaluation of projects that support Strategy objectives.
State did not concur, citing a lack of authority to direct other agencies' actions. GAO modified the recommendation to clarify that a collaborative effort would allow State to include information about all relevant projects as it evaluates progress under the Strategy, as discussed in this report.
Background
This section provides information about abandoned hardrock mines, sites, and features, and about federal and state agency roles in addressing abandoned hardrock mines.
Abandoned Hardrock Mines, Sites, and Features
Federal and state agencies generally describe abandoned hardrock mines in terms of mine sites, the individual features that comprise a site, or both. However, these agencies do not all have a common definition for what constitutes an abandoned hardrock mine or mine site, as we found in 2008. The agencies generally agree on what constitutes an individual feature—for example, a feature can be a mine opening (such as a tunnel, pit, or vertical shaft), a structure, or a pile of discarded materials (known as mine tailings or waste rock) that is left behind after ore is crushed and the valuable minerals are extracted. They also generally agree that an abandoned mine site can consist of only one feature (e.g., an isolated mine shaft) or many features (e.g., an area with multiple entries, shafts, open pits, mill buildings, and tailings piles). There is no universally agreed-upon average number of features per site. Also, not all federal and state agencies count both sites and features—some agencies only count sites, some only count features, and some count both. The individual features that make up a mine site may pose hazards to physical safety and risks to human health and the environment.
Physical safety hazards. Abandoned hardrock mine features that pose physical safety hazards generally present immediate danger of injury or death. Examples of physical safety hazards include unstable mine tunnels that can collapse without warning; unmarked open mine shafts and deep pits that pose a danger to individuals who may inadvertently drive off-road vehicles into them; and deadly concentrations of gases, such as carbon monoxide and methane, inside some mines that can asphyxiate explorers. To address physical safety hazards, federal and state agencies typically focus on identifying and mitigating the risk from individual features. They may safeguard these features by, for example, filling, capping, or gating the abandoned mine openings with engineered structures. After a dangerous feature is identified, an agency may post a warning sign or erect a fence to temporarily limit access to the feature until the agency can permanently close it. According to a 2008 Interior Inspector General report, physical hazards require the least expertise to identify and evaluate and the least funding to fix or mitigate.
Environmental hazards. Mine features can also contribute to degradation of the environment and present short- and long-term risks to human health. For this report, we refer to these collectively as environmental hazards. People may be exposed to these hazards when recreating or living near an abandoned mine. Examples of environmental hazards include a mine tunnel that drains acidic water laden with heavy metals into a nearby stream; waste rock or tailings piles located along the banks or in the middle of streams that release hazardous substances such as arsenic, lead, and mercury into the water; and tailings that have dispersed into a surrounding community's soil, exposing residents to harmful substances. The extent of environmental hazards at abandoned mines can vary widely, from sites that contain one draining tunnel and a few waste rock piles to sites with extensive underground tunnel networks, many waste rock piles, and miles of dispersed tailings.
Some contaminated hardrock mine sites are included on the National Priorities List, which includes some of the most seriously contaminated sites that EPA identifies for long-term cleanup. The work required to address environmental hazards varies depending on the extent, type, and concentration of contaminants. For example, agencies may take one or more of the following actions at a site: remove waste rock or tailings from streams; develop passive water treatment systems that allow water to flow out of mines into treatment ponds; manage the waste on-site or transport it off-site for disposal; or establish active water treatment systems for the most contaminated sites that require continuous long-term monitoring, among other actions. According to EPA documents, sites with environmental hazards can cost hundreds of millions of dollars and take many years to address. For example, as of July 2019, the actual costs at the 25 most expensive mine and mineral processing sites ranged from $50 million to $583 million per site, and EPA had been working on some of the sites for over 20 years. Furthermore, agencies monitor remedies after completion to help ensure that they are achieving the desired results. Figure 1 depicts examples of physical safety and environmental hazards found at abandoned hardrock mine sites and activities that could take place to address them. Land ownership at abandoned mine sites is often complicated. The General Mining Act authorizes miners to patent, or purchase, the land associated with their mining claims; as a result, mined land often passed from federal to private ownership. Partly because of this, many abandoned mine sites are a patchwork of federal, private, and other lands, and the ownership boundaries are not always clear. Agencies refer to these sites as mixed ownership sites.
Federal and State Agency Roles in Addressing Abandoned Hardrock Mines
The Forest Service, BLM, the Park Service, EPA, and OSMRE, as well as states with abandoned hardrock mines, administer programs that address abandoned hardrock mines. Specifically, these federal and state agencies collect information about abandoned hardrock mine sites and features, and the associated hazards, on land under their jurisdictions. These agencies also safeguard the physical safety hazards and clean up the environmental hazards present at these mines. Agencies inventory and address these mines based on their different authorizing statutes, regulations, and missions.
Forest Service
The Forest Service is responsible for managing about 193 million acres of national forests and grasslands throughout the United States. The Forest Service's Safety and Environmental Restoration program oversees the agency's work on abandoned hardrock mines. The Forest Service distributed $15.9 million in appropriations to the Safety and Environmental Restoration program in fiscal year 2019. USDA also distributed about $6.9 million in fiscal year 2019 to the Forest Service to address environmental hazards at several abandoned hardrock mines. USDA seeks recovery of cleanup costs under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended (CERCLA), from responsible parties, such as current and former owners and operators of a contaminated site, to help reimburse costs at such sites. The Forest Service develops and maintains its information about abandoned hardrock mines primarily at its regional, national forest, and district offices.
In general, the Forest Service tracks physical safety hazards by feature and environmental hazards by mine site. As of November 2019, the Forest Service did not have a national inventory of abandoned hardrock mine features or sites and the physical safety hazards they may pose. However, information about environmental hazards at abandoned hardrock mine sites is contained in a database maintained by USDA that tracks progress on all hazardous waste cleanup projects funded by the department, including projects at abandoned hardrock mines. Forest Service regional and national forest staff inventory, assess, mitigate, and monitor the physical safety and environmental hazards at abandoned hardrock mine sites on Forest Service-managed land as part of their daily responsibilities. BLM BLM manages 245 million acres of public lands in the United States, located primarily in the western states and Alaska. BLM’s Abandoned Mine Lands program is aimed at protecting public safety and reducing liabilities by eliminating or minimizing physical safety and environmental hazards posed by abandoned mines, among other objectives. BLM’s Hazardous Materials Management program also addresses environmental hazards at all types of contaminated sites, including abandoned hardrock mines. In fiscal year 2019, BLM received a total of $38.5 million in appropriations for these programs. In addition, Interior distributed $2.7 million to BLM in fiscal year 2019 from the Central Hazardous Materials Fund—an Interior account that supports response actions undertaken at contaminated sites pursuant to CERCLA—for work at abandoned hardrock mines. BLM maintains a national inventory of abandoned hardrock mines in its Abandoned Mines and Site Cleanup Module database to help track information about sites, features, and hazards. However, as of 2019, BLM officials said the agency is shifting from tracking information by site, which can be subject to interpretation, to primarily tracking and reporting abandoned mine features. In addition to its abandoned mine database, BLM submits a subset of information about its contaminated abandoned mines to Interior for inclusion on the department’s list of contaminated sites, the Environmental and Disposal Liabilities list. BLM state, district, and field office staff inventory, assess, and mitigate the physical safety and environmental hazards at abandoned hardrock mines on BLM-managed land while conducting their daily work. Park Service The Park Service manages more than 85 million acres in 419 park units across the country. The Park Service addresses abandoned hardrock mines on this land through an abandoned mine safety program and an environmental compliance and cleanup program. In fiscal year 2019, the Park Service received $5 million in appropriations to address physical safety hazards on abandoned mineral lands. Interior also distributed $890,000 from the Central Hazardous Materials Fund to the Park Service to address contaminated abandoned hardrock mine sites in fiscal year 2019. The Park Service recovers costs from responsible parties at abandoned mine sites through CERCLA. The Park Service maintains information about abandoned hardrock mine sites and features in its Abandoned Mineral Lands Database. In 2013, the Park Service completed a system-wide inventory and assessment project to identify abandoned mines on lands it manages.
In addition, the Park Service submits information to Interior about contaminated abandoned mine sites for inclusion on the Environmental and Disposal Liabilities list. Park Service headquarters and regional offices may assist park units in addressing hazards and preserving cultural resources and wildlife habitat at abandoned hardrock mines on Park Service-managed land. EPA EPA administers the Superfund program, which was established under CERCLA to address the threats that contaminated waste sites pose to human health and the environment. As part of the Superfund program, EPA oversees and conducts investigations and cleanup actions at a variety of hardrock mine and mineral processing sites on private and other nonfederal lands and mixed ownership sites. The Superfund program operates on the principle that polluters are to pay for the cleanups rather than passing on the costs to taxpayers. EPA may compel parties statutorily responsible for contamination at sites to clean them up or to reimburse EPA for its cleanup costs. Responsible parties at abandoned hardrock mines could include current or former owners or operators of a site; persons who arranged for disposal, treatment, or transportation of hazardous substances; or the transporters of hazardous substances. To address contaminated sites—including abandoned mines—that do not have viable responsible parties, EPA uses funding from appropriations to the Superfund program, which were approximately $1.1 billion in fiscal year 2019. EPA maintains information about abandoned hardrock mine and mineral processing sites on nonfederal lands, including tribal lands, and mixed ownership sites in its national database of contaminated sites, the Superfund Enterprise Management System. EPA counts these mines and processing facilities by site and not by individual mine feature. According to EPA officials, many of the mine sites included in the database may contain tens to hundreds of individual features. EPA also does not count sites that pose solely a physical safety hazard since they fall outside the Superfund program’s mission. In addition, EPA and authorized states address certain abandoned hardrock mines in accordance with the Clean Water Act. Specifically, EPA and state agencies regulate discharges of pollutants to waters of the United States at abandoned mine sites under the act, such as mine tunnels draining contaminated water that exceeds water quality standards. To comply with the act, an entity operating a cleanup project involving a draining mine tunnel or other concentrated discharge source must obtain a permit, under which the discharge must be treated or managed to meet and maintain applicable water quality standards. OSMRE OSMRE’s Abandoned Mine Land program primarily focuses on reclaiming and restoring land and water resources degraded by past coal mining, but the program also supports reclamation at abandoned hardrock mines. In accordance with the Surface Mining Control and Reclamation Act of 1977, as amended, OSMRE can provide grants for the reclamation of certain abandoned hardrock mines under limited circumstances—in particular, after a state or Indian tribe certifies that it has cleaned up its abandoned coal mine sites and the Secretary of the Interior approves the certification.
Absent such certification, OSMRE can award these grants at the request of a state or Indian tribe where necessary to protect the public health, safety, general welfare, and property from extreme danger of adverse effects from the abandoned hardrock mine, provided the Secretary of the Interior grants the request. In fiscal year 2019, OSMRE distributed a total of $310.5 million in grants to states and tribes to address abandoned coal and non-coal mines. OSMRE does not maintain an inventory of abandoned hardrock mines since the Abandoned Mine Land program’s primary objective is to address abandoned coal mines. States that receive grants from OSMRE to address non-coal abandoned mines may maintain their own inventories of abandoned hardrock mines. According to OSMRE budget documents, western states in particular often use OSMRE grants to address physical safety hazards at high-priority abandoned hardrock mines for which there is no other source of federal funding. State Agencies States identify and address physical safety and environmental hazards at abandoned hardrock mines on state, county, and private lands within their borders, often through state abandoned mine programs. States may also work with federal agencies to identify and address these hazards on federal land. Some state agencies manage or oversee cleanup activities under CERCLA at abandoned hardrock mines. State agencies may receive funds to support their work at abandoned hardrock mines from nonfederal and federal sources, including state-appropriated funds, responsible parties under CERCLA, and cooperative funding agreements or grants from federal agencies. States with abandoned hardrock mines generally maintain databases or inventories that identify the locations of these mines and any associated hazards. Federal and State Agencies Identified Several Hundred Thousand Abandoned Hardrock Mine Features, Over 100,000 of Which May Be Hazardous As of May 2019, the Forest Service, BLM, the Park Service, and EPA together identified in their databases at least 140,652 abandoned hardrock mine features—of which over 60 percent are known to pose or may pose physical safety or environmental hazards. Officials from 13 western states also identified from their state databases about 246,000 abandoned hardrock mine features on federal and nonfederal lands within their states, including about 126,000 features that pose physical safety or environmental hazards. Some state information overlaps with federal agency information, but the extent of overlap is unknown, according to state officials. Federal and state officials also estimated that there likely are hundreds of thousands of additional abandoned hardrock mine features that they have not yet captured in their databases. Federal Agencies Identified At Least 140,652 Abandoned Mine Features, about 89,000 of Which Pose or May Pose Physical Safety or Environmental Hazards The Forest Service, BLM, the Park Service, and EPA identified in their databases at least 140,652 abandoned hardrock mine features, as of May 2019. Of this amount, BLM identified 103,029 features and the Park Service identified 20,675 features. As previously noted, the Forest Service and EPA track abandoned mines by site and not by features associated with the sites; the Forest Service identified 16,375 sites and EPA identified 573 sites. According to agency officials, many abandoned hardrock mine sites contain more than one feature.
Since there is no agreed-upon average number of features per site, we counted the minimum of one feature per Forest Service and EPA site for the purpose of this analysis. As a result, the total number of features identified by federal agencies likely is underestimated. Of the 140,652 total features, about 89,000 features are known to pose or may pose a physical safety or environmental hazard, according to information in the federal agencies’ databases. Specifically, agencies confirmed 7,802 features pose a hazard, of which 6,439 pose a physical safety hazard and 1,363 pose an environmental hazard; and identified 81,541 features with an unconfirmed hazard (whereby agency staff had not assessed current conditions in person to confirm the hazard), of which 60,279 may pose a physical safety hazard and 21,262 may pose an environmental hazard. Table 1 shows information about abandoned hardrock mine features that pose or may pose physical safety and environmental hazards, by agency. However, agency officials said there could be approximately 393,000 more abandoned hardrock mine features on federal land that the agencies identified on historic maps but have not captured in a central database. Of this amount, BLM officials estimated there are about 380,000 abandoned hardrock mine features on the land BLM manages that are not captured in its abandoned mine database. Park Service officials did not estimate a number of additional abandoned mines that might be in Park Service units; they said they believe their database is relatively comprehensive. Given the Forest Service and BLM estimates of additional features not found in their databases, the total number of estimated and identified abandoned hardrock mine features on lands within Forest Service, BLM, Park Service, and EPA jurisdiction is at least 533,652. Figure 2 depicts federal agency information about the numbers of confirmed and unconfirmed physical safety and environmental hazards on the lands under these agencies’ jurisdictions, in relation to the total estimated abandoned hardrock mine features, as of May 2019. To develop more comprehensive information about the total number of abandoned hardrock mine features on the lands they manage, the Forest Service and BLM are taking steps to improve their databases, including capturing information about abandoned mines that are not currently in a database. Specifically, Forest Service officials told us that they are establishing a centralized geospatial database that will consolidate information about abandoned hardrock mine features with physical safety hazards that is currently maintained in regional and national forest offices. They said they expect the new database will be populated in fiscal year 2020 and that it will provide regional and headquarters managers with better information about the extent of features with physical safety hazards. In addition, BLM officials said that field staff have been identifying and adding new features each year to its database, prioritizing features located close to communities and recreational areas. BLM officials said that they plan to update the database and communicate this information to field staff in fiscal year 2020 to help ensure staff enter information about new features into the database consistently.
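The counts above are internally consistent. As an illustrative arithmetic check (ours, not the agencies’ reporting), the totals cited in this section reconcile as follows:

```latex
% Illustrative reconciliation of the feature counts cited in this section.
% Requires \usepackage{amsmath}.
\begin{align*}
\text{Confirmed hazardous features: } 6{,}439 + 1{,}363 &= 7{,}802\\
\text{Unconfirmed hazardous features: } 60{,}279 + 21{,}262 &= 81{,}541\\
\text{Total hazardous features: } 7{,}802 + 81{,}541 &= 89{,}343 \approx 89{,}000\\
\text{Identified features: } 103{,}029 + 20{,}675 + 16{,}375 + 573 &= 140{,}652\\
\text{Identified plus estimated features: } 140{,}652 + 393{,}000 &= 533{,}652
\end{align*}
```

The 89,343 hazardous features represent about 63 percent of the 140,652 identified features, consistent with the “over 60 percent” figure cited earlier.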
Agencies in 13 States Identified about 246,000 Abandoned Mine Features in Their States, Including about 126,000 That Pose Physical Safety or Environmental Hazards Officials with the 13 western states that we reviewed identified about 246,000 abandoned hardrock mine features on the federal, state, and private lands within their state borders, as of May 2019. As with the federal agencies, officials with five of the 13 states provided information about total mine sites and not features; as a result, we counted the minimum of one feature for each reported mine site for the purpose of our analysis. Of the 246,000 total features in these states, state officials estimated that about 115,000 features pose a physical safety hazard and about 11,000 features pose environmental hazards. State officials said that many of the features identified in their databases were also likely to be found in the federal agencies’ databases, but the extent of overlap is unknown. Specifically, the state officials’ estimates include abandoned mine features on federal, state, and private land because states may work on abandoned hardrock mines on both federal and nonfederal lands. However, state officials are not always able to quantify the number of mine features on federal land versus private or state land. For example, some states’ inventories are based on information from maps and databases that did not always include details about land ownership boundaries, which are necessary to differentiate on what lands the features are located. In addition, in instances in which the states could identify the features that are on federal land, such as in Utah and Nevada, state officials did not know how many of those features were also captured in federal agency databases. Similar to the federal agencies, officials with the 13 states estimated that the actual number of abandoned hardrock mine features in their states is higher than the information contained in their databases. State officials noted that their inventories are incomplete, in part because they have not conducted comprehensive, on-the-ground work to identify all the abandoned mine features in their states. They primarily focus on addressing the hazards they have already identified. Nevertheless, state officials estimated that the number of abandoned hardrock mine features in the 13 states could total more than 620,000. For example, California officials we interviewed said field staff had identified more than 70,000 individual abandoned mine features in the state as of May 2019. However, based on information from topographic maps, they estimated that 274,000 total mine features exist statewide, with an undetermined number of physical safety and environmental hazards. The states’ estimates of abandoned hardrock mine features reflect the different ways states collect information about abandoned hardrock mines. For example, California and Nevada officials explained that they count each individual abandoned mine feature in their states, whereas Colorado and Utah officials said that they only collect information about potentially hazardous features. Colorado officials estimated that there are 23,000 potentially hazardous abandoned hardrock mine features in the state. However, if the state were to count all of the features in Colorado, including shallow prospecting pits that are unlikely to pose a physical safety hazard, the officials said the total estimate would be hundreds of thousands of mine features. 
In addition, some states, including Idaho and California, reported numbers of abandoned mine features that included non-hardrock mines, such as sand and gravel pits, because their abandoned mine programs address different types of abandoned mines. Agencies Spent about $300 Million Annually from Fiscal Years 2008 through 2017 to Address Abandoned Hardrock Mines and Estimate Billions More in Future Costs Federal agencies spent, on average, about $287 million annually identifying, cleaning up, and monitoring abandoned hardrock mines, for a total of about $2.9 billion, from fiscal years 2008 through 2017. The Forest Service, BLM, the Park Service, EPA, and OSMRE primarily worked in partnership with other federal and state agencies and some nongovernmental stakeholders when addressing these mines, according to federal officials. Officials from the 13 western states we reviewed estimated spending an additional total of about $117 million in nonfederal funds over the 10-year period, or an average of nearly $12 million annually, to address abandoned hardrock mines within their states. Federal agency officials said they estimated it would cost billions more to address abandoned hardrock mines in the future. Federal Agencies Spent about $287 Million Annually from Fiscal Years 2008 through 2017 to Address Abandoned Hardrock Mines, Collaborating with Other Agencies and Stakeholders Federal agencies spent, on average, about $287 million annually, or a total of about $2.9 billion, to identify, clean up, and monitor hazards at abandoned hardrock mines from fiscal years 2008 through 2017. (See fig. 3.) EPA spent 80 percent of the total federal expenditures—about $2.3 billion—to address environmental hazards. Of the $2.9 billion in total federal expenditures, approximately $1 billion was reimbursed by responsible parties. Appendix II contains additional information about Forest Service, BLM, Park Service, EPA, and OSMRE expenditures by state. The agencies used some expenditures to address physical safety hazards but used most to address environmental hazards at abandoned hardrock mines. Physical safety hazards. The Forest Service, BLM, and the Park Service spent a total of over $105 million from fiscal years 2008 through 2017 to address mine features that posed physical safety hazards. According to officials with these agencies, this included filling in holes and installing gates at tunnels and other mine openings to allow bats, tortoises, and other wildlife to continue accessing important habitat. (See fig. 4.) Officials also said that their expenditures include funds provided to state agencies and others through cooperative funding agreements for projects where the state or other entity managed the work at the sites. Environmental hazards. From fiscal years 2008 through 2017, the Forest Service, BLM, the Park Service, and EPA spent a total of about $2.5 billion to address environmental hazards at abandoned hardrock mines. According to agency officials, work at these sites included conducting initial site investigations, designing and implementing remedies to address contamination, operating water treatment facilities, and monitoring completed cleanup actions. The agencies either managed this work themselves or provided funding through cooperative agreements to state agencies or others to manage the work. EPA spent about $2.3 billion at 394 sites, with about 40 percent spent at five sites. Of EPA’s total expenditures, $983 million (43 percent) was reimbursed by responsible parties. 
In addition, the Forest Service, BLM, and the Park Service spent a total of about $232 million to address various environmental hazards on lands they manage, of which about $40 million was reimbursed by responsible parties. Further, OSMRE reported that 12 states and two Indian tribes spent approximately $190 million in OSMRE grants to address abandoned hardrock mines and other non-coal sites from fiscal years 2008 through 2017. OSMRE officials did not specify how much of the $190 million was spent to address physical safety hazards versus environmental hazards since the agency does not require states and tribes to report such information. Table 2 shows federal agency expenditures by agency and type of hazard. Forest Service, BLM, and Park Service officials we interviewed said they conducted most of their work to address physical safety and environmental hazards at abandoned hardrock mines in collaboration with state agencies, nonfederal stakeholders, and other federal agencies, including EPA. These officials noted that it is important to partner with state agencies and EPA because many of the abandoned mine sites are of mixed ownership and the federal land management agencies generally do not have authority to address mine features on nonfederal lands. Federal agency officials said it is also helpful to pursue partnerships at mixed ownership sites to leverage limited funding. For example, Forest Service and BLM officials told us that they have partnered with Trout Unlimited, a nongovernmental organization focused on conserving freshwater fisheries and their watersheds, on projects to address environmental hazards at mixed ownership abandoned hardrock mine sites in several western states. Examples of projects that federal agencies undertook with partners include: Flat Creek-Iron Mountain Mine and Mill, Montana. Since 2014, the Forest Service has coordinated with EPA and the state of Montana to address contamination from this abandoned mine and mill site on private and Forest Service-managed lands upstream from the town of Superior. Silver, lead, and other hardrock mining operations left mill tailings piles that contaminated soil, groundwater, and surface water in Flat Creek, which flows for 3.5 miles from the mine site through Forest Service and private lands into the town. The local government and individuals also used tailings as fill material in yards, roadways, and other locations, including the high school track. The Forest Service took the lead on the portion of the site on the land it manages, and EPA and the state took the lead on various nonfederal portions of the site. At the state’s request, in 2000, EPA started assessing and cleaning up 79 residential and community properties in Superior; it completed this effort in 2013. In 2017, the state removed mine tailings from the private lands along Flat Creek with EPA oversight. As of November 2019, the Forest Service was working with Trout Unlimited and the state to remove the mine tailings from the banks of Flat Creek on Forest Service land. Trout Unlimited representatives and Forest Service officials said they are also planning to reconstruct the stream channel and floodplains and restore fisheries habitat in the summer of 2020 after the tailings are removed. Gold Butte National Monument, Nevada. In 2018, BLM and the Nevada Division of Minerals worked with other federal, state, and local agencies to address 40 features that posed physical safety hazards within the historic Gold Butte Mining District in southern Nevada.
The abandoned mine features were within the BLM-managed Gold Butte National Monument, which was established in 2016. According to project documents, the anticipated increase in recreation as a result of the monument designation prompted BLM and the state to evaluate the area for potential physical safety hazards. The 40 abandoned mine features included horizontal mine tunnel openings and deep vertical openings. BLM and the Nevada Division of Wildlife conducted cultural and wildlife surveys, respectively, to help determine appropriate closure methods. The state then filled the hazardous openings with foam and rock or installed gates that provide access to bats and desert tortoises. The local county government also contributed to the installation of the bat gates. Agencies in 13 States Estimated Spending a Total of about $117 Million of Nonfederal Funds from Fiscal Years 2008 through 2017 to Address Abandoned Hardrock Mines Officials from the 13 states in our review estimated spending about $117 million in total, or an average of nearly $12 million annually, of nonfederal funds from fiscal years 2008 through 2017 to address physical safety and environmental hazards at abandoned hardrock mines within their states. Spending in three of the 13 states—California, Colorado, and Idaho—represented over 86 percent of the total nonfederal expenditures. Of the approximately $117 million, states spent about $26 million addressing physical safety hazards and about $91 million addressing environmental hazards. (See table 3.) State officials said that the sources of nonfederal funds that the states spent to address abandoned hardrock mines included (1) state-generated funds and (2) funding from settlements with responsible parties. State-generated funds. Officials from eight of the 13 states reported that they expended revenue raised by the state government to work on abandoned hardrock mines. Revenue sources include mine license taxes; royalties on oil and gas, hardrock mining, and other mineral extraction; and other sources, such as the state general fund. For example, officials from the California Department of Conservation said the agency spent funds generated by state fees on active gold and silver operations to address physical safety hazards at abandoned mines on public lands. In addition, Colorado officials said they spent funds from a state severance tax collected on oil and gas, coal, metallic minerals, and other mineral production to address physical safety and environmental hazards. Settlements with responsible parties. Officials from five of the 13 states reported that they spent funds received from settlements with responsible parties to either conduct cleanup actions or oversee the responsible parties’ work to address environmental hazards. For example, from fiscal years 2008 through 2017, the state of New Mexico spent over $3.8 million that it had collected from responsible parties at two abandoned hardrock mine sites, according to state documents. Nevada and Washington officials said that their agencies’ expenditures to address environmental hazards during the 10-year period were entirely funded by collections from responsible parties. State officials we interviewed said they spent these nonfederal funds to address abandoned hardrock mines located primarily on private, county, state, or other nonfederal lands, including at mixed ownership sites. Officials from two of the 13 states (Colorado and Nevada) said they also spent state-generated funding to address hazards on federal land.
Officials from the Nevada Division of Minerals’ abandoned mine program said that they generally spend about 80 to 90 percent of the program’s nonfederal funding addressing physical safety hazards on federal land. These officials explained that fees from unpatented mining claims on federal land are the division’s main funding source and, therefore, the state spends most of this funding to address hazards on federal land. Officials with the 13 states also told us that, in addition to spending about $117 million in nonfederal funds over the 10 years, states spent more than $440 million they received from federal agencies, primarily through grants and cooperative agreements, during this period. Officials with seven states reported that they receive significantly more federal funds than nonfederal funds to work on abandoned hardrock mines and that federal funding is critical to addressing hazards at these mines. Federal Agencies Estimated Billions More Would Be Needed to Address Abandoned Hardrock Mine Hazards The Forest Service, BLM, the Park Service, and EPA estimated that their future costs to inventory and address physical safety and environmental hazards at abandoned hardrock mines would be in the billions of dollars. Each agency has generated some information about estimated future costs using a variety of methods and covering a range of activities. Given the level of uncertainty associated with the estimates, they likely understate the amounts that will be needed to comprehensively inventory and address these hazards. Estimated Costs to Inventory The Forest Service and BLM estimated that it could cost over $650 million to finish inventorying abandoned hardrock mines on lands they manage. Specifically, Forest Service information indicated it could cost about $147 million to complete the agency’s inventory, which includes identifying potential environmental hazards at 15,247 sites as well as the locations and conditions at approximately 13,000 sites not currently captured in a database. In addition, BLM officials estimated that it would cost about $510 million to complete the agency’s inventory of abandoned hardrock mines. This estimate includes about $130 million to evaluate approximately 66,000 features identified as posing an unconfirmed physical safety or environmental hazard. It also includes another $380 million to confirm the locations and presence of hazards at the approximately 380,000 additional features that may be on BLM-managed land but are not in its database. The Park Service and EPA did not provide estimates for future inventory work. Park Service officials said they have not estimated costs for additional inventory work because they believe that their inventory is largely comprehensive. EPA officials explained that the agency does not manage lands, so it does not work to identify the existence of contaminated abandoned mines. Rather, EPA relies on external sources, such as state agencies and local governments, to alert it to potentially contaminated sites on nonfederal lands that may need attention. Estimated Costs to Address Physical Safety Hazards BLM and the Park Service estimated it could cost nearly $5 billion to address the physical safety hazards at abandoned hardrock mines on the lands they manage, and the Forest Service has not estimated this amount.
Specifically, BLM estimated it could cost about $4.7 billion to fill in, gate, or otherwise address the nearly 65,000 features it has identified with confirmed and unconfirmed physical safety hazards and the estimated 380,000 additional features that are not yet included in the agency’s database. Park Service officials said they estimated that it would cost about $86 million to address the physical safety hazards at the abandoned hardrock mines identified in the agency’s database. These officials said that they plan to revise this estimate once they have better information about the actual costs to close the features where they are currently working. The Forest Service and EPA did not have estimates for addressing physical safety hazards. The Forest Service has not comprehensively estimated these costs, although the individual forests identify priority projects for spending each year, according to agency officials. EPA has not separately estimated costs to address physical safety hazards since those costs are included in its estimates to address environmental hazards. Estimated Costs to Address Environmental Hazards The Forest Service, BLM, the Park Service, and EPA have partly estimated costs to address environmental hazards at abandoned hardrock mines. Agency officials said that they do not have comprehensive estimates, in part because they have not yet selected the cleanup remedy at numerous sites—information they need to develop detailed estimates—nor have they identified all of the contaminated sites that will need to be addressed. The officials explained that a remedy to address an abandoned mine site with one waste rock pile (e.g., removing the pile from a creek and constructing a repository for it) is different from a remedy needed to address a site with perpetually draining mine tunnels, which could include operating and maintaining water treatment systems over the long term. As a result, the costs of cleanup remedies can vary from hundreds of thousands to hundreds of millions of dollars per site. Estimates of future costs to address environmental hazards at abandoned hardrock mines and what the estimates included varied by agency: Forest Service. Forest Service and USDA officials said that they estimated in 2014 that it could cost about $6 billion to address environmental hazards at 6,600 abandoned hardrock mine sites on Forest Service-managed land. This estimate includes costs to assess the extent of contamination, search for responsible parties, design and implement an action to remove a small waste rock or tailings pile, and monitor and maintain each site for 30 years after the cleanup is complete. According to the estimate, costs to maintain the completed sites make up half of the $6 billion in estimated future costs. These officials also said they assumed that all 6,600 sites are relatively simple rather than complex sites with more extensive contamination. In developing this estimate, the Forest Service did not assume that responsible parties would cover any of these costs. BLM. BLM estimated a portion of the costs associated with addressing environmental hazards at abandoned hardrock mines on BLM-managed land, since BLM officials said there are too many unknowns and unique circumstances at each feature to comprehensively estimate total costs. These officials said the agency has estimated costs for some sites with confirmed environmental hazards in accordance with Interior’s environmental liabilities reporting guidance.
Specifically, as of June 2019, BLM estimated that future costs to address environmental hazards at 105 abandoned hardrock mine sites on BLM-managed land range from $61 million to about $265 million. Interior and BLM officials explained that these costs do not represent all future costs needed to clean up these sites. Instead, the range includes the future costs that the agency determines are reasonably estimable at the time for these sites. In some cases, these costs are limited to the cost of conducting a study if the agency has not selected a cleanup remedy. As a result, officials said they expect that BLM’s estimate of total future costs will increase once the agency selects the cleanup remedies and estimates their costs. Officials also said they have not estimated future costs for sites where the agency has not determined the type or extent of the contamination or where BLM is not likely to fund the cleanup, for example, because a responsible party may pay for it. Park Service. Similar to BLM, Park Service officials estimated the future costs associated with addressing environmental hazards at 50 contaminated abandoned hardrock mines, based on Interior’s guidance. As of June 2019, the Park Service estimated that these future costs range from $21 million to $35 million, exclusive of any reimbursements from responsible parties. The Park Service did not estimate the future costs to address 19 additional sites that the agency identified as posing environmental hazards because either work at these sites is in the early stages, the agency was unable to estimate costs, or the Park Service is not likely to fund the cleanup, according to Park Service and Interior officials. EPA. EPA officials told us that they do not have a comprehensive estimate of costs to clean up hardrock mines. Specifically, officials said EPA tracks planned obligations to be incurred for sites where the agency anticipates taking action within the next 3 years to help support its budget development process. As of fiscal year 2018, EPA identified about $519 million in planned obligations for 115 hardrock mine or mineral processing sites. EPA officials said the planned obligations do not necessarily reflect the total estimated costs remaining at a site because the agency typically requires its regions to report known planned obligations for 3 years, or longer, if available. According to EPA data, future costs to address hardrock mines likely will exceed these obligations. For example, EPA did not report planned obligations for 423 mine and mineral processing sites where the agency has not completed site assessment work or selected a cleanup remedy. According to EPA officials, they generally do not plan obligations for future cleanup work while conducting an assessment. However, they said that if an assessment reveals a need for a time-sensitive response at a site, the agency may fund it. EPA officials also told us that they expect responsible parties to pay a portion of the future costs associated with these sites, but that amount is unknown. 
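The inventory cost estimates cited earlier in this section also reconcile arithmetically; the following is an illustrative check (ours, not the agencies’), using the reported figures:

```latex
% Illustrative check of the agencies' inventory cost estimates.
% Requires \usepackage{amsmath}.
\begin{align*}
\text{BLM inventory estimate: } \$130\text{ million} + \$380\text{ million} &= \$510\text{ million}\\
\text{Forest Service plus BLM: } \$147\text{ million} + \$510\text{ million} &= \$657\text{ million}
\end{align*}
```

The $657 million sum underlies the “over $650 million” figure, and BLM’s $380 million component implies an average of roughly $1,000 per feature to confirm the locations and presence of hazards at the approximately 380,000 features not yet in its database.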
Federal and State Agencies and Stakeholders Cited Availability of Resources and Legal Liability Concerns as Factors That Limit Efforts to Address Abandoned Hardrock Mines Federal agency officials, state officials from three selected states (Colorado, Montana, and Nevada), and stakeholders cited availability of resources and legal liability concerns as factors that limit efforts to identify, clean up, and monitor hazards at abandoned hardrock mines. Federal and state officials said their backlog of work on abandoned mines is greater than current staff and budget levels. In addition, state agency officials and other stakeholders we interviewed, such as nongovernmental organizations and mining companies, have limited their participation in projects to address environmental hazards at abandoned mines because of concerns about their potential legal liability under CERCLA and the Clean Water Act. Federal and State Officials Cited Availability of Resources as a Limiting Factor All of the officials we interviewed from the Forest Service, BLM, the Park Service, and EPA, as well as from Colorado, Montana, and Nevada, cited availability of resources as a factor that limits their efforts to identify and address the physical safety and environmental hazards at abandoned hardrock mines. Representatives from state associations and nongovernmental organizations we interviewed also cited this factor as limiting federal and state efforts. Federal and state officials said that their backlog of work on these mines far exceeds their current staff and budget levels. For example, BLM officials estimated that with the agency’s current abandoned mine budget and staff resources, it could take up to 500 years to confirm the presence of physical safety or environmental hazards at the approximately 66,000 features in its database and the estimated 380,000 features not yet captured in its database. Officials from Colorado and Montana and representatives from a state association noted that these two states regularly receive reclamation funding from OSMRE to address abandoned coal mines in their states. As a result of having access to such funds, five states, including Montana and Wyoming, as well as three tribes have certified that they have addressed all of their known priority abandoned coal mines. These officials also noted that there is not a similar or consistent source of funding for states to address hazards at abandoned hardrock mines. In Nevada, although state-collected mining fees contribute to addressing safety hazards at abandoned hardrock mines, state officials said they do not have a consistent source of funding to address environmental hazards. As a result, Nevada officials explained that they tend to work primarily on mines where there is a viable responsible party to fund the cleanup. However, one official said that most of the approximately 190 abandoned hardrock mine sites in the state that pose or may pose environmental hazards do not have a viable responsible party. Federal and state agency officials described several steps they have taken to work more efficiently within existing limited resources. For example, federal agency officials said they prioritize proposed projects to address abandoned mines that pose the highest safety and environmental risks. In addition, federal officials explained that they have established several formal mechanisms for national and local collaboration to facilitate leveraging resources.
For instance, federal and state officials working in Colorado said they formed a working group in 2010 to jointly identify and prioritize watersheds that have been contaminated by abandoned hardrock mines. The agencies work collaboratively to evaluate the extent of contamination in each watershed, leading to a more holistic approach to addressing contamination, according to EPA and Colorado state officials. Regional Forest Service officials we interviewed who also work outside of Colorado said the group is a national model for collaboration and efficient use of resources. Forest Service, BLM, Park Service, EPA, and state officials also said that they work to leverage federal and state resources by searching for responsible parties to contribute funding to their efforts at abandoned hardrock mines. However, officials told us that identifying such parties is difficult and can be resource intensive given the length of time that has elapsed since the mines were abandoned and the lack of a clear chain of custody and land ownership boundaries at mine sites. State Officials and Stakeholders Cited Legal Liability Concerns as a Limiting Factor All of the state officials and nearly all of the stakeholders from nongovernmental organizations, state associations, and industry we interviewed cited concerns over legal liability—that is, being held legally responsible for addressing environmental contamination—as a factor that limits efforts to address certain abandoned hardrock mine hazards on nonfederal land. Specifically, liability concerns can prevent third parties—entities who offer assistance in addressing environmental hazards that they did not create and are not legally required to clean up—from taking actions to help address such hazards that are on private land and on nonfederal portions of mixed ownership sites. These parties are often referred to as Good Samaritans and may include state agencies, nongovernmental organizations, local governments, private landowners, and mining companies, among others. Federal and state officials and stakeholders we interviewed said that Good Samaritans have avoided taking certain cleanup actions—in particular, addressing mine tunnels that perpetually drain highly contaminated water—at abandoned hardrock mines because they are concerned about potentially being held legally responsible under CERCLA and the Clean Water Act. Specifically, a Good Samaritan undertaking cleanup actions at an abandoned hardrock mine might become a responsible party under CERCLA and thereby would be responsible for the entire cost of cleaning up the site. As a result, representatives from an industry association and a nongovernmental organization told us that while they are interested in addressing contamination on private land in the West, they generally have not done so, in part because of concerns about becoming responsible under CERCLA for cleaning up all of the contamination present at the site. In addition, a Good Samaritan undertaking cleanup actions to address draining mine tunnels may be required to do so in accordance with a discharge permit under the Clean Water Act. Complying with such a permit requires that the cleanup meet and maintain water quality standards, which can be expensive and may require perpetual water treatment. State officials and stakeholders explained that meeting and maintaining such standards at certain mines is difficult because of naturally occurring heavy metals and continual drainage from the mines.
They said they are interested in undertaking smaller-scale projects to address mine tunnel drainage that may significantly improve water quality and aquatic habitat but would not fully meet water quality standards. However, Colorado and Montana state officials and various stakeholders said they generally decide not to undertake such projects, even if they could make incremental improvements, because of the risk of being held responsible for meeting and maintaining water quality standards in perpetuity. To encourage nongovernmental organizations, other stakeholders, and states to participate in abandoned hardrock mine projects at mixed ownership sites and on other private land, EPA developed administrative tools aimed at limiting Good Samaritans’ CERCLA and Clean Water Act liability. In 2007, EPA developed guidance for issuing “comfort/status letters” to Good Samaritans willing to perform cleanup work under EPA oversight and for entering into settlement agreements—legally enforceable documents signed by EPA and a Good Samaritan that include a federal covenant not to sue under CERCLA in exchange for cleanup work. In 2012, EPA also issued guidance stating that, as a general matter, the agency would not require a Good Samaritan to obtain a Clean Water Act discharge permit if the Good Samaritan successfully completes a cleanup action under a comfort/status letter or settlement agreement with EPA. Good Samaritans have participated in some projects at abandoned hardrock mines using EPA’s administrative tools. As of January 2020, EPA had issued four comfort letters and entered into three settlement agreements, generally to address hazards at sites that did not require a Clean Water Act permit. Some state officials and stakeholders we interviewed said they have not pursued using EPA’s administrative tools because, in part, these tools do not sufficiently alleviate liability under the Clean Water Act. For example, they explained that the tools and guidance provide reassurance that EPA may not sue the Good Samaritan but do not ensure that certain outside parties will not sue to require that they meet water quality standards. State officials and stakeholders we spoke with said that they believe that resolving the concerns over CERCLA and Clean Water Act liability may require federal legislation. However, other stakeholders expressed concerns that legislative changes, such as amending CERCLA or the Clean Water Act, could inadvertently result in weakening the existing environmental protections in these and other laws or could limit the ability of outside parties to enforce their provisions. Since 1999, several bills have been introduced that would have responded to liability concerns, but as of December 2019, none had been enacted. State officials and stakeholders have been involved in efforts to draft legislation that would address liability concerns, but the interested parties have disagreed about the specific provisions to include. While federal agency officials did not cite liability concerns as a factor that limits their agencies’ efforts to address abandoned hardrock mines on lands under their jurisdictions, Forest Service, BLM, and EPA officials concurred that legal liability concerns deter Good Samaritans from participating in projects with federal agencies at mixed ownership sites.
Federal officials explained that, unlike Good Samaritans, the federal agencies address abandoned hardrock mines that are under their jurisdiction, and the agencies are already responsible for meeting the requirements of CERCLA and other applicable laws. However, federal agency officials have observed the effects of Good Samaritan legal liability concerns on projects. For example, Forest Service officials in Colorado said that potential partners have expressed interest in addressing contamination on the private land portions of mixed ownership sites but declined once they learned they would be subject to liability under CERCLA. In the absence of legislative changes, EPA officials said they are looking for new ways to encourage Good Samaritan participation in abandoned hardrock mine projects. For example, they are working to update and refine the agency’s administrative tools and identify new solutions to better address Good Samaritans’ concerns. They are also looking to encourage Good Samaritan participation in more projects that would not require a Clean Water Act permit, such as moving mine tailings piles away from streams. Agency Comments We provided a draft of this report to the Department of Agriculture, the Department of the Interior, and EPA for their review and comment. The Forest Service Audit Liaison provided comments by email, stating that the Forest Service generally agreed with the report. USDA and EPA provided technical comments, which we incorporated as appropriate. Interior told us it had no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretaries of Agriculture and the Interior, the Administrator of EPA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report describes (1) what is known about the number of abandoned hardrock mines in the United States; (2) federal and state agency expenditures to address abandoned hardrock mines from fiscal years 2008 through 2017, and what is known about future costs to address these mines; and (3) factors that limit federal and state agencies’ and stakeholders’ efforts to address abandoned hardrock mines. To address these objectives, we reviewed our previous work on abandoned hardrock mines, including a March 2008 report in which we summarized information about the number of abandoned hardrock mines in the United States and the amount of federal spending on these mines from fiscal years 1998 through 2007. We also reviewed federal agency reports to identify the federal agencies that track numbers of abandoned hardrock mines, conduct work to address hazards at these mines, or fund projects to address these hazards. We identified the U.S.
Department of Agriculture’s (USDA) Forest Service; the Department of the Interior’s Bureau of Land Management (BLM), National Park Service (Park Service), and Office of Surface Mining Reclamation and Enforcement (OSMRE); and the Environmental Protection Agency (EPA) to include in our review. We reviewed agency documents detailing these agencies’ cleanup efforts and abandoned hardrock mine programs. We also selected 13 western states to include in our review: Alaska, Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon, South Dakota, Utah, Washington, and Wyoming. We selected these states because our March 2008 report and other federal and state agency reports indicated that most of the abandoned hardrock mines are in these states. We conducted two site visits to abandoned hardrock mines in Colorado in February 2019. We selected sites in Colorado because they provided opportunities to observe examples of physical safety and environmental hazards on federal and nonfederal lands. We visited sites with physical safety hazards that BLM and the state had addressed on BLM and county lands. We also visited a National Priorities List site where EPA and the state were addressing environmental hazards on private land. To describe what is known about the number of abandoned hardrock mines in the United States, we obtained and summarized information about abandoned hardrock mine features and sites—including the number of features and sites that pose confirmed and unconfirmed physical safety and environmental hazards—that the Forest Service, BLM, the Park Service, and EPA maintained in databases as of May 2019, the most current information available at the time of our review. Specifically: the Forest Service provided information about abandoned hardrock mine sites from USDA’s National Environmental Accomplishment Tracking system; BLM provided information about abandoned hardrock mine features from the Abandoned Mines and Site Cleanup Module; the Park Service provided information about abandoned hardrock mine sites and features from the Abandoned Mineral Lands Data Entry and Edit database and from Interior’s Environmental and Disposal Liabilities list; and EPA provided information about hardrock mining and mineral processing sites from its Superfund Enterprise Management System. In addition, we obtained information on the agencies’ estimates of the number of additional abandoned hardrock mine sites or features that are not captured in their databases, where applicable. We assessed the reliability of the agencies’ databases by testing the data for accuracy, cross-referencing them with relevant data sets, and checking for missing data and errors. We also reviewed agency documents about the databases and our previous related work regarding the use of these data. We also interviewed headquarters officials from each agency and discussed the data and any limitations. We determined that the information in the agencies’ databases about the number of abandoned hardrock mines was sufficiently reliable to summarize in our report. We calculated the agencies’ total number of abandoned hardrock mines in terms of the number of features. According to agency officials, many abandoned hardrock mine sites contain more than one feature, but there is no agreed-upon average number of features per site. Since the Forest Service and EPA reported information only by mine site, we counted the minimum of one feature per site in our calculations. As a result, the total number of features likely is underestimated.
Further, we collected information about the number of abandoned hardrock mines in the 13 western states through semi-structured interviews with state officials. For each state, we interviewed officials with the relevant state agencies that address abandoned hardrock mines through, for example, a dedicated abandoned mine program or a broader program focused on addressing environmental hazards. In each interview, we asked the officials to provide information about the numbers of abandoned hardrock mine sites they identified in their state, features that posed a hazard to public health and safety, and features that caused environmental degradation as of the time of our review. We provided the states with a common definition of abandoned mine site and feature. However, officials with five states provided information only for abandoned mine sites and not features. For those states, we counted the minimum of one feature per site to calculate the states’ total number of abandoned hardrock mine features. As a result, the states’ total number of features likely is underestimated. We assessed the reliability of the states’ information by reviewing documents about the data systems, checking for missing data and errors, and discussing the data and their sources with state officials, including any limitations. We determined that the data were sufficiently reliable to describe what the state agencies know about abandoned hardrock mines within their jurisdictions. To describe federal agency expenditures to address abandoned hardrock mines from fiscal years 2008 through 2017, we summarized expenditure information from the Forest Service, BLM, the Park Service, EPA, and OSMRE for this time period, the most recent 10 years of information available at the time of our review. Specifically, we collected information about total expenditures to address abandoned hardrock mines, expenditures to address physical safety hazards, expenditures to address environmental hazards, and expenditures of collections from responsible parties under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as applicable. We assessed the reliability of the agencies’ information by testing the data for accuracy and completeness, checking for missing data and errors. We also reviewed our previous related work regarding the use of the information and interviewed agency officials involved with collecting or analyzing the information. We determined that the information obtained from the agencies was sufficiently reliable for our descriptive purposes. Additional details on the agency-specific information we used follow: Forest Service. The Forest Service provided expenditure information for fiscal years 2008 through 2017 for its Abandoned Mine Land and Environmental Compliance and Protection programs from its Foundation Financial Information System. The Forest Service also provided information from this system about expenditures of reimbursements from responsible parties. USDA provided information about the Forest Service’s expenditures from the department’s Hazardous Materials Management Account for fiscal years 2008 through 2017 from the Financial Management Modernization Initiative system. BLM. BLM provided expenditure information from Interior’s Financial Business Management System for fiscal years 2009 through 2017. BLM’s budget office provided expenditure information for fiscal year 2008 since information prior to fiscal year 2009 is not included in Interior’s current financial system.
BLM provided information about abandoned hardrock mine expenditures from relevant subactivity codes, including Abandoned Mine Lands, Hazardous Materials Management, American Recovery and Reinvestment Act-Abandoned Mine Land projects, and Central Hazardous Materials Fund, among others.

Park Service. The Park Service provided expenditure information from Interior’s Financial Business Management System and the Park Service’s Project Management Information System and Administrative Financial Systems 3 and 4 for fiscal years 2008 through 2017 for its Abandoned Mine Lands program and the Contaminants Cleanup Branch. The Park Service also provided information from Interior’s system about expenditures of reimbursements from responsible parties.

OSMRE. OSMRE provided expenditure information from Interior’s Financial Business Management System for fiscal years 2008 through 2017 from its non-coal account, which includes spending for projects to address abandoned hardrock mines, non-hardrock abandoned mines, and other eligible projects. To further narrow the non-coal account expenditures to spending on abandoned hardrock mines, we reviewed information for projects that states completed during the 10-year period and eliminated expenditures that were clearly identified for non-hardrock-related projects. We also compared the expenditure information from OSMRE with expenditure information we obtained during our semi-structured interviews with officials from six state agencies that reported spending OSMRE grants specifically on abandoned hardrock mines—Alaska, Colorado, New Mexico, Montana, Utah, and Wyoming. We determined that Alaska’s and Colorado’s reported expenditures were more specific to abandoned hardrock mines than the information OSMRE provided for those states. As a result, we used Alaska’s and Colorado’s information to report expenditures for those states and used OSMRE’s information to report expenditures for all other states. OSMRE officials agreed with this approach.

EPA. EPA provided information about the Superfund program’s expenditures at mine and mineral processing sites from the Integrated Financial Management System for fiscal years 2008 through 2011 and the Compass Financial System for fiscal years 2012 through 2017. EPA provided expenditures from its (1) Superfund appropriation accounts, (2) special accounts, through which EPA receives resources from settlements with responsible parties to conduct site-specific work, and (3) state cost-share accounts, through which states contribute 10 percent of costs for EPA’s Superfund-financed remedial actions. EPA also reported expenditures of funds provided by other federal agencies; we excluded these expenditures from our reporting of EPA’s spending to avoid potential double counting.

Further, we obtained information through our semi-structured interviews with officials from the 13 selected states about their expenditures of nonfederal and federal funds at abandoned hardrock mines for state fiscal years 2008 through 2017. We obtained and summarized information on total expenditures to address abandoned hardrock mines, expenditures to address physical safety hazards, and expenditures to address environmental degradation. We also obtained information about the sources of the agencies’ funding, such as collections from responsible parties. The states provided expenditure information by state fiscal year rather than federal fiscal year because their financial systems are organized by state fiscal year.
We assessed the reliability of the states’ expenditure information by testing for missing data and errors, reviewing documents, and discussing the information and any limitations with state agency officials. Three states were unable to provide expenditure information specific to abandoned hardrock mines for the entire 10-year period. Therefore, we discussed and agreed with each of these states on how they could provide information that most closely responded to our request—for example, by providing information for the years that were available—and we are reporting the state agencies’ total expenditures as estimates. We determined that the data were sufficiently reliable to estimate the nonfederal and federal funds the state agencies spent to address abandoned hardrock mines. We are reporting both federal and state agency expenditures in nominal dollars. We are doing so for several reasons, including that there was a relatively low rate of inflation from fiscal year 2008 through 2017 (about 1.5 percent per year, on average); not all states reported annual expenditures that could be adjusted for inflation; and federal and state agencies reported annual expenditures differently, with federal agencies reporting by federal fiscal year and state agencies reporting by state fiscal year. To describe what is known about future costs to address abandoned hardrock mines, we reviewed and summarized documentation of the federal agencies’ most recently available estimates of costs to inventory additional abandoned hardrock mine features and to address physical safety and environmental hazards that have not been addressed. We discussed these estimates, and the assumptions used to create them, with relevant agency officials. We describe the estimates and their underlying assumptions in the report. To identify factors that limit federal and state agencies’ and stakeholders’ efforts to address abandoned hardrock mines, we reviewed relevant agency documents and independent reports that describe limiting factors. We interviewed federal agency officials, state agency officials, and stakeholders. More specifically, we interviewed Forest Service, BLM, Park Service, EPA, OSMRE, and Interior headquarters officials and officials from these agencies’ regional or state-based offices who work in Colorado, Montana, and Nevada. We also interviewed officials with the relevant state agencies that address abandoned hardrock mines in these three states. We selected these states for geographic diversity, higher numbers of abandoned hardrock mines, and variation in the types of hazards posed by abandoned hardrock mines in these states. The sample of states is not generalizable, and the results of our work do not apply to all states where abandoned hardrock mines are located, but they provide illustrative examples. In addition, we obtained perspectives from stakeholders that have participated in or expressed interest in participating in projects to address abandoned hardrock mines. We interviewed a sample of stakeholders, selected to provide perspectives from industry associations, nongovernmental organizations, state agency associations, and individuals with long-standing involvement in issues related to addressing abandoned hardrock mines.
We identified and selected these stakeholders based on our previous work, including the stakeholders we interviewed for our March 2008 report; a review of relevant literature, including written testimony statements and a summary of proceedings from a 2018 conference on abandoned hardrock mines; interviews with federal and state agency officials; and recommendations from stakeholders. Our sample of stakeholders is not generalizable to all stakeholders involved with abandoned hardrock mines, but it provides perspectives on factors that limit efforts to address abandoned hardrock mines. In total, we obtained responses from officials with 13 federal agency offices, including six headquarters offices and seven regional or state-based offices; officials with three states; and representatives of 11 stakeholder organizations, including three state associations that represent states with abandoned mine programs, two nonprofit conservation organizations, two mining industry associations, one mining company, and three individuals with long-standing involvement in abandoned hardrock mine policy. In our discussions, officials and representatives with each entity identified the factors that limit their or others’ efforts to address abandoned hardrock mines. We reviewed the responses and identified the factors that officials and stakeholders in each group (i.e., federal agencies, state agencies, and stakeholders) frequently mentioned. Two factors arose frequently both within and across the groups; we describe these factors in our report. We conducted this performance audit from June 2018 to March 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Federal Expenditures to Address Abandoned Hardrock Mines, by State, Fiscal Years 2008 through 2017

Table 4 includes expenditures to address abandoned hardrock mines for the Bureau of Land Management, Environmental Protection Agency, Forest Service, Office of Surface Mining Reclamation and Enforcement, and National Park Service.

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Elizabeth Erdmann (Assistant Director), Leslie Kaas Pollock (Analyst-in-Charge), Matthew Elmer, William Gerard, Anne Rhodes-Kline, Sheryl Stein, Sara Sullivan, and Rajneesh Verma made key contributions to this report.
Why GAO Did This Study

The General Mining Act of 1872 allowed individuals to obtain exclusive rights to valuable hardrock mineral deposits on land belonging to the United States. Miners explored, mined, and processed valuable minerals, but many did not reclaim the land after their operations ended. Unsecured mine tunnels, toxic waste piles, and other hazards—known as mine features—are found at abandoned hardrock mines across federal and nonfederal lands. The Forest Service, BLM, National Park Service, EPA, and OSMRE—as well as state agencies—administer programs that identify and address hazardous features at abandoned hardrock mines. Addressing features could include, for example, sealing mine tunnels or treating contaminated water. GAO was asked to provide information about abandoned hardrock mines. This report describes (1) what is known about the number of abandoned hardrock mines in the United States; (2) agency spending to address abandoned hardrock mines from fiscal years 2008 through 2017 and estimated future costs; and (3) factors that limit federal and state agencies' and stakeholders' efforts to address abandoned mines. GAO obtained and summarized information from agency databases about the number of abandoned mines, features, and hazards as of 2019; summarized agency spending data from fiscal years 2008 through 2017, the most current available; and interviewed federal and state agency officials and stakeholders, selected to provide diverse perspectives.

What GAO Found

The U.S. Department of Agriculture's Forest Service, the Department of the Interior's Bureau of Land Management (BLM) and National Park Service, and the Environmental Protection Agency (EPA) identified at least 140,000 abandoned hardrock mine features, such as tunnels, on lands under their jurisdictions. Of these, about 67,000 pose or may pose physical safety hazards—danger of injury or death—and about 22,500 pose or may pose environmental hazards—risks to human health or wildlife from long-term exposure to harmful substances. Agency officials also estimated there could be more than 390,000 abandoned hardrock mine features on federal land they have not captured in their databases, and agencies are developing more comprehensive information about these mines. The Forest Service, BLM, National Park Service, EPA, and Interior's Office of Surface Mining Reclamation and Enforcement (OSMRE) spent, on average, about $287 million annually to address physical safety and environmental hazards at abandoned hardrock mines from fiscal years 2008 through 2017, for a total of about $2.9 billion (see figure). Of this total, the agencies spent about 88 percent ($2.5 billion) addressing environmental hazards, and about $1 billion was reimbursed by private parties, such as former mine owners. Federal officials also estimated that it would cost billions more to address these mines in the future. Nearly all of the federal and state agency officials and stakeholders GAO interviewed cited availability of resources and legal liability concerns as factors that limit efforts to address hazards at abandoned hardrock mines. Federal and state officials said their backlog of work is greater than what can be done with available staff and budgets, but they have taken steps to collaborate to help leverage resources. State officials and stakeholders, such as conservation groups, said they want to help address environmental hazards that they did not cause at abandoned hardrock mines.
However, they generally do not do so because they are concerned about becoming legally responsible for the entire cost of addressing contamination at an abandoned mine if they attempt partial cleanup. EPA officials said they are considering new ways to encourage volunteer participation, in addition to existing administrative tools.
Background

Charitable Contributions

Section 501 of the Internal Revenue Code provides for tax-exempt status of certain corporations, trusts, and other organizations. This status allows qualifying organizations to claim exemption from federal income taxes. Subsection (c) of section 501 recognizes 28 categories of tax-exempt organizations, ranging from cemetery companies to multiemployer pension plan trusts. Section 501(c)(3), the section that recognizes charitable organizations, applied to approximately 1.3 million organizations in fiscal year 2017. These groups represent the largest number of 501(c) organizations. Federal tax law permits individual taxpayers and organizations to reduce their tax liability by deducting contributions to charitable organizations on their income tax returns. Individual taxpayers may deduct the amount of a contribution to charitable organizations from their gross income if they itemize their deductions. Charitable organizations provide many types of assistance, such as services for the aging or food and shelter for those in need. Taxpayers may support these activities by making contributions in the form of financial donations or in-kind gifts to qualified organizations. Federal law allows taxpayers to deduct charitable contributions from their adjusted gross income (AGI). This policy has been in place since 1917. An individual taxpayer may deduct up to 60 percent of his or her AGI for cash contributions, with 20 percent to 30 percent limits applying in some cases. A corporation may claim a limited deduction for charitable contributions made in cash or other property up to 10 percent of its taxable income for the year.

Charitable Organizations

An entity seeking tax-exempt status under 501(c)(3) from IRS must submit either a completed Form 1023, Application for Recognition of Exemption Under Section 501(c)(3) of the Internal Revenue Code, along with organizing documents, or a completed Form 1023-EZ. Both Form 1023 and Form 1023-EZ require the entity seeking recognition of its tax-exempt status to provide information regarding its charitable purpose, as well as certain financial data. IRS employees then review the forms to determine the entity’s eligibility for tax-exempt status. Most tax-exempt charitable entities are required to file an annual information return from the Form 990 series. Certain small entities with gross receipts that are normally $50,000 or less may file Form 990-N, Electronic Notice, providing abbreviated information. Although the entity is filing its information return as a tax-exempt organization, the entity must pay employment taxes and taxes on unrelated business income, if applicable. IRS provides programs and products to help the entity understand specific issues related to its tax responsibilities.

IRS’s Auditing History

IRS personnel can audit an organization’s or individual’s submitted tax returns and financial information to verify that the reported tax is correct. IRS personnel audited 933,785 individual income tax returns in fiscal year 2017, according to IRS data. This was 0.6 percent of individual returns filed in calendar year 2016. From fiscal year 2006 to fiscal year 2017, the largest number of individual returns IRS audited was 1,564,690 in fiscal year 2010. There was a decrease in audits of individual tax returns after fiscal year 2011, which occurred about the same time that IRS’s budget declined by about $2.1 billion (15.7 percent) from fiscal years 2011 through 2018, after adjusting for inflation.
Concurrent with IRS’s declining resources were increasing responsibilities, such as implementing aspects of the Foreign Account Tax Compliance Act and the Patient Protection and Affordable Care Act. We reported in 2014 that budget cuts had resulted in a significant staffing decline and uneven performance at IRS. In March 2019, we reported that IRS was in the early stages of defining and addressing its workforce needs, but IRS officials stated that there was room for improvement in implementing its workforce plans, and that it was working on a corrective action plan that would address deficiencies noted in our report.

IRS’s Primary Operating Divisions

The operating divisions that, along with conducting audits, carry out service and enforcement, and that deal most often with abusive tax schemes or tax-exempt entities are TE/GE, SB/SE, and LB&I. These divisions interact with taxpayers and entities that file tax returns. In particular, each of the three divisions may audit taxpayers or entities to determine whether information filed was reported accurately. IRS has set one of its cross-divisional objectives as identifying “new types of tax transactions or promotions that are either abusive or potentially abusive requiring different levels of coordination and varying strategies.” Another of TE/GE’s audit objectives is to “promote the highest degree of voluntary compliance with the statutes governing qualification of plans and exemption of certain types of organizations from tax and to determine the extent of compliance and the causes of noncompliance with the tax laws by plans and organizations.” TE/GE accomplishes this objective by auditing charitable organizations’ compliance with the tax code through its Exempt Organizations unit. In addition to this function, Exempt Organizations also reviews organizations’ tax-exempt status applications and makes tax-exempt status determinations. It also coordinates with other state and federal agencies. Additionally, it audits entities to identify and address noncompliance, where it may propose tax assessments or changes to the tax-exempt status of the audited entity. TE/GE uses various enforcement processes, such as referrals from the public and other parts of IRS and data-driven approaches, to select tax-exempt organizations for possible audits. IRS projects that Exempt Organizations will receive approximately 1.6 million filings from tax-exempt and government entities in fiscal year 2019, primarily Form 990 series information returns. SB/SE mainly oversees small businesses and self-employed taxpayers and all other businesses with assets of less than $10 million. Examples of the types of businesses that SB/SE covers include small-business start-ups, small businesses with or without employees, taxpayers with rental properties, taxpayers with farming businesses, and individuals investing in businesses such as partnerships. Overall, IRS projects that SB/SE will receive approximately 59.4 million tax returns in fiscal year 2019. The Lead Development Center, an office within SB/SE, receives referrals from and facilitates communication between SB/SE and TE/GE on the subject of abusive tax schemes. LB&I oversees tax compliance of large partnerships, S corporations, and C corporations with assets of $10 million or more, as well as individuals with high wealth (those with tens of millions of dollars in assets or earnings) or international tax issues. IRS projects that LB&I will receive approximately 400,000 corporate tax-return filings in fiscal year 2019.
LB&I has developed a compliance strategy to identify potential issues that arise during audits of tax returns. LB&I also oversees the processing of reportable transaction disclosure filings by those involved in reportable transactions. A transaction includes all the factual elements relevant to the expected tax treatment of any investment, entity, plan, or arrangement. It also includes any series of steps carried out as part of a plan. Transactions become “reportable” (meaning a taxpayer must report them to IRS) when they fall under one or more of the following categories: listed transactions, confidential transactions, contractual protection transactions, loss transactions, and transactions of interest. A listed transaction is any transaction that IRS has identified as an abusive tax avoidance transaction and has identified in published guidance as a listed transaction. Taxpayers that have engaged in transactions that have tax consequences or tax strategies described in published IRS guidance are required by law to disclose the transaction to IRS. The fact that a transaction must be reported does not mean IRS will disallow the tax benefit, but IRS uses the reports to assess compliance. Appendix IV discusses reportable transaction types in greater detail. Taxpayers are required to disclose all types of reportable transactions on Form 8886, Reportable Transaction Disclosure Statement. Similarly, advisers helping taxpayers conduct reportable transactions are required to file Form 8918, Material Advisor Disclosure Statement. Tax-exempt entities are required to file Form 8886-T, Disclosure by Tax-Exempt Entity Regarding Prohibited Tax Shelter Transaction, when the entity is a party to a listed, confidential, or contractual protection transaction, and the entity knows the identity of any other party in the transaction. Tax-exempt entities that are party to a listed or confidential transaction may be subject to an excise tax of 100 percent of the income from the transaction. Transactions that require the filing of Form 8886-T constitute a different, smaller range of activity than transactions requiring the filing of Form 8886. The Office of Tax Shelter Analysis, a unit within LB&I, supports LB&I’s work by coordinating its tax shelter planning and operations. This office also analyzes information collected from disclosure forms. According to IRS policy, if the Office of Tax Shelter Analysis determines a formal investigation is warranted, it presents the information to the LB&I Technical Tax Shelter Promoter Committee, an office within LB&I that has sole authority to approve any proposed investigations.

Examples of Abusive Tax Schemes Illustrate Various Ways That Tax-Exempt Status Can Be Exploited by Individuals or Organizations

Taxpayers seeking to reduce their tax liability through charitable donations may participate in legal tax planning strategies that allow them to maximize their deductions while giving to charitable organizations. In contrast to these legal tax planning strategies involving charitable donations, abusive tax schemes occur when taxpayers conduct transactions that are not supported by established law to improperly claim tax benefits, or that have no economic significance or business purpose other than the avoidance of tax, among other factors. IRS has long recognized that some charitable donors and tax-exempt organizations have engaged in abusive tax schemes.
One such scheme can consist of a donor grossly overvaluing a charitable contribution to obtain a larger deduction on his or her filed tax returns. Another abusive tax scheme can entail a tax-exempt organization providing benefits to a private shareholder or individual. As we previously have reported, the abusive transactions that comprise abusive tax schemes have been a long-standing, ever-changing, and often hidden problem for IRS. The following three examples illustrate various ways that an entity’s tax-exempt status can be used in transactions that are not supported by law or are inconsistent with the law’s intent, and how otherwise legitimate tax-exempt activity can be exploited improperly.

Syndicated Conservation Easements

A conservation easement is a legal agreement that grants an organization the right to restrict the development and use of property for conservation purposes with the intent of preserving the land or buildings. If statutory requirements are met, taxpayers may donate an easement to a qualified organization and receive a charitable income tax deduction for the appraised value of the easement. A conservation easement becomes “syndicated” if a person or company promoting the easement (a promoter) offers multiple investors in a partnership or pass-through entity the opportunity to claim charitable deductions based on the value of the easement in return for cash. The Brookings Institution estimated that investments in syndicated conservation easements totaled $623 million in 2016, an increase of 29 percent from $484 million in 2015. It further estimated that because tax deductions from syndicated conservation easement contributions generate a benefit greater than the value of the investments themselves, the tax deductions resulted in federal tax revenue losses between $1 billion and $1.9 billion in 2015 and between $1.3 billion and $2.4 billion in 2016. According to IRS, in a syndicated conservation easement, promoters purchase land and convey ownership to a pass-through entity, such as a partnership. The promoters offer interests in the pass-through entity to prospective investors, who are then able to deduct their share of the value of the easement as a charitable contribution. In its guidance, IRS said the conservation easement becomes noncompliant if, for example, the promoters obtain an appraisal that purports to be a qualified appraisal but that greatly inflates the value of the conservation easement based on unreasonable assumptions about the development potential of the real property. Because the promoters inflate the value of the property, the investors may benefit by claiming a charitable deduction on their tax returns that exceeds their initial investment. Figure 1 shows the steps in the formation of a syndicated conservation easement and the point at which the easement becomes noncompliant when promoters obtain an inflated value for the easement. IRS has indicated its concern about the potential for abuse of conservation easements, whether syndicated or otherwise, when used in ways not supported by the law. In December 2016, the Department of the Treasury (Treasury) and IRS issued Notice 2017-10 designating syndicated conservation easements as listed transactions. This notice provides that certain syndicated conservation easements promoted with a return on investment of at least 250 percent will be identified as listed transactions. It also provided details on how Treasury and IRS view these transactions as forms of abuse.
Although promoters who abuse syndicated conservation easements exploit tax-exempt entities, the law does not treat the tax-exempt entity as a participant, meaning that even when a promoter is found to use a syndicated easement in a noncompliant manner, the tax-exempt entity associated with the scheme may still be considered compliant. In addition to the potential for overvaluation of easements, Treasury and IRS considered that syndicated conservation easements may become problematic because of their potential to involve transactions that violate the economic substance doctrine. Because of its concerns, IRS has identified taxpayer abuse of conservation easements as a risk area for noncompliance. Syndicated easements also illustrate how noncompliance can cross the areas of responsibility of IRS’s audit divisions. In this case, the beneficiary of the scheme may be a small-business taxpayer (SB/SE’s responsibility) or a corporation (LB&I’s responsibility), even though the scheme hinges on an inflated appraisal and being able to donate to the tax-exempt recipient (TE/GE’s responsibility).

Donor-Advised Funds

A donor-advised fund is a fund or account held by a charity that receives contributions from donors who may advise, but not control, how the organization uses the money. The Pension Protection Act of 2006 defined donor-advised funds in the Internal Revenue Code and subjected the funds to new requirements. Because donor-advised fund accounts are operated by charities, contributions to these funds are deductible at a higher percentage of adjusted gross income (generally 50 percent or 60 percent for cash contributions) than donations to private foundations (generally 30 percent). Some donors may use donor-advised funds in ways that IRS considers improper. For example, prior to tax-law changes in 2006, IRS said that abusive donor-advised funds are those that appear to be established to generate questionable charitable deductions and provide impermissible economic benefits to donors and their families (including tax-sheltered investment income for the donors). Figure 2 illustrates how donor-advised fund accounts operate and highlights where in the process the parties involved could abuse the funds or raise policy concerns about how donor-advised funds have been used. Donor-advised funds have grown by various measures in recent years, according to data compiled by the National Philanthropic Trust. For example, it reports that from 2013 to 2017, the total grants made by donor-advised funds grew from $9.83 billion to $19.08 billion, and contributions grew from $17.24 billion to $29.23 billion. Total assets held in donor-advised funds increased from $57.1 billion to $110.01 billion as well, according to the organization’s study. In 2017, about 463,000 donor-advised funds existed in the United States, with an estimated $110 billion in assets, according to the National Philanthropic Trust. Some of the largest of these funds in terms of assets are sponsored by financial institutions, religious groups, and community foundations, while others are independent, according to our review of selected donor-advised funds’ sponsoring organizations’ websites and data from the National Philanthropic Trust.

Patient Assistance Programs

Patient assistance programs help patients afflicted with certain medical ailments obtain financial assistance for medical care or free drug products, and these programs may qualify for tax-exempt status.
Pharmaceutical companies may establish their own patient assistance programs or make monetary donations to independent charities’ patient assistance programs. In addition to financial support, pharmaceutical companies may donate medication (through in-kind product donations) to patient assistance programs. Donations such as these allow pharmaceutical companies to claim a limited tax deduction for charitable contributions. If they claim deductions, the deductions may be up to 10 percent of the corporations’ taxable income when donating to charities. The possibility of donors receiving private benefits in excess of the charitable deduction creates potential risks to participating pharmaceutical companies and compliance challenges for IRS, according to federal regulators. For example, because independent charity patient assistance programs may be 501(c)(3) tax-exempt organizations, pharmaceutical manufacturers’ profits generated from sales of their products to individuals receiving help from patient assistance programs to which they donate may raise issues of inurement. Figure 3 summarizes how a hypothetical patient assistance program works and highlights points in the process where potential abuse of the program may occur. The federal government has investigated cases of potential private benefit by pharmaceutical companies and patient assistance programs. For example, IRS filed a court summons in May 2017 in an ongoing investigation of a patient assistance program over concerns that it spent the majority of its donations on copayment support that went to patients who were prescribed medication from companies that had donated money to the patient assistance program.

The Number of Audits Involving Tax-Exempt Entities Generally Declined and Few Tax-Exempt Entities Filed Prohibited Transaction Reports

The Number of Audits Involving Tax-Exempt Entities Generally Declined Across TE/GE, SB/SE, and LB&I over a 10-Year Period

As shown in the tax scheme examples previously discussed, abusive schemes with tax-exempt entities can involve the tax-exempt entity directly or leverage an entity’s tax-exempt status indirectly to reduce taxes. Consequently, the characteristics of audits involving abusive tax schemes, such as which IRS operating division is responsible for the audit, will differ according to the type of scheme. In addition, IRS generally presents information about abusive tax schemes under a category it calls abusive tax avoidance transactions. The abusive tax schemes we have been discussing in this report are a subset of abusive tax avoidance transactions in which the transaction or arrangement involves multiple types of entities. IRS data do not allow us to identify separately the transactions involving multiple entities. The discussion that follows describes trends under the assumption that over time abusive transactions involving multiple entities would closely track total abusive transactions. TE/GE audited 2,294 tax-exempt entities with what IRS identified as abusive tax avoidance transactions in the 10-year period from fiscal year 2008 through 2017. As shown in figure 4, the number of abusive-transaction audits fell from a high of 886 in fiscal year 2009 to 10 or fewer in fiscal year 2017. This decline represented at least a 98.9 percent decrease in audits performed by TE/GE (see appendix V, table 6). The decline in abusive-transaction audits generally corresponds with the overall decrease in audit activity by IRS over recent years (see appendix V, tables 2, 3, and 4).
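Because the fiscal year 2017 count is reported only as 10 or fewer, the 98.9 percent figure is a lower bound on the decline; a minimal check of that arithmetic, using the counts cited above:

```python
# Using 10 -- the largest value consistent with "10 or fewer" -- yields the
# smallest possible decline, hence "at least" 98.9 percent.
peak_fy2009 = 886
max_fy2017 = 10
decline = (peak_fy2009 - max_fy2017) / peak_fy2009
print(f"{100 * decline:.1f}%")  # 98.9%
```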
During the same 10-year period, TE/GE assessed a total tax increase of $107 million based on its audits of tax-exempt entities, and the average tax increase per audit was $46,804. The amount assessed for the tax increase declined from $45.3 million in 2008 to $1.2 million in the merged years of 2016 and 2017. The effectiveness and efficiency of the audit process may be reflected in the no-change rate and staff days associated with the audits. The no-change rate—the percentage of audits that result in no tax change—was 13.9 percent (see appendix V, table 11). IRS uses this ratio as an indicator of how effectively IRS identifies noncompliant taxpayers (a lower no-change rate on its audits is consistent with more effective audit selection methods). The lower rate may also reflect higher economic efficiency because less IRS and taxpayer time and other resources are used for auditing compliant returns. On average, TE/GE spent 70 hours per audit of tax-exempt entities from fiscal year 2008 through 2017 (see appendix V, table 9). Audits involving abusive schemes where taxpayers leverage an entity’s tax-exempt status—but the tax-exempt entities are not the subject of the audit—are the responsibility of SB/SE and LB&I. To determine the minimum number of audits these divisions conducted on abusive schemes involving tax-exempt entities, we used IRS project codes that IRS agreed were relevant. For these project codes, SB/SE and LB&I conducted 4,207 audits over the 10-year period. The numbers of audits generally decreased over the period, except for increases in fiscal years 2012, 2015, and 2017 for LB&I audits and increases in fiscal years 2015 and 2017 for SB/SE audits. Combined SB/SE and LB&I audits fell from 1,176 in fiscal year 2008 to 99 in fiscal year 2017, a 91.6 percent decrease (see appendix V, table 13). SB/SE and LB&I recommended about $8.3 billion in tax changes over the 10-year period. As shown in figure 5, the average recommended amount was larger for LB&I but tended to fluctuate more than the SB/SE amounts. The average tax change amount per audit over the 10-year period recommended by SB/SE was $89,399. The average amount recommended by LB&I was $8.6 million. Figure 5 also shows how both divisions had a surge in recommended tax amount changes for 2017 compared to prior years. SB/SE’s recommended changes increased from $270,131 in fiscal year 2016 to $127 million in fiscal year 2017. LB&I’s recommended changes increased from $299 million in 2016 to $555 million in 2017. IRS officials could not provide an explanation for the surge in 2017 (see appendix V, table 14). Again, the divisions’ resource use may be reflected in staff days and the no-change rate. SB/SE and LB&I combined spent 218 hours, on average, per audit for the audits involving tax-exempt entities identified by project codes (see appendix V, table 15). The no-change rate for SB/SE audits we examined involving tax-exempt entities identified by project code was 10.9 percent. LB&I audits involving tax-exempt entities had a no-change rate of 15.5 percent (see appendix V, table 16).

Audits Involving Tax-Exempt Entities Had Larger Recommended Tax Changes and Used More Staff Hours on Average than the Total of All Abusive-Transaction Audits

Numbers of audits of all types of abusive transactions showed a pattern of decline similar to audits involving tax-exempt entities. SB/SE and LB&I conducted a total of 155,467 audits involving all types of abusive transactions from fiscal year 2008 to fiscal year 2017.
As shown in figure 6, the total number of these audits conducted by each of the operating divisions fell in most years. Abusive-transaction audits conducted by SB/SE and LB&I fell from 26,519 in fiscal year 2008 to a low of 4,248 in fiscal year 2017, an 84 percent decrease in audits closed during this period (see appendix V, table 5). Audits involving tax-exempt entities resulted in higher average tax changes than audits for the total of all abusive transactions. Combined, SB/SE and LB&I recommended a total of $39 billion in tax changes for the total of all abusive-transaction audits. As shown in figure 7, SB/SE recommended tax amount changes that averaged $40,834 per audit and LB&I recommended tax amount changes that averaged $3 million per audit. The recommended tax change per abusive-transaction audit was larger for audits involving tax-exempt entities than for the total of all abusive-transaction audits in both operating divisions, which, as described above, averaged $89,399 for SB/SE and $8.6 million for LB&I. The total recommended tax amount change for SB/SE decreased from $1.4 billion to $339 million, a 75 percent decrease over the period. For LB&I, the recommended tax amount change decreased from $7.5 billion to $866 million, an 89 percent decrease (see appendix V, table 7). We estimated that audits involving tax-exempt entities identified on the basis of project codes led to SB/SE and LB&I recommending about $8.3 billion in tax changes over the 10-year period. The no-change rate for all SB/SE abusive-transaction audits over the period was 8.8 percent. The no-change rate for all LB&I abusive-transaction audits was 14 percent (see appendix V, table 10). Combined, SB/SE and LB&I spent a total of 6.6 million staff hours on the total of all abusive-transaction audits from fiscal year 2008 to 2017, spending, on average, 42 hours per audit. As described above, SB/SE and LB&I spent more in resources, 218 hours, on average, per audit of tax-exempt schemes, than the average for the total of all abusive-transaction audits.

Taxpayers with Audits Involving Tax-Exempt Entities Differed by Income

The majority (88 percent) of taxpayer audits involving tax-exempt entities identified on the basis of project codes for both SB/SE and LB&I had an adjusted gross income (AGI) of more than $50,000, with about 40 percent of the audits involving taxpayers with AGI between $100,000 and $500,000. The SB/SE audits had an average AGI of $1.2 million and a median AGI value between $200,000 and $500,000. LB&I audits had an average AGI of $6.2 million and a median AGI value between $1.0 million and $1.5 million. The majority of business taxpayers with abusive-transaction audits involving tax-exempt entities (about 70 percent) reported zero gross receipts (see appendix V, tables 17 and 18).

Taxpayers Reported Tax-Exempt Entities on Thousands of Reportable Transaction Disclosures, While Few Tax-Exempt Entities Filed Prohibited Transaction Reports

While the audit data examined above show the noncompliance IRS has found regarding abusive schemes with tax-exempt entities, information about the taxpayers involved in the transactions can also be derived from the IRS disclosure forms. Most of the taxpayers identified partnerships as the entities involved in the listed transactions that they reported.
Of the taxpayer disclosures identifying a tax-exempt entity on Form 8886, 97.8 percent identified the type of reportable transaction as a listed transaction and 95.5 percent listed a partnership for the type of entity involved in the transaction. Further, 98.1 percent of taxpayers claimed a deduction from their AGI as the benefit generated by the transaction, and 5 percent claimed an ordinary loss as the tax benefit. The different disclosure reports that IRS receives from tax-exempt entities, taxpayers, and tax advisors contain data that identify the potential involvement of tax-exempt entities with reportable transactions. However, there are differences in the legal filing requirements, the types of information supplied, and the number of disclosure forms filed. Few tax-exempt entities directly disclose their involvement in prohibited transactions to IRS. Regulations require that certain tax-exempt entities disclose information on a prohibited tax shelter transaction to which the entity is a party. For calendar years 2004 through 2016, IRS received 71 Form 8886-T disclosures from tax-exempt entities that were a party to a prohibited transaction. Moreover, the actual number of filers making disclosures was smaller, only 33, because some submitted multiple forms during the period. Many more tax-exempt entities were identified by taxpayers filing the Form 8886, which requires a different, broader range of transactions to be reported than the Form 8886-T. For calendar years 2000 through 2017, IRS received more than 979,900 Form 8886 disclosure reports from taxpayers. Of that number, the taxpayer identified a tax-exempt entity as part of the reportable transaction on 32,847 disclosures, or 3.4 percent of all Form 8886 reports. A smaller number was identified by tax advisors on Form 8918. For calendar years 2007 through 2018, out of the 16,477 Form 8918 disclosure statements received from tax advisors, 155 submissions identified a tax-exempt entity as part of a reportable transaction. While detail about the transactions themselves—when it appears in the form narratives—is not readily available from the Form 8886 disclosure databases, IRS’s Research, Applied Analytics and Statistics Division has created an analytic tool for analyzing narrative information that it has tested on the Form 8886. When we performed a test analysis using this tool on the narrative fields of the Form 8886, we identified keywords that may help isolate tax-exempt organization involvement in potentially abusive schemes and ultimately help select returns for more detailed review. This more detailed review is required because transactions reported on the Form 8886 are not necessarily noncompliant. For our test analysis, we selected certain terms related to known abusive tax schemes involving tax-exempt entities, such as “conservation easement,” or related to the tax-exempt sector, such as “charitable organization,” and counted the number of times the terms appeared in the narrative field of 26,632 Form 8886 disclosures made in fiscal year 2017. For example, the term “conservation easement” occurred in the narrative field of 6,767 disclosure forms and the term “charitable organization” occurred on 17 disclosure forms. Through further searching on terms that might relate to charitable organizations, such as “charity,” “sports,” “children,” “animals,” “foundation,” and “scientific,” we identified 211 occurrences.
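A minimal sketch of the kind of keyword count described above follows. The sample narratives are invented stand-ins for the Form 8886 narrative fields; the actual analysis ran against 26,632 disclosures, and IRS's analytic tool may work quite differently.

```python
# Count how many disclosure narratives mention each scheme-related term.
# The narratives below are invented; real input would be the narrative
# fields extracted from Form 8886 disclosure records.

narratives = [
    "Partnership donated a conservation easement to a land trust ...",
    "Contribution of appreciated securities to a charitable organization ...",
    "Loss transaction arising from foreign currency positions ...",
]

terms = [
    "conservation easement", "charitable organization", "charity",
    "foundation", "children", "animals", "scientific",
]

counts = {term: 0 for term in terms}
for text in narratives:
    lowered = text.lower()
    for term in terms:
        if term in lowered:
            counts[term] += 1  # disclosures mentioning the term at least once

print(counts)
```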
IRS is not undertaking this type of analysis of taxpayer disclosures, which would expand its ability to identify tax-exempt entities and evaluate their potential involvement with reportable transactions, as discussed later in this report.

IRS Has a Variety of Programs Working Collectively to Identify Abusive Schemes Involving Tax-Exempt Entities, but Some Internal Control Weaknesses Exist in Its Approach

Various IRS Programs and Offices Identify and Coordinate on Abusive Tax Schemes

IRS operates various programs to identify abusive tax schemes involving tax-exempt entities. Not all of these programs exclusively address abusive tax schemes with tax-exempt entities, but they nevertheless can provide relevant information on that issue. For example, the Office of Tax Shelter Analysis processes disclosures of reportable transactions, including those related to tax-exempt entities, and the Lead Development Center may collect information about abusive schemes related to tax-exempt entities as part of its role in dealing with abusive tax transactions in general. As figure 8 illustrates, several of these programs in practice are linked by the Service-wide Compliance Strategy Executive Steering Committee. This committee is responsible for collecting input from the operating divisions (TE/GE, SB/SE, and LB&I), as well as other parts of IRS, about abusive tax schemes that cross divisional responsibilities, including schemes involving tax-exempt entities. The Executive Steering Committee also may make decisions about how to address abusive tax schemes that cross the operating divisions’ responsibility. IRS officials said that the operating divisions are individually responsible for monitoring the committee’s performance. Therefore, the committee’s decisions depend on what information the operating divisions provide. As figure 8 also shows, the operating divisions pass information about abusive schemes among themselves through referrals, making clear communication among the operating divisions critical for IRS in identifying abusive tax schemes. An IRS office that more directly addresses potential abusive schemes with tax-exempt organizations is TE/GE’s Compliance Planning and Classification office (CP&C). This office has several responsibilities relating to identifying abusive tax schemes and communicating with other parts of IRS, as well as coordinating with other operating divisions on potential noncompliance. For example, CP&C is responsible for reviewing emerging abusive tax schemes, conducting research, and reviewing suggestions from a computer portal through which staff can raise potential issues about compliance. The portal also serves as the foundation of TE/GE’s compliance issue identification process.

IRS Met Some, but Not All, of the Internal Control Criteria Relating to How the Agency Identifies Abusive Tax Schemes with Tax-Exempt Entities

We found that IRS maintains a variety of programs to identify tax schemes involving tax-exempt entities agency-wide, and these programs together fully met seven of our 10 criteria. Appendix I contains more information about the criteria we used in our analysis and a table that summarizes the results of our analysis. One criterion that IRS fully met was identifying areas of authority. All of the programs we reviewed had documentation showing the responsibilities the program was to fulfill and the roles it was to perform.
IRS’s programs also fully met the criterion for ensuring competency by having documented procedures for training to enhance staff’s responsibilities across the programs we reviewed, and met the communication criterion by, for example, having coordination meetings among officials representing the different operating divisions. In addition, IRS met the criterion for conducting monitoring activities by, for example, having inventory reports on TE/GE’s issue submission portal and maintaining a monitoring group over TE/GE’s audit plans. Finally, IRS met all three of our fraud-related criteria with programs or procedures that specifically identify fraud, such as TE/GE’s Fraud Investigation Unit, or that assist auditors in identifying fraud on returns, such as IRS’s Fraud Handbook. Reviewing whether auditors assessed fraud risk is also part of TE/GE’s quality review system. In the following sections, we discuss how IRS did not meet the other three internal control criteria.

IRS Has Not Assessed Risks That Tax-Exempt Entities Do Not Properly File Form 8886-T

A relatively low number of tax-exempt entities filing Forms 8886-T, combined with our analysis of audit data, raises questions about whether tax-exempt entities are filing these forms as often as they should. As we discussed above, tax-exempt entities filed only 71 Forms 8886-T for calendar years 2004 through 2016, on which they listed prohibited transactions. At about the same time, taxpayers in general filed thousands of Forms 8886 annually on which they identified tax-exempt entities as part of their reportable transactions. In addition, when we compared Form 8886 filings that identified tax-exempt entities as part of the reportable transaction with SB/SE and LB&I audit data, again for the same time period, we found 432 closed cases with tax changes. Although we did not determine whether the subject of these audits was the abusive scheme involving a tax-exempt entity, the result of 432 closed audit cases suggests that tax-exempt entities may be part of more prohibited transactions than those reported on the 71 Forms 8886-T filed during the period. The audit cases identified in SB/SE and LB&I data resulted in about $1.9 billion in tax changes. The average per-audit tax change recommended by SB/SE was $65,143 and by LB&I was $19 million. A similar analysis could be conducted comparing audit results with data from Form 8918, which is filed by tax advisors. IRS officials said the disparity between the number of Form 8886 filings and the small number of Form 8886-T filings has not raised concerns because the legal requirements for tax-exempt entities filing Form 8886-T are narrower than the requirements taxpayers must follow to file Form 8886, as we discussed earlier. However, IRS has not undertaken a risk assessment to test whether this explanation—that the lower number of filings should be expected because the filing requirement is narrower—is valid, which is inconsistent with the internal control standards for risk assessment. The Office of Tax Shelter Analysis sends Form 8886-T filings it receives to TE/GE, and the Compliance Planning and Classification office reviews these filings, but no documented process exists to determine whether all tax-exempt entities that should file Form 8886-T were filing the form as required. In addition, IRS provided us with no studies investigating the causes and consequences of such a small number of filings.
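A comparison of the kind described above—matching disclosure filings against closed audit results—can be structured as a join on a taxpayer identifier. The sketch below uses pandas with invented column names and records; actual matching would use taxpayer identification numbers and IRS's real schemas.

```python
# Hypothetical sketch: join disclosures naming a tax-exempt party to closed
# audits with a recommended tax change. All identifiers and values invented.

import pandas as pd

disclosures = pd.DataFrame({
    "taxpayer_id": ["A1", "B2", "C3"],
    "tax_exempt_party": [True, True, False],
})

audits = pd.DataFrame({
    "taxpayer_id": ["A1", "C3"],
    "closed": [True, True],
    "tax_change": [65_000, 0],
})

flagged = disclosures[disclosures["tax_exempt_party"]]
closed_with_change = audits[audits["closed"] & (audits["tax_change"] > 0)]
matched = flagged.merge(closed_with_change, on="taxpayer_id")

print(len(matched), matched["tax_change"].sum())  # matched cases, total change
```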
While IRS has adopted processes to help ensure proper filing of other disclosures, such as Form 8886, it has not extended these to Form 8886-T. In 2011, we recommended that IRS establish a process to periodically check whether Form 8886 filers met their reporting obligations. In response to that recommendation, IRS implemented a new indicator and matching process to review whether filers met their obligations. IRS officials told us that similar controls do not exist for Form 8886-T filings. TE/GE officials said one way that they ensure forms are filed is through penalties, yet they said they have never assessed the penalty for nonfiling of Form 8886-T. TE/GE officials also said that another way they ensure proper filing is through education and cited documentation such as IRS Publication 557, Tax-Exempt Status for Your Organization. IRS said it provides other information through its website informing charities of their responsibilities. Despite this education effort, it may still be the case that a lack of knowledge about filing requirements reduces the number of tax-exempt entities that file. An IRS official suggested that charities may not have the financial sophistication to realize that they are involved in a prohibited tax shelter transaction and therefore are required to file a Form 8886-T. Without a better understanding of the reasons behind the low filing, IRS cannot be reasonably certain that tax-exempt entities are following the law on filing Form 8886-T, nor can it ensure tax-exempt entities’ compliance.

IRS Data Do Not Facilitate Some Analysis of Abusive Tax Schemes Involving Tax-Exempt Entities

We were able to use the IRS audit and disclosure data to perform certain analyses on abusive tax schemes with tax-exempt entities for this report, but data deficiencies prevented us from undertaking more complete analysis and hinder management’s use of the data. These deficiencies—which are inconsistent with internal control standards for quality information—weaken the divisions’ ability to identify abusive tax schemes involving tax-exempt entities, as well as the Executive Steering Committee’s ability to make decisions about how to address abusive tax schemes across divisions and develop compliance strategies. First, the descriptions of project codes in audit data do not always clearly identify abusive tax schemes across operating divisions. For example, one code LB&I uses to identify abusive transactions in audit data is “domestic tax shelters.” TE/GE uses two codes, both titled “Abusive Tax Avoidance Transactions,” and SB/SE uses a code titled “Tax Shelter List Projects.” IRS officials provided no additional documentation on what these codes mean. The lack of specificity hinders analyses of abusive tax schemes involving tax-exempt entities. IRS officials said that they do not keep an overall list of project codes that cover abusive schemes involving tax-exempt entities. This limits their ability to readily assess and manage audits of abusive tax schemes involving tax-exempt entities. However, they did say such a list, which would be effective in certain circumstances or operating divisions, might be possible to produce. Cross-operating division analysis could enhance the Executive Steering Committee’s objective to assess emerging issues and develop policy responses. Second, we found that there were no project codes consistently identifying abusive schemes involving tax-exempt entities that crossed operating divisions.
Instead, IRS officials said each operating division assigned its own project codes that identify abusive tax schemes. Having no uniform way to identify abusive schemes across the operating divisions makes analysis of schemes that overlap with different operating divisions’ responsibilities problematic and inhibits IRS from accomplishing its objectives. The lack of cross-divisional project codes echoes findings from our 2011 report on abusive tax avoidance transaction data, where we found that some abusive tax avoidance transaction data were reported inconsistently across IRS divisions. We said in that report that without comprehensive or consistent information, IRS does not have the best information to decide how to evaluate the results of its audits. Our recommendation to separately track the tax amounts recommended, assessed, and collected between abusive tax avoidance transaction issues and nonabusive transaction issues remains open because IRS said resource and capability constraints preclude it from capturing information in this way. Similarly, IRS officials told us it would be costly and logistically prohibitive to create new project codes identifying abusive schemes involving tax-exempt entities that crossed divisions. However, as we said in our previous report, tracking audit results for abusive and nonabusive transactions would provide IRS management with the data needed to make more informed decisions about program effectiveness and resource allocation. If, as IRS indicated above, it would be possible to make an overall list of codes, such a list could be used to achieve the same results as adjusting the database system.

IRS Does Not Use Available Tools to Identify Abusive Schemes with Tax-Exempt Entities That Cut Across Operating Divisions

Although IRS does not identify some data that would facilitate analysis of abusive tax schemes involving tax-exempt entities spanning the operating divisions, we found evidence that TE/GE’s Returns Inventory and Classification System (RICS) could at least partially support analysis and monitoring of audit data across the operating divisions. For example, the RICS user manual states that RICS can access a variety of forms outside of TE/GE’s purview, such as Form 1065 and the Form 1120 series tax returns, which typically are handled by SB/SE or LB&I, respectively. While TE/GE uses RICS, officials we spoke with at LB&I, for example, were not familiar with RICS’ capabilities. TE/GE officials said IRS would have to study whether using RICS in other divisions would generate productive audits. As we discussed earlier in this report, IRS’s Research, Applied Analytics and Statistics office also has developed the capability to analyze narrative information, which it has tested on the Form 8886. However, this analytical tool is not being used operationally to review the Form 8886 or any other disclosure report. Our analysis shows that the tool has the potential to help IRS better search disclosure reports for additional information about transactions that could help IRS identify potentially abusive schemes involving tax-exempt entities. For example, it can be used to identify keywords in disclosure reports that could help determine whether a tax-exempt entity was a party to a reportable transaction that warrants further investigation for compliance. However, IRS officials told us they have no plans to use this tool but agreed that it may be beneficial.
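Returning to the overall list of project codes discussed above, such a list could work as a simple crosswalk applied at analysis time rather than a change to the underlying database systems. A hedged sketch follows; every code shown is invented for illustration.

```python
# Hypothetical crosswalk: each division's own project codes that cover
# abusive schemes involving tax-exempt entities map to one common flag.
# All codes are invented; real codes would come from the divisions' lists.

CROSSWALK = {
    "TEGE": {"0001", "0002"},   # e.g., "Abusive Tax Avoidance Transactions"
    "SBSE": {"1101"},           # e.g., "Tax Shelter List Projects"
    "LBI": {"2201"},            # e.g., "domestic tax shelters"
}

def involves_tax_exempt_scheme(division, project_code):
    return project_code in CROSSWALK.get(division, set())

audit_records = [("SBSE", "1101"), ("LBI", "9999"), ("TEGE", "0002")]
linked = [r for r in audit_records if involves_tax_exempt_scheme(*r)]
print(linked)  # a cross-division view without changing the source systems
```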
IRS officials also told us that TE/GE does not routinely review Form 8886 filings that show tax-exempt entities as being part of the reported transaction because the data are not clear indicators of noncompliance. However, by not using these data for possible leads, IRS may be missing opportunities to identify known abusive schemes, which is inconsistent with internal controls on using quality information. Again, our analysis of the Form 8886 filings combined with audit results suggests that there is potential for IRS to use the Form 8886 to identify potential noncompliance. Without conducting such an analysis, IRS may be missing opportunities to identify leads on tax-exempt entities in abusive tax schemes. Quality Control over Cross-Operating Division Referrals Is Limited We previously showed that abusive tax schemes involving tax-exempt entities can involve multiple types of entities that cross IRS’s operating divisions’ areas of responsibility. We also showed that IRS relies on auditors to refer potentially noncompliant entities involved in an abusive scheme to the responsible operating division. Consequently, IRS needs assurance that auditors make referrals when appropriate. However, IRS lacks a control to ensure that auditors make referrals correctly. An IRS audit official said that managers are tasked with reviewing auditors’ work and identifying referrals that should have been made during case closings. However, there is no documented guidance specifically directing managers to assess whether auditors correctly identified referrals involving abusive tax schemes, reducing assurance that auditors will correctly identify such referrals and route them appropriately. IRS’s audit quality review systems, which generally measure how well auditors follow procedures from a random sample of audits, also do not assess whether referrals of abusive schemes involving tax-exempt entities are properly identified and routed. The lack of guidance to ensure auditors make referrals across the operating divisions increases the risk that the responsible division will not be alerted to potential noncompliance and thus will not make further assessments for enforcement action. Absent specific guidance, there also is increased risk that even when one entity in an abusive tax scheme is audited, other entities in the scheme may go unexamined. This is inconsistent with internal control standards for control activities. Conclusions Abusive tax schemes involving tax-exempt entities pose enforcement challenges for IRS, as schemes can cross IRS’s operating divisions’ areas of responsibility and evolve over time. While IRS has established programs to help identify new abusive schemes, opportunities exist to better ensure that IRS accomplishes its objectives of identifying existing and emerging schemes. In particular, opportunities exist for IRS to improve the quality of its data and how it uses existing data to manage its programs. Because the codes IRS uses to identify abusive schemes are not consistent across the operating divisions, its efforts to formulate policy across divisions may be more difficult. Also, IRS may not be making the best use of its data by not using existing tools that may be helpful in analyzing data to identify abusive schemes involving tax-exempt entities. Next, IRS has an opportunity to reduce the risk that tax-exempt entities are noncompliant by assessing the number of Form 8886-T filings.
Finally, referrals across divisions play an important role in IRS’s ability to identify schemes with tax-exempt entities, but IRS’s internal control activities over referrals are limited. By taking actions to further strengthen its internal controls, IRS could enhance its efforts to identify and combat abusive tax schemes that involve tax-exempt entities. Recommendations for Executive Action We are making the following five recommendations to IRS: The Commissioner of Internal Revenue should undertake a risk assessment of tax-exempt entity Form 8886-T filings. Based on the findings of the risk assessment, IRS should then determine whether steps are needed to increase compliance, such as increased outreach to tax-exempt entities or assessment of nonfiling penalties. (Recommendation 1) The Commissioner of Internal Revenue should link audit data on abusive tax schemes involving tax-exempt entities across operating divisions and use the linked data to assess emerging issues and develop policy responses. (Recommendation 2) The Commissioner of Internal Revenue should test the ability of the Returns Inventory and Classification System to facilitate analysis and monitoring of audit data across the operating divisions and to support IRS’s enforcement objectives. (Recommendation 3) The Commissioner of Internal Revenue should use existing data analytic tools to further mine Form 8886 and Form 8918 data, which could be used to find audit leads on tax-exempt entity involvement in potentially abusive tax schemes. (Recommendation 4) The Commissioner of Internal Revenue should develop guidance to help managers ensure referrals about abusive schemes involving tax-exempt entities are made across operating divisions. This could be accomplished by, for example, adopting specific guidance for audit managers to look for referral accuracy in their reviews of case closings. (Recommendation 5) Agency Comments We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. On August 16, 2019, the IRS Deputy Commissioner for Services and Enforcement provided written comments stating that IRS agreed with GAO’s recommendations. In the letter, which is reproduced in appendix VII, the Deputy Commissioner said that GAO’s recommendations would provide IRS with additional opportunities for improving the identification of tax schemes involving exempt entities. IRS also sent us technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Scope and Methodology To describe ways in which taxpayers have abused a tax-exempt entity through abusive tax schemes, we conducted interviews with knowledgeable professionals and researchers.
We chose the interview sources by reviewing relevant articles from academic databases and reaching out to professional organizations. We narrowed our list of examples of abusive tax schemes by focusing on those professionals and researchers who: had recent professional experience as an attorney, accountant, or other industry professional with a firm specializing in tax-exempt entities or tax shelters; had recent professional experience in nonprofit management or affiliation with professional associations specializing in nonprofit organization or oversight; had published books, articles, or other research on tax-exempt entities or tax shelters within the last 10 years; were recommended to us by a relevant professional organization, such as the American Bar Association or the American Institute of Certified Public Accountants; work for or previously worked in charity tax enforcement at the state level; previously worked for the Internal Revenue Service (IRS), specifically in the Tax Exempt and Government Entities Division (TE/GE); or would (in our professional judgment) be able to speak on the topics of abusive tax avoidance schemes or IRS investigations of tax-exempt entities. We conducted literature and court case reviews using academic and legal databases and covered years 2008 through 2018 using search terms such as “tax avoidance,” “tax-exempt,” and “shelter.” We combined the information found in interviews with reviews of relevant literature and court cases. We categorized the observations from our research by the following criteria: the observation involved multiple entities, at least one of which was tax-exempt; and it involved either a transaction or scheme already known to IRS, such as a listed transaction or transaction of interest, or a transaction mentioned in expert interviews. We then applied the following factors to make the final three choices for the examples: how representative the example was of abusive tax schemes involving tax-exempt entities; how well-documented we found the example to be in literature reviews; how recently the example had been used by abusers; and how much impact the example had in terms of prevalence and tax revenues. To examine trends in IRS’s compliance and the characteristics of taxpayers audited for using abusive tax schemes involving tax-exempt entities, we collected data from the following IRS business operating divisions that conduct audits on abusive transactions: (1) TE/GE, (2) Small Business/Self-Employed (SB/SE), and (3) Large Business and International (LB&I). We received data extracts from the following computer data systems: (1) the Returns Inventory and Classification System, utilized by TE/GE; (2) the Automated Information Management System Centralized Information System (A-CIS), utilized by SB/SE and LB&I; and (3) the Compliance Data Warehouse (CDW), utilized by SB/SE and LB&I. IRS performs a number of quality control steps to verify the internal consistency of the Returns Inventory and Classification System, A-CIS, and CDW data. Additionally, we reviewed documentation from the operating divisions on the data, discussed the data with IRS officials, and conducted electronic reliability testing. For example, we verified the completeness of analysis variables and the date ranges for our analysis, as illustrated in the sketch below. We excluded 178 records from our analysis of SB/SE data because they were not within our date range. Based on our review, we believe the data are sufficiently complete and accurate for our purposes.
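The following minimal Python sketch illustrates the completeness and date-range checks described above, assuming an audit extract loaded as a pandas DataFrame; the file and column names are hypothetical, and the actual extracts and checks differ in detail.

import pandas as pd

# Load the extract; the file and column names here are hypothetical.
audits = pd.read_csv("sbse_extract.csv", parse_dates=["audit_close_date"])

# Completeness check: key analysis variables should not be missing.
print(audits[["project_code", "audit_close_date"]].isna().sum())

# Date-range check: keep only records closed within the study period.
in_range = audits["audit_close_date"].between("2008-01-01", "2017-12-31")
print("Excluded", (~in_range).sum(), "out-of-range records")
audits = audits[in_range]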
We identified audits with potential tax-exempt entities by selecting audits based on IRS project codes that IRS agreed were relevant to determine the minimum number of audits conducted on abusive schemes involving tax-exempt entities. We also matched the SB/SE and LB&I data with IRS’s Form 8886, Reportable Transaction Disclosure Statement, data file of the tax-exempt records. We used these data to produce descriptive statistics on audit and taxpayer characteristics and IRS compliance efforts for 2008 through 2017. Tax return information came from Form 1040, U.S. Individual Income Tax Return; Form 1120, U.S. Corporation Income Tax Return; and Form 990, Return of Organization Exempt from Income Tax. Dollar amounts reported for the 10-year period have been adjusted for inflation to 2018 dollars based on a fiscal year Gross Domestic Product price index. Separately, we compiled descriptive statistics on disclosures of reportable transactions that also involved tax-exempt entities from Form 8886 and Form 8918, Material Advisor Disclosure Statement. IRS’s Office of Tax Shelter Analysis provided the data for Forms 8886 and 8918. We also performed an analysis of the narrative portions of Form 8886 from tax year 2017 to identify more information about the descriptions of the reported transactions. We identified that IRS could conduct Python optical character recognition (OCR) analysis of the text fields on IRS Form 8886. We worked with officials at IRS’s Research, Analysis and Statistics office on using the Python computer programming language to conduct the analysis. IRS ran the OCR using keywords associated with 29 different tax-exempt organizations we identified. The keywords we used were based on characteristics of tax-exempt entities, such as “charity” and “foundation”—terms found in 26 U.S.C. Section 501. We received summary tables and copies of PDFs of all Form 8886-T, Disclosure by Tax-Exempt Entity Regarding Prohibited Tax Shelter Transaction, for tax years 2004 through 2016. We checked the reliability of IRS’s summary tables and manually reviewed the PDF submissions to generate descriptions of the Form 8886-T data. We conducted reliability testing for all of the data we used for this objective. For the audit and tax return data, we interviewed relevant IRS officials and compared our statistical runs with publicly available statistics. For the Form 8886 and Form 8918 disclosure data, we interviewed relevant IRS officials. For the Form 8886-T data, we compared the summary tables IRS provided with the PDFs of the original Form 8886-T submissions. To assess how IRS identifies emerging abusive tax schemes and to identify potential improvements, we reviewed documentation on programs that help IRS identify possible abusive tax schemes involving tax-exempt entities. We identified the programs by reviewing IRS documentation, including the Internal Revenue Manual, in combination with IRS’s determination of relevant programs (see appendix VI for more details about these programs). We then identified criteria appropriate for assessing the programs’ alignment with selected principles from Standards for Internal Control in the Federal Government (Green Book) and leading practices from our Fraud Risk Management Framework. To select these criteria, we reviewed the Green Book and Fraud Risk Management Framework to identify principles relevant to specific aspects of IRS’s programs for identifying and initiating enforcement actions against abusive tax schemes involving tax-exempt entities.
IRS agreed that these criteria were appropriate. The following list shows the criteria we selected through this process.
Green Book Principle 3: Establish structure, responsibility, and authority
Green Book Principle 4: Demonstrate a commitment to recruit, develop, and retain competent individuals
Green Book Principle 7: Identify, analyze, and respond to risks
Green Book Principle 8: Assess fraud risk
GAO Fraud Risk Management Framework Overarching Concept 1.2 (structure)
GAO Fraud Risk Management Framework Overarching Concept 2.1 (plans exist to assess fraud)
Green Book Principle 10: Design control activities
Green Book Principle 13: Use quality information
Green Book Principle 14: Communicate internally
Green Book Principle 16: Perform monitoring activities
After establishing appropriate criteria, two analysts independently reviewed the evidence and determined whether it aligned with the criteria for the programs, based on the attributes of the Green Book criteria and Fraud Risk Management Framework guidance. We also considered how the programs met TE/GE’s objective to “promote the highest degree of voluntary compliance with the statutes governing qualification of plans and exemption of certain types of organizations from tax and to determine the extent of compliance and the causes of noncompliance with the tax laws by plans and organizations,” and IRS’s objective to “identify new types of tax transactions or promotions that are either abusive or potentially abusive requiring different levels of coordination and varying strategies.” We determined a criterion was met only if all of the programs under review offered sufficient support. Table 1 shows how we assessed the programs we reviewed against these criteria. Appendix II: Types of Tax-Exempt Organizations Listed in Internal Revenue Code Section 501 The federal tax code provides a variety of tax benefits to organizations often referred to as “tax exempt.” This appendix focuses on organizations or entities qualifying for a tax-exempt status under 26 U.S.C. § 501. We discussed the tax benefits and requirements for different types of tax-exempt organizations in our 2014 report on oversight of charitable organizations. In addition to section 501, there are various other scattered provisions that give a full or partial tax exemption to certain specific types of entities and income. Section 501 distinguishes charitable organizations, also known as 501(c)(3) organizations (after the subsection in which they are defined), from all other organizations qualifying for an exemption under section 501. Organizations that qualify for an exemption under section 501 but are not charitable organizations have been referred to as mutual benefit organizations or non-charitable nonprofits. Section 509 further divides charitable organizations between those that are private foundations and all other charitable organizations, and private foundations are divided between operating and nonoperating foundations in section 4942. Appendix III: Tax-Exempt and Government Entities Budget over Time Figure 9 shows the declines in the Internal Revenue Service’s Tax-Exempt and Government Entities Division’s budget since an increase from fiscal years 2009 through 2011.
Appendix IV: Financial Transactions the Internal Revenue Service Requires Taxpayers to Report The Internal Revenue Service (IRS) defines a transaction as one that includes all the factual elements relevant to the expected tax treatment of any investment, entity, plan, or arrangement, and it includes any series of steps carried out as part of a plan. Department of the Treasury (Treasury) regulations require that certain transactions be registered and that lists of investors be maintained by parties who organize or sell interests in the transaction. A transaction becomes “reportable” (i.e., a taxpayer must disclose it to IRS on Form 8886) when it falls under one or more of the following categories: listed, confidential, contractual protection, loss transactions, or transactions of interest. Listed transactions: A listed transaction is reportable when it is the same as or substantially similar to one of the types of transactions that IRS has determined to be an avoidance transaction. IRS provides a detailed list of the 36 recognized listed transactions on its website. Confidential transactions: A confidential transaction is offered to a taxpayer or a related party under conditions of confidentiality and is a type of transaction for which a taxpayer has paid a minimum advisor fee. A transaction is considered offered under conditions of confidentiality when two conditions are met: the advisor places a limitation on the taxpayer’s disclosure of the tax treatment or tax structure of the transaction, and the limitation on disclosure protects the confidentiality of the advisor’s tax strategies. The transaction is treated as confidential even if the conditions of confidentiality are not legally binding on the taxpayer. Contractual protection transactions: A contractual protection transaction is a transaction for which a taxpayer or a related party has the right to a full or partial refund of fees if all or part of the tax consequences from the transaction are not sustained. It also includes a transaction for which fees are contingent on a taxpayer’s realization of tax benefits from the transaction. Loss transactions: A loss transaction is a transaction that results in a taxpayer claiming a loss. The type of taxpaying individual or entity determines the applicable amount of the loss. The types of loss transactions IRS has described are as follows:
Individuals: at least $2 million in any single tax year or $4 million in any combination of tax years.
Corporations (excluding S corporations): at least $10 million in any single tax year or $20 million in any combination of tax years.
Partnerships with only corporations (excluding S corporations) as partners: at least $10 million in any single tax year or $20 million in any combination of tax years, whether or not any losses flow through to one or more partners.
All other partnerships and S corporations: at least $2 million in any single tax year or $4 million in any combination of tax years, whether or not any losses flow through to one or more partners or stakeholders.
Trusts: at least $2 million in any single tax year or $4 million in any combination of tax years, whether or not any losses flow through to one or more beneficiaries.
Transactions of interest: A transaction of interest is one that IRS and Treasury believe to have the potential for tax avoidance or evasion, but which lacks enough information for IRS and Treasury to determine whether the transaction should be identified as a tax avoidance transaction.
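The loss thresholds above amount to a simple classification rule. The following minimal Python sketch encodes each entity type’s single-year and combined-years amounts; it is a hypothetical illustration of the rule, not an IRS tool, and it assumes losses are supplied per tax year in dollars.

# (single-year, combined-years) thresholds in dollars, from the list above.
THRESHOLDS = {
    "individual": (2_000_000, 4_000_000),
    "corporation": (10_000_000, 20_000_000),
    "corporate_partnership": (10_000_000, 20_000_000),
    "other_partnership_or_s_corp": (2_000_000, 4_000_000),
    "trust": (2_000_000, 4_000_000),
}

def is_reportable_loss(entity_type, losses_by_year):
    """Return True if claimed losses meet the disclosure threshold."""
    single, combined = THRESHOLDS[entity_type]
    return max(losses_by_year) >= single or sum(losses_by_year) >= combined

# No single year reaches the individual $2 million threshold, but the
# $4.5 million combination exceeds the $4 million combined threshold.
print(is_reportable_loss("individual", [1_500_000, 3_000_000]))  # True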
Appendix V: Internal Revenue Service Tax Return and Audit Data Tables 2 and 3 below show Internal Revenue Service (IRS) data for forms filed and audited, and the audit coverage rate, for individual income tax and corporate tax returns during fiscal years 2006 to 2017. Table 4 shows the number of returns processed and audited, and the audit coverage rate, for tax-exempt organizations during fiscal years 2006 to 2015. All three tables show declines in audit coverage rates: the decline occurred for individual income tax returns after fiscal year 2011 and for corporate income tax returns after fiscal year 2012. The audit coverage rate for tax-exempt organizations’ returns declined from fiscal years 2013 to 2015, the last fiscal year for which we have complete data on tax-exempt organization returns. Appendix VI: Descriptions of Internal Revenue Service Programs Addressing Abusive Schemes with Tax-Exempt Entities Table 21 lists 10 IRS programs that may identify or conduct enforcement action on abusive tax schemes that involve a tax-exempt entity. Appendix VII: Comments from the Internal Revenue Service Appendix VIII: GAO Contact and Staff Acknowledgments GAO Contact James R. McTigue, Jr. (202) 512-9110 or mctiguej@gao.gov. Staff Acknowledgments In addition to the contact named above, Kevin Daly (Assistant Director); Susan Baker; Jehan Chase; Sara Daleski; Steven Flint; Eric Gorman; Gina Hoover; Andrew Howard; Edward Nannenhorn; Kevin Newak; Carolyn Ours; Robert Robinson; Dylan Stagner; and Elwood White made significant contributions to this review. Also contributing to this report were Toni Gillich; Sarah Gilliland; John Hussey; Jessica Lucas-Judy; Cynthia Saunders; Stewart Small; Rebecca Shea; and Janet Temko-Blinder.
Why GAO Did This Study Abusive tax schemes contribute to the tax gap and threaten the tax system's integrity. When abusive tax schemes involve tax-exempt entities, they also can erode the public's confidence in the charitable sector. GAO was asked to review what is known about abusive transactions involving tax-exempt entities and how IRS addresses them. This report, among other things, (1) describes ways in which taxpayers have abused an entity's tax-exempt status; (2) examines trends in IRS's compliance efforts; and (3) assesses how IRS identifies emerging abusive tax schemes involving tax-exempt entities. GAO reviewed research on, and interviewed relevant professionals and researchers about, tax schemes involving tax-exempt entities; compiled statistics from IRS audit and disclosure data; and compared documentation and testimony from IRS officials on IRS programs and guidance from its operating divisions with certain internal control and GAO fraud framework criteria. What GAO Found Taxpayers have used a variety of abusive tax schemes involving tax-exempt entities. In some schemes, the tax-exempt entity is complicit in the scheme, while in others it is not. For example, an abusive tax scheme could involve multiple donors grossly overvaluing charitable contributions, where the tax-exempt entity is not part of the scheme. Conversely, some patient assistance programs—which can help patients obtain medical care or medications—have been used by pharmaceutical manufacturers to make charitable donations that can be viewed as furthering private interests. Internal Revenue Service (IRS) audits of abusive tax schemes are trending downward, as shown in the figure below for audits by IRS's Large Business and International division. This trend has occurred amid generally declining IRS resources and corresponds with an overall decrease in audit activity by IRS over recent years. IRS has a variety of programs working collectively to identify abusive tax schemes involving tax-exempt entities, but some internal control weaknesses exist in its approach. For example, GAO found three ways that IRS data or programs were inconsistent with internal control standards for using quality information. First, database project codes used for identifying data on abusive tax schemes are not linked across IRS's audit divisions and do not consistently identify whether a tax-exempt entity was involved. Second, IRS has not leveraged a database with cross-divisional information to facilitate its analysis and monitoring of audit data across divisions. Finally, IRS has not used existing analytic tools to mine the narrative fields of tax forms. Doing so could provide audit leads on abusive schemes involving tax-exempt entities. These deficiencies inhibit IRS's ability to identify abusive tax schemes and develop responses to those schemes. What GAO Recommends GAO is making five recommendations to IRS to strengthen its internal controls, including that it link data across operating divisions, test the ability of a database to facilitate analysis of audit data, and use existing analytic tools to further mine information on tax forms. In commenting on a draft of this report, IRS agreed with all of GAO's recommendations.
Background Automation, Artificial Intelligence, and Advanced Technologies Throughout history, new technologies have transformed societies. Many technological advances, ranging from the steam engine to electricity and personal computers, have enhanced productivity and improved societal standards of living. At the same time, many technological advancements have led to increases in automation—modifying processes to become more automatic by reducing human involvement—and corresponding changes in the workforce. For example, researchers have noted that automation has replaced tasks performed by workers and also increased production, creating a greater demand for other types of workers. Although automation has historically been a labor disrupter in manufacturing and physical work, various researchers have observed that recent progress in fields such as artificial intelligence (AI) and robotics is enabling machines to perform cognitive tasks currently performed by humans. Artificial intelligence refers to machines and computers that attempt to mimic various aspects of human intelligence, as we have reported. The field of AI can be traced back to the 1950s. Early AI often consisted of expert systems programmed by humans to perform predefined tasks. This form of AI resulted in some degree of productivity gains and remains an active area of development. However, numerous factors, primarily the trends underlying big data (i.e., increased data availability, storage, and processing power), have contributed to rapid innovation and accomplishments in AI in recent years. Present-day AI innovation centers more on machine learning, including deep neural network architectures, in which systems are trained against observational or simulated outcomes—applications include language translation and machine vision (i.e., systems that use cameras, radar, or lasers to observe their surroundings or recognize content). Industrial robots and robotic machinery are often more comparable to expert systems that are programmed to perform predefined tasks, but they can also incorporate machine learning, such as having machine vision capabilities (e.g., object recognition). Below are some examples of expert system and machine learning applications of artificial intelligence.
Examples of expert system applications of AI:
software programs that prepare tax filings or schedule logistics; and
industrial robots that perform predefined or routine tasks, such as lifting, placing, and welding pieces of metal together.
Examples of machine learning applications of AI:
software that uses a training dataset to “learn” how to read information from a form filled out by a person;
collaborative robots that can sense when they touch a physical obstruction and shut down to safely work alongside humans;
industrial robots with machine vision incorporated to identify and pick up specific parts from a collection of randomly strewn pieces; and
automated guided vehicles that transport materials around a production plant and use cameras and radar to navigate independently and re-route around obstacles.
Advanced technologies, including AI and other technological drivers of workforce changes, are continually progressing and new developments emerge regularly. For example, automated vehicles have varying levels of autonomy. Similarly, while robots have existed for decades, today’s generation of robots may be equipped with machine vision and learning capabilities that enable them to perform a more expansive array of tasks.
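The following minimal Python sketch contrasts the two approaches; it is purely illustrative, with invented data. An expert system applies a rule a human programmed in advance, while a machine learning system infers a comparable rule from labeled examples.

from sklearn.tree import DecisionTreeClassifier

# Expert system style: a predefined rule for flagging oversized parts,
# with the threshold chosen by a human expert.
def flag_part_rule(weight_kg):
    return weight_kg > 25.0

# Machine learning style: learn a comparable rule from labeled examples.
weights = [[5.0], [12.0], [28.0], [40.0]]  # training data (kg)
flags = [0, 0, 1, 1]                       # labels a human provided once
model = DecisionTreeClassifier(max_depth=1).fit(weights, flags)

print(flag_part_rule(30.0), model.predict([[30.0]]))  # True [1]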
How, when, or whether technologies progress from development to commercialization (i.e., readiness for adoption), and how, when, or whether firms adopt the technologies, are generally dependent on context-specific considerations, which are difficult to predict. To better understand these developments and how they affect the economy, the National Academies report recommended developing three indexes (technology progress index; AI progress index; and organizational change and technology diffusion index) to measure technology progress and the extent of adoption. The study suggested that indexes could be valuable for identifying which fields are advancing rapidly and what benchmarks might indicate the imminence of significant economic impact, as well as for tracking and predicting the types of human tasks that can be automated and the impacts of technology adoption by industry. Stanford University’s AI Index project is another initiative that aims to track, collate, and visualize data related to artificial intelligence. The data collected by the AI Index measure, among other things, the volume of AI activity (e.g., published papers, course enrollment, AI-related startups, job openings) and technical performance (e.g., object detection and speech recognition). However, the potential uses and limitations of the data being compiled are yet to be seen, as this initiative is still in its early stages. Projected Workforce Effects of Advanced Technologies While national employment data measure jobs and workers by occupation and industry, the adoption of advanced technologies generally affects specific work tasks, and its effects can materialize in a variety of ways. As shown in figure 1, industries are made up of various occupations, which in turn are formed by a group of jobs. Underlying all, jobs are composed of a collection of varied work tasks. By analyzing tasks within jobs or occupations to determine their susceptibility to automation, a number of studies have developed models to estimate the future workforce effects of advanced technology adoption. The three example studies below each developed similar models, though differences in methods and data sources produced varying conclusions about the number of jobs that may be automated in the future. In a 2016 article, researchers Frey and Osborne estimate that 47 percent of total U.S. employment is in occupations that are at high risk of automation over the next decade or two (i.e., by 2030). For example, the authors observe both that industrial robots will be able to perform a wider scope of non-routine manual tasks and that a substantial share of employment in services, sales, and construction occupations exhibits high probabilities of automation. A 2017 report by the McKinsey Global Institute estimates that 23 percent of total U.S. work hours could be automated by 2030, or as much as 44 percent under other assumptions. The report predicts that while labor demand will enable some re-employment of displaced workers, up to one-third of the workforce may need to change occupational categories. In a 2016 paper, researchers Arntz, Gregory, and Zierahn estimate that 9 percent of all U.S. workers hold jobs that are at high risk of automation. The authors observe that susceptibility to automation is lower for jobs that require cooperating with or influencing others. Studies by Autor and others also develop theoretical models exploring the effects of automation. For example, they noted that while automation can substitute for some tasks, it can also complement others.
This can lead to increasing value for tasks that require other attributes like creativity and intuitive judgment. These models hypothesize that automation may have a net positive effect on employment, or at least on employment in certain sectors, which is consistent with historical employment trends. However, researchers have also noted that machine learning may affect different tasks than earlier forms of automation and may be less likely to automate low-wage jobs—though low-wage workers may be affected in other ways. Workforce Effects of Advanced Technologies in Broader Context Although the models discussed above represent ways of identifying jobs that may be affected by the adoption of advanced technologies, they do not provide a model for tracking the current or to-date workforce effects of technology adoption. As the recent National Academies report states, “making forecasts about social phenomena is perilous… doing so with respect to the fast-changing and dynamic area of technology is even more challenging.” According to a different project by some of these same experts, several factors unrelated to whether a task or job could be automated contribute to these challenges. For example, technologies may substitute for human labor in some tasks, but:
they may also complement human labor in other tasks—increasing the demand for, or value of, human labor (e.g., the automation of calculation tasks leading to increased demand for human programmers);
prices and demand for products may counteract this human labor substitution (e.g., technology reducing the price of air travel, and thus leading to increased demand for flights, and thus increased employment in the aviation industry); and
firms may redesign operations in response to the substitution in ways that lead to employment increases or decreases that are greater than the direct substitution.
As discussed in the National Academies report and elsewhere, researchers have tried to disentangle workforce effects in various ways, such as analyzing productivity data to examine workforce trends in the context of other economic factors, such as globalization. As the National Academies report observes, “Predictions that new technologies will make workers largely or almost entirely redundant are as old as technological change itself…. However, predictions of widespread, technologically induced unemployment have not come to pass, at least so far.” Since recovering from the recession of 2007-2009, the economy has experienced low unemployment rates—4.0 percent in January 2019—despite continued strides in advanced technologies. However, other indicators have not recovered. For example, the labor force participation rate—the percentage of the population that is either employed or seeking work—declined significantly through the recession and has generally remained at this lower level. This may indicate that the post-recession decline in the unemployment rate over-represents the health of the labor market, according to BLS. Advanced technologies and automation may also affect workers in other ways, beyond potential changes in the workplace, such as by reducing production costs and thus lowering the prices of consumer goods. No Comprehensive Data Exist to Link Employment Trends to Advanced Technology Adoption, but Analyses Suggest Relationships There are currently no comprehensive data on firms’ adoption and use of advanced technologies. As a result, researchers have difficulty determining whether changes in the U.S.
workforce observed in existing employment data are related to advanced technologies. The National Academies report states that federal household and employer surveys, such as the CPS, ACS, and OES, provide useful information about changes to the occupational mix of the U.S. workforce over time. However, these data cannot identify the causes of employment shifts. For example, these data do not identify whether an employment decline in one occupation is due to jobs being replaced as a result of automation, or to other factors unrelated to automation. Other federal data, such as the Job Openings and Labor Turnover Survey, provide useful information on employment turnover and opportunities. However, although these data are available by industry sector and firm size, the data do not capture reasons for layoffs and discharges, and thus cannot be linked to advanced technologies. Employment Trends and Characteristics of Workers in Jobs Susceptible to Automation In the absence of comprehensive data that definitively link employment trends to technology adoption, we analyzed occupations that researchers Frey and Osborne identified as being susceptible to automation (see sidebar) to determine whether changes due to advanced technologies are appearing in employment data. By exploring concentrations of these occupations in industries, job displacements in these occupations, and the characteristics of workers in these occupations, we found minor indications that advanced technologies are changing the workforce and could affect some worker populations. However, the conclusions that can be drawn from these analyses are limited by the unpredictability of when, if, or how automation materializes—e.g., whether worker positions are eliminated or shifted to other non-automated tasks. Industries with higher concentrations of jobs susceptible to automation were more likely than others to have experienced significant growth in their concentration of tech jobs from 2010 to 2016, according to our analysis of employment data from the American Community Survey. For example, as shown in figure 2, the plastics product manufacturing industry has a relatively high concentration of jobs susceptible to automation. Many of these jobs are in production occupations. From 2010 through 2016, this industry experienced about 11 percent annual growth in tech jobs (i.e., jobs in the fields of computing, engineering, and mathematics). More than half of this growth was the result of increases in industrial engineers, engineering technicians, and miscellaneous engineers. As we observed at some firms we visited, some of these engineers may have been hired to program or maintain newly installed robots. However, the data do not provide this level of information about job tasks. Similar dynamics could also be occurring in other industries. Across all 69 industries that had statistically significant changes in the concentration of tech jobs, we found a positive, though weak, correlation with the concentration of jobs susceptible to automation (see fig. 2). This suggests that growth in tech jobs may be an indicator of industries’ preparation for, or adoption of, advanced technologies. However, given the complex causes of employment changes, there could be other reasons for tech job growth in these industries that are unrelated to firms’ adoption of advanced technologies.
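A minimal Python sketch of this kind of industry-level comparison follows, assuming one row per industry with hypothetical column names for the share of jobs susceptible to automation, the growth in tech jobs, and a precomputed statistical-significance flag; it is illustrative only, not the analysis code used for this report.

import pandas as pd

industries = pd.read_csv("industry_measures.csv")  # hypothetical input

# Keep the industries whose change in tech-job concentration was
# statistically significant (69 industries in our analysis).
sig = industries[industries["tech_change_significant"]]

# A positive, if weak, Pearson correlation would be consistent with
# tech-job growth serving as a rough indicator of technology adoption.
corr = sig["share_susceptible"].corr(sig["tech_job_growth"])
print(f"Correlation across {len(sig)} industries: {corr:.2f}")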
The growth in tech jobs in certain industries suggests firms in these industries may be using more advanced technologies, which could also signal that jobs susceptible to automation are being replaced. However, our analysis of ACS data showed no correlation between an industry having a higher concentration of jobs susceptible to automation and employment changes in that industry (i.e., total employment increases or decreases). We also found no meaningful differences in job losses, according to our analysis of employment data from the Current Population Survey’s Displaced Worker Supplement. Specifically, the relative rate at which workers in occupations susceptible to automation lost a job because their position or shift was abolished or there was insufficient work for them to do was not meaningfully different from that of workers in other occupations. There could be a number of reasons we did not find a relationship between susceptibility to automation and employment changes in both of these analyses, including:
a relationship does not exist;
such a relationship is too complex to measure in this way (e.g., automation may lead to decreases in employment in some industries, while also leading to increases in employment in other industries due to improved competitiveness, productivity, and profitability);
it is too soon to observe the employment effects of automation (e.g., growth in tech jobs in an industry may be a leading indicator of employment disruption); or
our analysis covered a period of overall economic growth, which could obscure or overwhelm other employment trends.
As our analyses suggest, existing data cannot predict with certainty when or if automation will materialize in the workforce. However, the tendency of particular worker groups to hold jobs susceptible to automation suggests that some communities may be disproportionately affected by changes if they occur. For example, according to our analysis of 2016 ACS data, workers with lower levels of education are more likely than those with higher levels to hold jobs in occupations that the Frey and Osborne study identify as susceptible to automation. Specifically, 60.7 percent of workers with a high school degree or less hold these types of jobs, as compared to 46.7 percent of workers with some college, 26.9 percent of workers with a bachelor’s degree, and 11.3 percent of workers with a graduate degree. In addition, 54.1 percent of Hispanic workers hold jobs in occupations susceptible to automation, as compared to 46.4 percent of Black workers, 40.0 percent of White workers, and 35.9 percent of Asian workers. Certain geographic areas also rely more heavily than others on occupations identified as susceptible to automation, according to OES data. We identified areas where the proportion of jobs susceptible to automation is at least 5 percentage points greater than the national average (see fig. 3). These occupations comprise a diverse set of jobs that may experience automation in different ways and at different times, if at all. However, if employment disruptions are regionally concentrated, groups of workers with similar skills in the same labor market may need to adapt to changes simultaneously, which could strain the availability of local job opportunities and support resources. Workers in occupations that the Frey and Osborne study identify as susceptible to automation earn less on average than other workers.
For example, the median hourly wage for workers in occupations susceptible to automation is $14.26, compared to $22.06 for other workers, according to our analysis of 2016 ACS data. After controlling for factors that may affect wages, such as age, education, and industry, we found that workers in jobs susceptible to automation earn about 17.2 percent less, on average, than similar workers in other occupations. These results show that, on average, workers in jobs susceptible to automation are already in more vulnerable economic circumstances than other workers. When or if changes brought on by automation materialize, these workers may face additional hardships in adapting to changing workforce demands. Examples of Other Researchers’ Analyses that Attempt to Measure Workforce Effects Due to Advanced Technology Adoption In the absence of comprehensive data, researchers have taken differing approaches to exploring the relationships between technology adoption and workforce trends. We identified some examples of recent and ongoing work that attempt to measure workforce effects directly attributable to technology adoption. These examples illustrate types of data that may be useful for better understanding and measuring the use of specific technologies (e.g., robot sales), the spread of technologies generally (e.g., automation patents), and how specific work tasks are changed by technology use (e.g., firm-level operations data). Some researchers have used data on industrial robot sales collected by the International Federation of Robotics (IFR) to approximate robotics adoption worldwide and in the United States and to model its direct effects on employment. Analysis by Furman and Seamans (2018) shows that annual sales of industrial robots in the United States increased substantially between 2010 and 2016. The analysis attributes this growth to a combination of factors, including lower robot prices, improved robot functionality, and greater awareness of the benefits of robots. They also observe that the automotive sector was the largest customer for industrial robot sales in the United States from 2004 through 2016, though robot sales to the consumer electronics sector grew the most over that period. Studies by Acemoglu and Restrepo (2017) and by Graetz and Michaels (2017) both use IFR data through 2007 to model the workforce effects of robot adoption in the United States, though their methods, results, and conclusions differ. Acemoglu and Restrepo estimate that each additional robot used in a geographic area reduces employment by about six workers in that area. They observe that their estimated employment effects are greatest in manufacturing and other industries most exposed to robots, in routine manual work-related occupations, and for workers with less than a college education. They do not find corresponding employment gains in any other occupation or education groups. They also estimate that one more robot used per thousand workers reduces wages by about 0.5 percent. They conclude by noting that, so far, relatively few robots have been used in the U.S. economy and thus the effect on jobs has been limited; however, they state that if robot usage continues to grow as researchers expect, these effects could be more substantial. Graetz and Michaels estimate that increased robot use did not significantly affect total hours worked across the 17 developed countries in their analysis, but that work shifted from low-skilled workers to middle-skilled and high-skilled workers. 
They also estimate that increased robot use increases productivity and average wages. While their analysis covers 17 developed countries, they note that robot use in the United States was marginally lower than the average across all countries. They also observe that while their results differ from those of Acemoglu and Restrepo, it is possible that the effects of robot usage are different in the United States than across the 17 countries they analyze. Other researchers have used U.S. patent data as an alternative way to approximate the spread of advanced technologies and to examine the resulting workforce effects. Mann and Püttmann (2017) use machine learning algorithms to identify patents related to automation technology. They find that automation patents grew substantially from 1976 through 2014. After linking the patents to industries where they may be used, they estimate that automation causes manufacturing employment to fall, though it increases employment in the service sector, as well as overall employment. They observe that their results depict a more positive picture of the employment effects of new technology use than the studies that used industrial robot sales data (discussed above). Lee Branstetter, a researcher at Carnegie Mellon University, and his colleagues have a similar ongoing project that uses a machine learning algorithm to identify patents related to AI technologies. According to these researchers, their initial results suggest a rapid rise in AI patents over the past decade and also that AI patents are emerging in a variety of application areas. They are also in the early stages of work linking AI patents to industries to explore how new technology use affects the workforce. Researchers have also identified how important micro-level data could be for understanding the workforce effects of advanced technology adoption. For example, reports by the National Academies and others highlight the potential for firm-level information to augment traditional survey data to enable analyses of the conditions under which advanced technologies complement or substitute for workers, and what types of firms invest in advanced technologies. Other researchers have emphasized the importance of focusing on work tasks to analyze the effects of technological change at workplaces. Erica Fuchs, a researcher at Carnegie Mellon University, and her colleagues Christophe Combemale, Katie Whitefoot, and Laurence Ales use a combined firm-level, task-based approach by collecting and analyzing production floor data from four semiconductor firms with different levels of process automation and parts consolidation. They map out detailed versions of firms’ production processes and then use existing data and technical knowledge to simulate each step to analyze the effects of technology changes. Their preliminary results estimate that automation replaces some routine tasks, leading to estimated declines in the number of production floor jobs requiring medium skill levels. According to the authors, this firm-level, task-based approach may be applicable to other manufacturing industries and could provide insight on how the adoption of different technologies may produce different labor outcomes. However, they note that the approach requires detailed production process data, which may be difficult to collect for many firms or industries.
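To make the patent-classification idea concrete, the following Python sketch uses a generic supervised text classifier (TF-IDF features plus logistic regression) as a stand-in; it is not the specific algorithm either research team used, and the labeled abstracts are invented for illustration (1 = automation-related, 0 = not).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "A robotic arm that welds vehicle body panels without operator input.",
    "A pharmaceutical compound for treating hypertension.",
    "An automated guided vehicle that routes around obstacles using radar.",
    "A fabric blend providing improved moisture wicking.",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(abstracts, labels)

# Once trained on enough labeled examples, the model can score the full
# patent corpus to approximate the spread of automation technology.
print(clf.predict(["A machine vision system for sorting parts on a conveyor."]))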
Commerce and DOL Have Some Efforts to Track Adoption and Workforce Effects of Advanced Technologies Commerce Has Started Tracking Technology Adoption and Resulting Workforce Effects, but Data Will Not Be Available until Late 2019 Commerce’s Census Bureau has begun administering surveys with questions that focus specifically on firms’ adoption of advanced technologies and resulting workforce changes. According to Census, this data collection is part of a long-standing, coordinated effort to measure the impact of technology. In addition, consistent with Commerce’s strategic plan, these surveys represent new efforts to provide a timely, in-depth, and accurate picture of the economy amidst the economic shifts and technological advances of the 21st century. However, none of the survey results will be available until late 2019 or later. The new Annual Business Survey (ABS) is a joint effort by Commerce and the National Science Foundation that has the potential to provide insight on the spread of advanced technologies in the economy and could be used to examine the workforce effects of technology adoption, but the first ABS results are not expected until late 2019. Census administered the 2017 ABS in June 2018 to collect information on firms’ use of advanced technologies, such as automated guided vehicles, machine learning, machine vision, and robotics, among other things (see example in sidebar). The survey asks whether firms are testing a given technology or using it for either less than 5 percent, 5 to 25 percent, or more than 25 percent of their production or service. Census officials said this question should provide information about the extent of technology adoption nationwide, including whether there are any industry concentrations of advanced technologies. Census plans to add questions on the workforce effects of advanced technologies when it administers the 2018 ABS during July through December 2019, pending final approval by the Office of Management and Budget. Census plans to release these survey results in December 2020. Specifically, Census plans to include new questions that ask firms about: (1) their use of advanced technologies such as AI, cloud computing, robotics, and specialized software and equipment; (2) their motivation for adopting and using artificial intelligence and advanced technologies; (3) the impact these technologies might have on the number and skill level of workers; and (4) the factors that could adversely affect the adoption or production of these technologies. The new questions also ask about changes in the number of production workers, non-production workers, supervisors, and non-supervisors. These new questions could be used to characterize the prevalence of workforce changes in the economy caused by advanced technology adoption (e.g., declines in production workers, or increases in supervisory workers) and whether this differs by industry sector. However, these planned questions are not intended to provide information to quantify the magnitude of workforce changes, in part to minimize respondent burden and potential survey error, according to Census. In addition, until the ABS data are available and evaluated, it remains unclear what limitations, if any, the data may have. Census also plans to expand other surveys to track the spread of advanced technologies in the economy, including its Annual Survey of Manufactures (ASM) and Annual Capital Expenditures Survey (ACES).
Census plans to administer the 2018 ASM in May 2019, pending final approval by the Office of Management and Budget. The survey will collect capital expenditures data for industrial robotics at approximately 50,000 manufacturing plants, as well as the number of industrial robots purchased by and in use at these plants. Census officials stated these two measures might be useful in understanding the impact that industrial robots could have on productivity as well as the impact robots could have on the manufacturing labor force once the survey results are available in the spring of 2020. Census plans to administer the 2018 ACES during March through May 2019 and to have the survey results available in February 2020. The survey will include questions on robotics expenditures, similar to those in the 2018 ASM. However, the ACES collects expenditure data from 50,000 employer firms across all non-farm sectors of the economy—instead of just manufacturers—and will also ask about firms’ use of both industrial and service robots. Some Commerce offices also track issues related to the adoption and workforce effects of advanced technologies on a limited or intermittent basis. For example, National Institute of Standards and Technology officials stated that the Hollings Manufacturing Extension Partnership collects limited information about the number of jobs gained and retained by small and medium businesses adopting new technologies. National Telecommunications and Information Administration officials said they monitor developments in AI on an intermittent basis and also direct a project that examines new applications of small and large internet devices. DOL’s Current Efforts Provide Limited Information for Tracking the Workforce Effects of Advanced Technologies DOL has a role in collecting data that track changes occurring in the U.S. economy and workforce, including developing new ways to track emerging economic trends, though as we previously discussed, currently available federal data do not link shifts in the workforce to technological changes. BLS is the principal federal statistical agency responsible for measuring labor market activity. According to DOL’s strategic plan, BLS is to support public and private decision-making and meet the needs of its many stakeholders, including the general public, educational institutions, and the public workforce system. This includes regularly identifying structural shifts in the economy and developing new data products that reflect economic changes. In addition, DOL’s Employment and Training Administration (ETA) is to assist workers’ entry and reentry into in-demand industries and occupations. This assistance includes providing job seekers with accurate labor market data and guidance about opportunities, aligning training services to industry needs, and helping connect businesses with properly skilled workers. Internal control standards state that agencies should use quality information to identify, analyze, and respond to significant changes, including external conditions such as economic and technological changes that may affect an agency’s ability to achieve its objectives. DOL collects workforce data through various surveys, including the Current Population Survey’s Displaced Worker Supplement, and produces other data products such as the occupational employment projections and Occupational Information Network database that include information related to advanced technologies.
However, these data are limited, and according to BLS, provide some, but not all, of the information required to assess the impact of automation on the workforce.

Employment Projections

BLS's Employment Projections program identifies and provides limited information about occupations expected to experience declines in their share of employment in an industry or group of industries as a result of the adoption of advanced technologies. On a biennial basis, this program analyzes changes in the economy to project how employment by occupation may change over 10 years, including which occupations may be affected by advanced technologies. Factors that can affect occupational employment include but are not limited to technological innovation; changes in business practices or production methods; organizational restructuring of work; changes to the size of business establishments; and offshore and domestic outsourcing, according to BLS.

As part of this program, BLS develops a table of occupations that are projected to have direct employment changes due to some identified reason. This table identifies projected staffing pattern changes and BLS's qualitative judgment of the most significant factor or factors projected to affect the occupation. The table also indicates whether an occupation's share of employment is expected to change within a single industry or within multiple or all industries. For example, the table includes the following selected entries:

Librarians: Employment share is projected to decline in the information services industry as internet-based research continues to displace library-based research.

Stock clerks and order fillers: Employment share is projected to decline in two industries (the warehousing and storage industry and the grocery and merchant wholesalers industry) as firms increasingly adopt automated storage-and-retrieval systems.

Aircraft structure and systems assemblers: Employment share is projected to decline in all industries as collaborative robotics increase efficiency, producing more output with the same amount of labor.

We identified 100 occupations in BLS's table that are projected to experience declines in their shares of employment in an industry or group of industries as a result of the adoption of advanced technologies. Similar to the examples above, reasons could be related to automation, the increased use of robots or artificial intelligence, advances in machine or software technologies, or other changes resulting from the adoption of advanced technologies. As shown in figure 4, most of these occupations are production occupations (40 of 100) or office and administrative support occupations (30 of 100). BLS officials told us they do not currently track groups of occupations projected to experience employment share declines due to specific reasons, such as advanced technology adoption. Officials also said they do not aggregate total projected employment effects stemming from similar causes because they are unable to identify ripple effects in all occupations—e.g., automation in one occupation affecting employment in a different occupation.

Occupational Information Network (O*NET)

Information contained in ETA's Occupational Information Network (O*NET) database includes, among other things, information about work activities, tools and technologies used, and required skills associated with over 1,000 occupations. According to ETA officials, the primary purpose of O*NET is to assist job seekers in making employment decisions.
However, the O*NET database can be used to identify occupations that use certain types of advanced technologies. For example, we identified 15 occupations in which workers monitor, install, develop, troubleshoot, debug, or perform other tasks with robots as part of their daily work activities and 63 occupations in which workers use robots as a tool or technology in their daily work activities (see table 1). A simple illustration of this kind of keyword screen appears at the end of this section. In addition, states, federal officials (including at BLS), and academic researchers use these data to inform, among other things, worker support programs.

DOL officials told us they do not use O*NET data to analyze changes in occupations over time, such as robots being used in additional occupations, because the methodology is not currently structured to capture these kinds of changes systematically. For example, data are collected from a selection of occupations at varying frequencies, rather than at the same time, which could make it challenging to track changes in certain occupations over time.

Without comprehensive data linking employment shifts and technological changes, policymakers and DOL may not be prepared to design and implement programs that both encourage economic growth and provide support for workers affected by changes. DOL-funded programs rely on accurate information to guide job seekers to employment opportunities and to help align training services with local industry needs. For example, the O*NET database identifies high-growth, high-demand occupations for job seekers based largely on BLS employment projections data. While these employment projections provide valuable information, they are not designed to identify the full extent of occupational shifts due to advanced technology adoption. Similarly, other workforce surveys, such as the Current Population Survey's Displaced Worker Supplement and the Job Openings and Labor Turnover Survey, do not collect information about the causes of job losses and gains. This information could be a valuable tool for designing programmatic or policy supports for workers. For example, data on whether advanced technologies have resulted in worker displacements, work hour reductions, or substantial adjustments to work tasks could better position BLS to meet stakeholder needs.

Congress has expressed concern that there continues to be insufficient data on the effects advanced technologies are having on the U.S. workforce. On January 2, 2019, BLS reported to Congress that it plans to work with a contractor during fiscal year 2019 to study the interaction between labor and capital in the workplace and how it is affected by new technologies; identify ways to supplement BLS data with additional information on automation; and produce a report that recommends data collection options to fill those gaps. In fiscal year 2020, BLS also plans to identify pilot projects to test the feasibility of new data collection based on the recommendations in its final report, resources permitting. However, these plans are still in their early stages, according to BLS officials.
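To illustrate the kind of keyword screening described above, the sketch below flags occupations whose O*NET task statements or tool listings mention robots. This is a minimal illustration only: the file names and column labels are hypothetical stand-ins, not the layout of O*NET's actual release files.

    # Minimal sketch: flag occupations whose task statements or tool
    # listings mention robots. File and column names are hypothetical.
    import pandas as pd

    tasks = pd.read_csv("onet_task_statements.csv")  # columns: soc_code, title, task
    tools = pd.read_csv("onet_tools_used.csv")       # columns: soc_code, title, tool

    # Occupations with robot-related daily tasks (analogous to the 15
    # occupations described above).
    robot_task_occs = tasks.loc[
        tasks["task"].str.contains("robot", case=False, na=False), "title"
    ].unique()

    # Occupations listing robots among tools used (analogous to the 63
    # occupations described above).
    robot_tool_occs = tools.loc[
        tools["tool"].str.contains("robot", case=False, na=False), "title"
    ].unique()

    print(len(robot_task_occs), "occupations with robot-related tasks")
    print(len(robot_tool_occs), "occupations using robots as tools")

A screen this simple would still require manual review of the matches, since a substring match on "robot" can pick up incidental mentions.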
Commerce and DOL Face Challenges Tracking the Workforce Effects of Advanced Technologies

Officials at Commerce and DOL stated that collecting data on the adoption and workforce effects of advanced technologies is challenging because it is difficult to identify which new and emerging technologies to track; employment trends generally occur at the occupation and industry levels, but the effects of advanced technologies typically occur at the task or job level; and employment trends have a complex and diverse set of causes. Specifically:

Identifying which new and emerging technologies to track. Census officials said there is uncertainty about how an emerging technology might affect the economy and thus whether it should be tracked systematically. For example, self-service technology appeared at grocery stores in 1916, other self-service technology appeared at gas stations later, and more recently self-service technologies are being adopted by some restaurants, according to researchers. Periodically, Census has included questions in its firm surveys about the use of these technologies. Past surveys asked questions about the use of self-service at gas stations until the technology became ubiquitous, at which point the questions were dropped from the survey. As self-service technologies have expanded to other areas of the economy, such as restaurants, Census has again added questions about self-service to recent surveys because information is lacking on the growth of this phenomenon.

Trends and effects appear at different levels. BLS officials said employment changes due to technology typically occur at the individual task or job level, while employment trend data are collected at the industry and occupation levels. Officials also said that identifying technology-related effects in occupations, such as changes related to uses of machine learning algorithms, is difficult because some workers within an occupation might be affected by the technology while others might not. For example, some computer scientists and engineers might be involved in the development or application of machine learning algorithms while others are not.

Causes of trends are complex and diverse. BLS officials said that employment trends' complex and diverse causes make it difficult to identify occupations that are changing because of advanced technologies. Changes in one occupation may have ripple effects in other occupations. Partly as a result of this complexity, BLS's Employment Projections program identifies examples of technology-impacted occupations, but it does not attempt to identify all instances where technology impacts occupations, nor does it attempt to quantify an overall projected employment effect of advanced technologies.

White House Office of Science and Technology Policy Coordinates Policy and Research Activities Related to Advanced Technologies

The White House Office of Science and Technology Policy (OSTP) is responsible for coordinating AI-related policy across government agencies and for overseeing the National Science and Technology Council's subcommittees and their ongoing activities. For example, the Subcommittee on Machine Learning and Artificial Intelligence was originally chartered in 2016 to monitor machine learning and artificial intelligence and to watch for the arrival of important technology milestones in the development of AI, among other things.
OSTP officials told us that the Subcommittee has been re-chartered, now receives direction from OSTP's Select Committee on Artificial Intelligence, and is presently focused on federal resources related to AI research and development.

Cost Savings and Other Considerations Motivated Selected Firms to Adopt Advanced Technologies, Despite Facing Risks Such as the Reliability of Technologies

Selected firms generally adopted advanced technologies through a phased process of innovation and technology adoption (see fig. 5). We met with officials representing 16 firms that are using advanced technologies and a systems integrator who spoke for a number of his customer firms. Many firm officials described the path to integrating technology into operations as lengthy, complex, and iterative. For example, some firms we visited have had to build and test different mechanical "grippers" attached to robot arms to pick up and handle particular objects; one firm had high school participants at a local training center develop a gripper solution for one of the firm's robots. Some of the large firms we visited had their own internal teams that identified, tested, and integrated advanced technologies. Other firms we visited used third-party integrator companies to help with incorporating technologies into their operations. We spoke with firm officials about their motivations for adopting advanced technologies, as well as challenges they faced throughout the process, and they identified a number of similar issues.

Selected Firms Identified Cost Savings and Job Quality Among Key Motivations for Adopting Advanced Technologies

Cost Savings

Most selected firms cited cost savings as a primary consideration for adopting advanced technologies. Firm officials discussed cost-related motivations in various forms, such as remaining competitive in a global economy, increasing productivity (i.e., lower cost per unit), decreasing labor costs, and saving on physical space. Firms said they adopted advanced technologies as a way of reducing operational costs—including labor costs—to increase competitiveness and profitability. Some officials also specifically identified the pressure of large low-cost competitors, both in the United States and globally, as a major motivation to reduce costs and product prices.

Officials at a medium-sized door manufacturer told us that increased use of advanced technologies, such as robots, enabled the firm to increase efficiency, reduce labor costs, and re-focus its product line on custom doors to survive the entry of manufacturers in China that could sell mass-produced doors for lower prices.

The original motivation for adopting robots at a medium-sized automotive parts manufacturer was a customer's price demand that the firm could not meet and still remain profitable, according to officials. Integrating more robots enabled the firm to reduce production costs by using fewer workers.

At a large manufacturer of household and personal care goods, officials told us the company had a goal of reducing its workforce size by 1,500 full-time positions per year for 5 years (across its subsidiaries), and specifically using robotic automation to accomplish 40 percent of its reduction goal.

The constant pressure to keep costs low in the health care sector motivated a university-affiliated medical center we visited to explore adopting more advanced technologies, such as autonomous mobile robots that could decrease expenses by reducing the number of positions in some departments.
Firm officials also told us about other, non-labor-related cost savings considerations that led to the adoption of advanced technologies. Officials at a large automotive manufacturer told us they recently upgraded a laser welding system to use fewer, more advanced robots to save production line space—which is a valuable commodity in manufacturing. They also pursued this change to increase overall production capacity because the physical space they saved could be used to install more robots for other production steps. The integration of autonomous mobile robots to deliver prescription drugs to patient wards at a university-affiliated medical center was intended, in part, to save costs related to medicines that go missing when delivered and processed manually, according to officials.

Job Quality and Worker Safety

According to officials at selected firms, the desire to improve jobs led firms to adopt advanced technologies. The firms wanted to automate tasks that are dangerous, difficult, dull, or dirty, in large part to improve worker safety and to optimize the value added by workers. For example:

Dangerous work: Two robots were installed to pick up doors weighing between 90 and 300 pounds and place them on a paint line at a medium-sized door manufacturer we visited. Prior to the robots, workers who performed this dangerous task experienced work-related injuries, and the firm paid large amounts of money in workers' compensation claims, according to officials. Once the robots were installed, the firm experienced a decrease in the number of workers' compensation claims.

Dull work: A small automotive parts manufacturer we visited installed an industrial robot to perform a machine-to-machine transfer of a heavy part. Prior to the robot, the firm had three workers performing this task—even though the task only required two—because workers would eventually quit due to the tedium of the job and new workers would require time to be trained, according to officials.

Value-added work: Some officials told us they adopted advanced technologies because they wanted to maximize human labor that provided value to the firm and reduce labor that did not. Officials at a warehouse for a regional grocery store chain and a university-affiliated medical center said they wanted to minimize time workers spent traveling between tasks (as opposed to performing tasks). Warehouse officials said their workers spend up to 60 percent of their time traveling back and forth between shelves and products, which is time that could be spent selecting and sorting items. Thus, at the time of our visit, the warehouse was in the early stages of adopting automated guided vehicles to eliminate the need for workers to travel between points. Similarly, officials at a university-affiliated medical center that adopted autonomous mobile robots to transport, among other things, prescription drugs, said nurses and pharmacy technicians used to walk back and forth between the patient ward and the pharmacy to pick up and deliver these drugs, which diverted them from performing other tasks. They said that the medical center wanted them to have more time to provide valuable work, especially for employees who are highly paid.

Recruitment and Retention

Officials at many firms said that adopting advanced technologies can help them deal with the challenges of recruiting and retaining skilled workers.
They explained that worker shortages and high turnover can result from skill gaps in the local or national workforce, low unemployment, and certain work being viewed as unappealing, among other reasons. For example, officials at a warehouse for a regional grocery store chain we visited told us they struggle with high worker turnover and the constant need to hire new workers. In addition, low unemployment can make it difficult to retain workers with the right skills to operate machinery, according to officials at a small automotive parts manufacturer. Similarly, at the university-affiliated medical center, an official said that positions for pharmacy and other types of medical technicians can be difficult to fill. By using autonomous mobile robots to automate some tasks, the medical center can streamline its operations to more efficiently use the technicians it already has.

Recruitment in Manufacturing

Officials at some manufacturing firms we visited said they have had trouble attracting new workers into the sector, and officials at two firms said that adopting advanced technologies is one way they have sought to make manufacturing more attractive and to appeal to more and younger workers. One younger worker at a small automotive parts manufacturer talked about how appealing his workplace was due to the firm's use of advanced technologies, specifically robots. Officials at a large automotive manufacturer viewed their tech development facility, which includes spaces to tinker with virtual reality, augmented reality (i.e., technology that superimposes images on a user's view of the real world; for example, by wearing augmented reality glasses), and other emerging technologies, as an asset to recruit young talent.

Product-Related Motivations

Improving product quality, expanding product offerings, and ensuring supply chain reliability were primary motivations for adopting advanced technologies, according to officials at some firms.

Product quality: Quality is paramount in the automotive industry, where mistakes are costly and can have implications for a firm's reputation, according to officials at a medium-sized automotive parts manufacturer we visited. For this reason, they decided to use robots rather than workers for welding in order to standardize the processes, reduce errors, and improve product consistency and quality. Officials at a large automotive manufacturer similarly said that the firm has pursued machine learning technologies to ensure fewer defects and problems in vehicles. Engineers at the firm are developing a smart watch for workers who connect wires that will provide feedback to these workers if a proper connection is not made, based on the sound of the connection. The firm is already using machine vision technology that inspects vehicles as they pass through a section of the production line to ensure the correct pieces have been used for each vehicle model.

Expanding product offerings: At a medium-sized fruit processing plant, an official said that integrating robots, an advanced conveyor system, and machine vision inspection technologies, among other advanced technologies, enabled the firm to begin producing applesauce in a highly automated and safe way. Had manual production been the only option, officials said they would not have considered producing applesauce due, in part, to safety issues.
Supply chain reliability: One small manufacturer of rubber stamps and embossing seals (hereafter referred to as a small stamp manufacturer) used to rely on a single supplier for pre-cut materials, which was not always reliable. The firm adopted a collaborative robot, in part, so it could purchase raw materials directly and then have the robot cut the materials as part of the production process (see fig. 6).

Selected Firms Cited Various Risks with Adopting Advanced Technologies, Such as the Reliability of Technology and Working with New Tech Developers

In addition to the capital cost of advanced technologies, which some firms told us can be substantial, firms face a number of risks that can affect their return on investment, such as the reliability of technology and working with new tech developers. While the firms we met with had already adopted advanced technologies, officials had to consider and overcome various risks during the adoption process. Some of these firms decided against adopting other advanced technologies upon evaluating these risks.

Reliability of Technology

Being an early adopter of a technology is risky because the new technology may not yet be sufficiently reliable for firms' operations. Officials at a large appliance manufacturer we visited showed us technology that was supposed to use machine vision to autonomously inspect the wire connections for clothes dryers. They told us that the vision technology had been ineffective, so they took it off the production line for engineers to continue working with it in the lab; they planned to bring the technology back onto the line a few weeks after our visit. Officials at this firm said that the vision technology was still relatively immature, as it had a limited field of vision and yielded numerous false readings. Similarly, a warehouse we visited that invested in automated guided vehicles used them to move pallets for a short time, but then put them into storage because these vehicles did not have mature enough machine learning and vision capabilities for the firm's purposes. Eventually, officials from this warehouse began working closely with the developer firm to improve the vehicle technology, which advanced enough that it could be used. For instance, officials from the warehouse suggested adding turn signals to the vehicles to alert nearby workers of intended movements and improving the vehicles' ability to travel over spills without triggering the system's sensors to shut down.

Firm Size Might Affect Risk Tolerance

An official at one small manufacturing firm stated that larger firms may be more willing to be early adopters of technology, as they may be able to absorb the high risks of experimenting with expensive technologies, while smaller firms tend to wait until a technology has been optimized before deciding to adopt it. Accordingly, his firm only purchases industrial robots from an established manufacturer, although it would like to experiment with newer technologies in the future, such as augmented reality. Officials at a large manufacturing firm told us they have purchased a number of advanced technologies to experiment with, even though they do not know yet how the technologies may ultimately be used in their production process. This firm also has teams of technicians and engineers who can adapt the technology for operations.
During our visit, we met with engineers who demonstrated different potential applications of technologies that are still being tested, including using virtual reality to test new part design and augmented reality glasses to provide interactive training to workers.

Officials at some firms explained that installing advanced technologies at times necessitated building manual redundancies into their operations due to reliability concerns. Officials at a construction consulting company and a municipal township that adopted a machine learning technology to inspect roads said the technology would miscategorize road quality at times, such as identifying tree branch shadows on the road as pavement cracks. While working with the developer to improve the technology, officials said they continued to conduct redundant manual inspections to ensure they were making road repair decisions based on accurate information. During our visit to a large appliance manufacturer, we saw multiple collaborative robots that were not working properly. As a result, workers were performing these tasks manually while the robots were down; officials told us that each of the firm's automated processes has workers trained to perform the tasks in case a technology is not working properly.

Technologies Viewed Differently by Firms

Some firms find a technology to be useful while others find little practical application for that technology, as illustrated by the various opinions firm officials had about collaborative robots. Officials at one small manufacturer we visited said that a collaborative robot was well suited for the firm's production process and environment because, among other reasons: (1) the firm produces small durable goods that require dexterity rather than speed, which the collaborative robot could provide; (2) the collaborative robot would be safe around workers and could be trained by non-technical staff, so the firm's small workforce could adapt to its use; and (3) the collaborative robot could fit in the firm's limited floor space, as it would not require a cage. On the other hand, officials at other manufacturing firms we visited told us that collaborative robots were less useful in their settings because they have significant weight and speed limitations in order to be safe enough to operate outside of a cage.

Working with New Tech Developers

Some firm officials told us it could be risky to work with tech developers with limited experience. Officials at a large appliance manufacturer said that newer developers may go out of business or be bought out by a larger firm, which could render the technology acquired from them obsolete (especially in terms of future servicing of parts and software updates). The officials stated that emerging technologies, both hardware and software, tend not to be standardized, so investing in a developer likely means investing in a type of technology that may not be supported by other developers if issues arise. We heard from some firms that they purchased technology from developers who already had established reputations and longevity. For example, a small manufacturer of durable goods selected a robotics company because of the founder's reputation and track record, among other reasons.

Other Risks

Operational slowdowns: The process of moving from initial adoption to optimization of a technology varies widely and can be lengthy and ongoing, according to officials.
One small stamp manufacturer experienced a lengthy and iterative implementation process for an off-the-shelf collaborative robot it purchased. For example, the firm had to construct a customized environment for the robot to function in, make parts by hand, purchase a 3-D printer to develop tools for the robot, and build additional parts to handle increased byproducts such as sawdust. Officials at a large automotive manufacturer told us that new technology, such as machine vision technology used for automated inspections, is often integrated on the weekends or during off-shifts. Then, on the first day of production after the new technology is integrated, the production line starts slowly and speeds up as worker comfort and experience increases. Outside of manufacturing, a consultant who helps facilitate the adoption of advanced technologies at firms said that firms' existing, or legacy, computer infrastructure can be a barrier to integrating machine learning technology, increasing complexity and causing an extended implementation process as his firm integrates the new technology platform with the legacy infrastructure.

Worker concerns: Officials at some manufacturing firms said they have encountered worker concerns with advanced technologies and have employed various tactics to mitigate these concerns, such as introducing workers to the technology in offsite demonstrations and involving them in decision-making and planning before the technology was integrated. In one case, workers were able to ask questions about a collaborative robot as it was being installed and were provided with orientation training. The robot was then phased into operations—used initially for short periods of time so workers would become accustomed to its physical presence and proximity to their workstations.

Deciding Not to Adopt Advanced Technologies

Officials at the firms we visited identified instances in which they chose not to adopt certain advanced technologies, or not to use advanced technologies that were working well in other processes. Reasons we heard included: a product line had too much variation to benefit from advanced technologies (i.e., some advanced technologies work better for standardized products and processes); a certain manufacturing process was too low-volume to justify investing time and resources into automation; and human dexterity is difficult to replicate. Officials from a large appliance manufacturer showed us an instance where using automation would not make sense. We observed a worker performing a simple, single task: grabbing a metal heat shield and plastic dishwasher spinner from separate bins and clipping one on to the other. Because of the shape of the pieces and because they were lying unorganized in boxes, the task requires human dexterity, making the process difficult to automate, according to officials.

Adopting Advanced Technologies Has Had Varied Effects on the Workforces of Selected Firms, Including Declines in Some Types of Work and Gains in Others

Officials Said Advanced Technologies Have Replaced Positions at Some Selected Firms, and Most Firms Relied on Redeployment of Workers and Attrition Rather than Direct Layoffs

Officials at many of the firms we visited said they needed fewer workers in certain positions after adopting advanced technologies to perform tasks previously done by workers. Officials at these firms generally told us they adjusted by redeploying workers to other responsibilities and, in certain instances, reducing the firm's workforce size through attrition.
We also heard examples of direct layoffs due to the adoption of technologies. There may also be other types of adjustments firms can make that we did not observe or discuss with these officials. The complexity of these workforce adjustments makes it difficult to determine or measure the effects of technology adoption on workers. For example, although workers may not have lost their jobs due to an adopted technology taking over specified work tasks—either because of redeployments or attrition—fewer job opportunities might be available in the future for workers with similar skills. In addition, the iterative and sometimes lengthy process of incorporating advanced technologies can delay workforce effects. Thus, the absence of short-term effects of technology adoption does not necessarily preclude long-term implications, such as reductions or slower growth rates in workforce size over time (see text box below). As discussed in the prior section, one reason firm officials are motivated to use advanced technologies is to decrease labor costs.

Slower Workforce Growth than Revenue Growth

An official from a small automotive parts manufacturer told us that advanced technologies and automation resulted in revenue increasing by more than 400 percent over the last 12 years while the workforce increased about 15 percent. Production workers now make up a smaller percentage of the overall firm workforce than prior to automation, and sales and support staff now make up a greater percentage. The firm official described this change as an increase in higher-skilled jobs and a decrease in lower-skilled jobs. Similarly, according to firm officials at a different, medium-sized automotive parts manufacturer, revenue has grown sixfold in the past 15 years while the workforce has grown fourfold, largely as a result of adopting robotics technology.

Redeployments without job loss: When advanced technologies replaced positions, firms we visited often shifted, or redeployed, workers to different responsibilities. For example, officials at a medium-sized automotive parts manufacturer we visited told us they had nine workers who smoothed sharp edges and removed burrs on hydraulic cylinders prior to installing two robots to perform these tasks. Now, with the robots in these positions, three workers load the robots and then inspect and de-burr any parts of the cylinders the robots missed. The other six workers were redeployed to other tasks, according to a firm official. At a large appliance manufacturer we visited, officials told us that two workers used to move large parts from one line to another line to be painted. Now, as we observed, a collaborative robot performs this function alone; a worker monitors the operation to ensure it is running smoothly, and the original workers were moved to different tasks on the production line, according to officials. Although the size of these firms' workforces did not decrease as a result of the technology adoption, the numbers of certain positions were adjusted—for example, production positions decreased while monitoring positions increased. Differences in skills required for these positions may also affect the ability of current workers to transition and could have implications for individual workers even though the number of jobs at the firm does not change.
These sorts of changes may or may not appear in firms' reported employment data, depending on whether redeployed workers change occupations or what other workforce changes may be occurring simultaneously (e.g., if other production workers are hired for reasons unrelated to the technology adoption).

Redeployments with job loss through attrition: Officials at some of the selected firms that redeployed workers said they also reduced their overall workforce size through attrition as a result of adopting advanced technologies. Autonomous mobile robots independently transported biohazardous waste, linens, meals, and prescription drugs throughout the university-affiliated medical center we visited. Officials told us they eliminated 17 positions after they deployed the robots. No workers were laid off; instead, the medical center relied on high staff turnover rates and moved workers to vacant positions elsewhere. At a medium-sized fruit processing plant, firm officials told us they replaced 150 to 200 jobs with various advanced technologies over the past 3 to 4 years. However, they relied on attrition rather than layoffs. For example, the plant adopted a robot to pack food into boxes. Prior to using the robot, officials told us there were 26 workers per shift performing this job; as of our visit, there were 13 workers per shift. A medium-sized door manufacturer reduced its workforce from 650 employees to fewer than 500 over approximately the last 20 years due to, among other things, its adoption of robots, according to firm officials. For example, we observed industrial robots that load steel sheets into a cutting machine, reading a barcode on each sheet that tells them what size sheet is being lifted and how it should be placed in the cutting machine. This process requires only a single worker to monitor the robots during each of two shifts, where previously three workers per shift were on this production step (i.e., a change from six to two workers total).

How quickly workforce reductions materialize for firms using attrition can vary greatly. We visited firms with low employee turnover rates and firms with high turnover rates. High worker turnover rates allowed some firms to more quickly adjust their workforces when deploying advanced technologies and may be a reason we were told about job loss through attrition rather than layoffs at these firms.

Job loss through layoffs: An official from a systems integrator firm ("integrator") provided examples of significant layoffs as a direct result of advanced technologies. This integrator provides machine learning technology and other similar products to automate office and administrative processes, among other things. One of the integrator's customers—a U.S. automotive parts firm facing competition from online retailers—adopted machine learning technology to take over its accounts payable and distribution system. As a result, according to the integrator's official, this firm reduced the number of employees in one of its U.S. offices from 500 to 200. Another of this same integrator's customers—a firm that sells telecommunication circuits—adopted machine learning technology to automate product returns processing. As a result, the firm experienced a 30 percent reduction in customer care calls and replaced about 150 jobs in a U.S. call center with 110 jobs at a call center in a different country (i.e., about 150 U.S. jobs lost and an overall workforce reduction), according to the integrator's official.
Advanced Technologies Helped Increase Competitiveness and Enabled Employment Growth Despite Positions Being Replaced, According to Officials at Some Selected Firms

According to officials at some selected firms, greater competitiveness and productivity due to the adoption of advanced technologies (see sidebar) has helped firms grow their workforces. For example, some hired additional production workers due to increased production (despite some production tasks being taken on by the adopted technologies), or new types of workers, such as technicians to maintain the technologies. Some officials also said that although they may not have grown their workforces, adopting advanced technologies helped them stay in business by allowing them to compete effectively, and thus to preserve jobs and retain workers. For example, officials at a medium-sized door manufacturer, where we observed numerous robots in the production facility, told us that their firm "could not survive" global competition without the use of advanced technologies.

Productivity and Efficiency Gains

Adopting advanced technologies has helped some firms improve their product quality and increase their production efficiency. For example, according to officials at a medium-sized fruit processing plant, after the firm began using an automated fruit grading technology, the process took significantly less time and resulted in far fewer complaints from farmers about the grading. Farmers thought the automated grading technology was fairer and more accurate than having workers manually and subjectively grade the fruit. A large appliance manufacturer that began using a collaborative robot to apply sealant to an appliance door observed improved consistency, which led to fewer service calls from retailers and customers about excessive, insufficient, or incorrect seals. Officials at one medium-sized door manufacturer said that automation technologies enabled the firm to produce and ship doors in 3 days, as opposed to 4 to 6 weeks. An official from a warehouse for a regional chain of grocery stores said that using automated guided vehicles allowed the firm to save time moving pallets from one end of the warehouse to the other, and also to save worker hours. The warehouse saves just over $2 per pallet moved by an automated guided vehicle rather than a worker, and up to $3,500 a day based on volume, according to the official.

Advanced technologies enabled some selected firms to increase production or produce a larger range of goods, and thus to hire additional production workers. This also led to workforce increases for suppliers and other firms, according to officials. One large appliance manufacturer increased its use of robots and other advanced technologies to produce more of its own component parts internally instead of relying on suppliers. As a result, the firm was also able to increase the number of production jobs, according to firm officials. Due to advanced technologies, a small automotive parts manufacturer was able to bid on a contract to produce a new and more intricate part for a major automotive manufacturer. An official described how the part was so intricate that it could not have been produced manually with the required level of consistency and speed. Although the firm adopted six robots to produce this part, winning the contract also created nine new jobs. While the robots are completing much of the production, the volume of parts demanded and the existence of some tasks that only workers can complete have led to this job growth.
A developer of autonomous mobile robots said that, as a result of increased business, his firm has created jobs among the eight local suppliers from which it buys parts, such as motherboards for the robots.

Growth of Developer and Integrator Firms

Selected developer firms we met with said they grew their technical and non-technical staff as a result of increasing demand for their technologies. A firm that develops and produces robots had tripled its workforce size, to about 130 employees, in the last year alone, according to officials. An official at another developer firm that makes inspection robots said the firm had grown from three workers to about 20, and he envisions it expanding to 100 in the near future. The official said that the firm's first years were spent on technology development, but that once the technology was deployable to customers, the firm grew its workforce size. Integrator firms that help companies adopt advanced technologies have also grown in size, and new types have emerged, according to integrators we visited. For example, with the development of smarter robots, one integrator firm we visited entered the industry to recondition and sell old robots; the firm also adds newer technology to these robots if requested. This integrator has grown from 35 to 45 employees in the last 10 years, according to officials, with the new positions being primarily robot technician jobs.

As a result of technology adoption, some firms hired more workers with technical skills, and in other instances lower-skilled workers, according to firm officials. An official from a warehouse for a regional chain of grocery stores said that adopting an advanced automation system created a need for three additional workers to provide preventive maintenance on the machines. These additional positions pay about 25 percent more than the standard warehouse positions, according to officials. At a large automotive manufacturer, officials told us the firm increased its number of lower-skilled cleaning jobs when robots began producing large amounts of byproduct.

Officials Said Workers' Roles, Tasks, and Skills Have Been Changing Due to Advanced Technologies at Selected Firms

At the firms we visited, workers changed roles and tasks as a result of advanced technology adoption, such as focusing more on interactive, cognitive, higher-skilled, and monitoring tasks, and in other cases focusing more on lower-skilled tasks. Workers who can adapt and be flexible to task changes may experience positive effects, including work that is less physically taxing, safer, more ergonomic, less monotonous, or higher paying. On the other hand, workers who are unable to adjust to changing tasks may be negatively affected. Officials at some of the firms told us that their firms provided internal training or leveraged external resources to develop workers' skills to help them move into new positions. During our visits to selected firms, we saw a variety of ways in which tasks for workers are changing.

Interactive work: The use of autonomous mobile robots to deliver prescription drugs for patients enabled nurses at the university-affiliated medical center we visited to focus more of their time on patient interaction, according to officials. The small stamp manufacturer we visited would like to continue to automate its ordering process and focus more on providing customer service. Officials there said that for future hires, they plan to recruit for data and people skills, rather than production skills.
Cognitive work: A federal statistical agency adopted machine learning technology to automatically interpret text narratives on forms and assign codes to the data. As a result, staff who previously entered this information manually are able to spend more time on analytical tasks such as reviewing the accuracy of the auto-coding, correcting issues, obtaining clarifications about information submitted on the forms, and following up with non-respondents, according to officials.

Higher-skilled work: At a large automotive manufacturer, due to increased use of advanced technologies, workers who are hired today need greater technical proficiencies than workers hired in the past. For example, to adapt to their changing roles working with robotic equipment, non-technical production staff need machine maintenance and technical skills, rather than only manual dexterity skills. Officials at a large appliance manufacturer that adopted an automated machine to stamp metal said that the resulting process required a single worker to monitor the machine and provide basic maintenance. This worker needed technical skills and at least 6 months of training to effectively perform these duties. In contrast, at another one of this firm's global plants, four separate presses are used, and each requires workers to load and unload metal.

Monitoring work: Officials at the large appliance manufacturer mentioned above showed us a step in their production process in which two small pieces of plastic and metal need to be attached. Three workers used to perform this task by hand, which caused ergonomic challenges and inconsistencies in both quality and production cycle times. Now, the firm uses three robots to perform this work, and a single worker loads the pieces for all three robots and monitors their performance. At a small automotive parts manufacturer, production operators who work in cells with robots monitor multiple machines and sometimes also monitor multiple work cells, so a greater aptitude level is needed. As a result, these operators earn $3 per hour more than operators in work cells without robots, according to a firm official.

Less physically taxing work: Staff at some firms also told us how advanced technologies have made worker tasks less physically demanding. For example, we talked with one warehouse worker who used to lift heavy boxes, but who now operates a forklift after his old task was automated with a conveyor belt and sorting system. He described his new position as having ergonomic benefits, including experiencing less back pain. At a large automotive manufacturer, officials said the firm installed six robots to paint vehicle interiors. This production step was a major ergonomic hazard, and workers who did this painting had a relatively high injury rate, according to officials. Officials told us that adopting the robots lowered the injury rate among these workers and resulted in faster vehicle painting.

Simplified work: At a small stamp manufacturer that adopted a collaborative robot, officials told us that as the firm continues to redesign and optimize operations, the robot will take on more complex tasks. As a result, the remaining production work performed by the firm's production worker will be simpler (see fig. 7).
Officials said that in the future, after the firm's current production worker retires, the firm may rely on contingent workers to perform any needed production work not completed by the robot because the tasks will be simpler and easier to train a new, temporary worker to complete. Officials said the firm may also hire a worker with a different and more varied skillset who can perform the few remaining production tasks along with other types of tasks.

Lower-skilled work: A medium-sized door manufacturer installed a robot to facilitate the firm's redesigned door sealant system and production process, according to officials. The original process of manually applying door sealant was physically intensive, ergonomically challenging, and required significant skill and experience to precisely apply the sealant. With the new design, a robot applies the sealant autonomously. As a result, workers perform lower-skilled tasks in this process, including placing a piece on a platform, visually inspecting the robot's work, cleaning and setting up the robot's work station, and confirming the correct program is entered in the computer.

Adaptability to changing daily work demands: Officials from selected firms told us that due to advanced technology adoption, workers need to change tasks depending on the day and circumstances. For example, at a large appliance manufacturer, some workers serve in different capacities depending on whether the robots are functioning properly and on the production needs of that day. On the day we visited the plant, several of the robots were malfunctioning and workers were performing the robots' tasks. Firm officials said that some of their workers serve in swing roles and move around to different production processes and assist as needed.

Training

Training Centers for Advanced Tech Skills

We met with officials at a training center that re-trains adults and teaches high school students to work with advanced technologies used in manufacturing. We visited two firms in the area that told us that this training center helps fill a local shortage in maintenance technician skills, and that they have hired workers who graduated from the center. Officials at the training center said that there is a high demand in the area for maintenance technicians. For example, they said that a large automotive manufacturer in the area is planning to hire 800 maintenance technicians over the next 3 years, and that the firm is worried about how it will fill these positions. Officials at the training center also said that some firms have such a high demand for maintenance technicians that they hire students who complete the training program before they graduate from high school. The training center is piloting its adult training program. The program recruits adults who are underemployed and have some mechanical aptitude, then trains them in advanced technologies used in manufacturing. Most of the students who participated in an early pilot obtained higher paying jobs than those they held before the program, according to officials at the training center.

Many firms we visited offered training for workers to adapt to their changing roles and tasks, particularly when the tasks or roles became more technical. Some firms used internal training resources and some leveraged local training centers (see sidebar). Some technology developers also offered training to firms that adopted their technologies.
Officials at some firms told us that training current workers for more technical positions was easier than finding workers with the appropriate skills. For example, officials at one medium-sized door manufacturer said they needed highly specialized engineers, but could not find any in the region. As a result, this firm offered tuition reimbursement for workers who were willing to go back to school to become engineers. The firm also partnered with local community colleges to train students to become future maintenance technicians. Officials at a large automotive manufacturer said that due to increases in the firm's use of advanced technologies, the plant has needed to hire more technicians. As a result, this firm added programs to its on-site training center to train workers for these roles.

Conclusions

The complex job changes we observed at the selected firms we visited are not currently captured in federal data, though they may have significant implications for broader employment shifts. As the primary agencies responsible for monitoring the U.S. economy and workforce, the Departments of Commerce and Labor are aware of the importance of advanced technologies as major drivers of change. For example, Census' newly administered Annual Business Survey may provide valuable information in the future about the adoption and use of advanced technologies nationwide and the prevalence of resulting workforce effects. However, comprehensive data on firms' adoption and use of advanced technologies do not currently exist, which prevents federal agencies and others from fully monitoring the spread of advanced technologies throughout the economy and linking their use to changes in employment levels or structural shifts in the tasks and skills associated with jobs.

Observations from our visits to selected firms illustrate the complex and varied workforce effects that result from firms' adoption of advanced technologies. In some circumstances, technology adoption will lead to increases in different types of jobs, and in other cases technology adoption will lead to workforce reductions—either over time or immediately. Regardless of the firm-level workforce effects, worker roles and responsibilities are likely to change as advanced technologies take over tasks that workers previously performed. These changes could positively affect some workers, but could also have negative consequences for other workers, especially those who are unable to adapt to changes. For example, workers whose previous work tasks are automated and who are unable to perform new tasks required of them may need to seek new employment. If these changes occur occupation-wide, across many firms, workers may need to re-train or seek new employment in entirely different occupations or industries. To the extent that these changes are concentrated among occupations susceptible to automation, certain groups of workers (e.g., those with lower education levels) may be disproportionately affected and may lack the opportunity to develop skills needed to enter growing occupations. These workers will be in greater need of programmatic or policy supports, and federal workforce programs will need to be aligned with in-demand skills for the changing economy.
Without comprehensive data that can measure the magnitude and variety of these firm-level changes, the workforce effects of the adoption of advanced technologies will remain unclear, job seekers may not be fully informed about their best future career prospects, and federally funded programs to support workers may be misaligned with labor market realities. DOL's ability to collect information regularly on jobs and workers may enable the agency to fill these information gaps. Specifically, better data could be used by policymakers and DOL to proactively design and fund worker training programs that meet the job needs of the future.

Recommendation for Executive Action

The Secretary of Labor should direct the Bureau of Labor Statistics (BLS) and the Employment and Training Administration (ETA) to develop ways to use existing or new data collection efforts to identify and systematically track the workforce effects of advanced technologies. For example, the Secretary could select any of the following possibilities, or could identify others.

BLS could expand existing worker or firm surveys to ask respondents whether advanced technologies have resulted in worker displacements, work hour reductions, or substantial adjustments to work tasks.

BLS could expand its employment projections work to regularly identify occupations projected to change over time due to advanced technologies.

ETA could expand the O*NET data system to identify changes to skills, tasks, and tools associated with occupations, as the information is updated on its rotational basis, and consider how this could be used to track the spread of advanced technologies. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOL, Commerce, NSF, and OSTP for review and comment. We received written comments from DOL that are reprinted in appendix II and summarized below. DOL and Commerce provided technical comments, which we incorporated as appropriate. NSF and OSTP told us that they had no comments on the draft report.

DOL agreed with our recommendation to develop ways to identify and track the workforce effects of advanced technologies. DOL stated that it will continue coordinating with the Census Bureau on research activities in this area, and that it plans to identify and recommend data collection options to fill gaps in existing information about how the workplace is affected by new technologies, automation, and AI. DOL also stated that it plans to release employment projections annually instead of every 2 years, beginning in 2019.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Labor, the Secretary of Commerce, the Director of the National Science Foundation, the Director of the White House Office of Science and Technology Policy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownbarnesc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology The objectives of this review were to examine (1) what is known about how the adoption of advanced technologies affects the U.S. workforce; (2) selected federal agency efforts to track and monitor the adoption and workforce effects of advanced technologies; (3) considerations that led selected firms to adopt advanced technologies and the risks they faced; and (4) ways technology adoption has affected the workforce at selected firms. Throughout the report, we use “advanced technologies” as a broad term to describe technological drivers of workforce changes, including but not limited to those identified in the National Academies study: artificial intelligence; machine learning; robotics; autonomous transport; advanced manufacturing; 3D printing; advanced materials; computing power; and internet and cloud technology. The technologies we observed at work sites could generally be categorized as applications of robotics, machine learning (e.g., machine vision or autonomous navigation), or both. However, not all technologies that may affect the U.S. workforce in the future—through automation or in other substantial ways—fall into these categories. Our use of the broad term “advanced technologies” leaves open the possibility that new technologies and other areas of focus are likely to emerge. To examine what is known about how the adoption of advanced technologies affects the U.S. workforce, we explored the extent to which available federal data could identify and measure these effects, and we identified limitations with available data. Because there was no comprehensive data that link employment trends to technology adoption, we used a study by Frey and Osborne to identify a group of occupations susceptible to automation. We then analyzed whether the concentration of these occupations in industries is correlated with growth in tech jobs or employment declines in those industries, whether job displacements are more common in these occupations than in others, the characteristics of workers who hold jobs in these occupations, and the geographic concentration of jobs in these occupations. We analyzed employment data from the Census Bureau (Census) and the Bureau of Labor Statistics (BLS); specifically, the American Community Survey (ACS), the Current Population Survey’s (CPS) Displaced Worker Supplement, and the Occupational Employment Statistics (OES) survey. For more information, see detailed discussions of our data analyses in sections 1-3 below. Identifying occupations susceptible to automation: Using a model that evaluates tasks within an occupation, Frey and Osborne estimate a probability of automation for 702 occupations. They identify occupations with a probability greater than 0.7 as being at high risk of automation. In our analyses, we thus consider this collection of occupations as those susceptible to automation. While there are different studies that attempt to predict what occupations or jobs may be automated in the future, we use the work by Frey and Osborne because it is widely cited and because its results are structured to allow us to identify a broadly inclusive collection of occupations susceptible to automation. The results of our analyses could be affected by using other studies to the extent that they identify different occupations as susceptible to automation. 
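To make this selection step concrete, the following minimal sketch (in Python, using the pandas library) applies the 0.7 threshold to a table of automation probabilities. The file name and column names are hypothetical, and the sketch is an illustration rather than the code used in our analysis; an alternative study would simply substitute a different input table or threshold at this step.

import pandas as pd

# Hypothetical layout: one row per occupation with Frey and Osborne's
# estimated probability of automation (702 occupations in total).
fo = pd.read_csv("frey_osborne_probabilities.csv")  # columns: soc_code, title, probability

# Occupations with an estimated probability greater than 0.7 are treated
# as susceptible to automation throughout our analyses.
susceptible = fo[fo["probability"] > 0.7]
print(f"{len(susceptible)} of {len(fo)} occupations treated as susceptible")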
The accuracy of any collection of occupations is limited by the unpredictability of when or if jobs are automated, as well as the fact that occupations are composed of a variety of jobs, which may experience automation to varying degrees or in different ways. We also reviewed examples of recent and ongoing studies that attempt to measure workforce effects directly attributable to technology adoption. We identified examples of research through interviews with knowledgeable individuals and from among those included in a recent review of the state of empirical work. Our review of studies was not meant to be a comprehensive review of the research in this area. To identify selected federal agencies' current and planned efforts to collect data on, and monitor the prevalence and effects of advanced technologies in the economy, we met with the Departments of Labor (DOL) and Commerce (Commerce), as the principal federal agencies responsible for collecting data on the U.S. economy and workforce; the White House Office of Science and Technology Policy (OSTP), which leads interagency science and technology policy-coordination efforts across federal agencies; and the National Science Foundation (NSF), which was involved in the development of the Annual Business Survey. We interviewed officials and reviewed data and information collected by these agencies. We also reviewed the Annual Business Survey's questionnaire to consider the potential uses of data being collected by the survey, and analyzed data from DOL's Employment Projections program and Occupational Information Network (O*NET) database to identify information related to the adoption and workforce effects of advanced technologies. Annual Business Survey: The Annual Business Survey was administered for the first time in summer 2018, and collects information from firms about various topics, including innovation and technology use. The survey is a joint effort by the Census Bureau and the National Center for Science and Engineering Statistics within the National Science Foundation, and Census plans to administer the survey annually for 5 years. The Annual Business Survey replaces the 5-year Survey of Business Owners, the Annual Survey of Entrepreneurs, the Business R&D and Innovation for Microbusinesses survey, and the innovation section of the Business R&D and Innovation Survey. Employment Projections program: BLS's Employment Projections program analyzes changes in the economy, among other things, to project how employment by occupation may change over the next 10 years, including which occupations may be affected by advanced technologies. BLS's projections are for the most part structured around the Occupational Employment Statistics, which produces employment and wage estimates for over 800 occupations. As part of this program, BLS develops a table of occupations that are projected to have direct employment changes due to some identified reason. According to BLS officials, the specific reason listed for each occupation is based on BLS's judgment of the most significant factor or factors affecting the occupation (i.e., based on a qualitative assessment). We examined the reasons listed in this table and identified those related to the adoption of advanced technologies in an occupation, such as through automation, the increased use of robots or artificial intelligence, advances in machine or software technologies, or other similar changes.
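A simplified illustration of this kind of flagging follows. In our analysis, the determination was a manual review of BLS's stated reasons rather than an automated keyword match, and the file name, table layout, and column names shown here are hypothetical.

import pandas as pd

# Hypothetical layout of BLS's table: one row per occupation-industry pair,
# with a free-text field stating the reason for the projected change and a
# direction field indicating a projected employment share decline.
proj = pd.read_csv("bls_projected_staffing_changes.csv")

# Terms suggesting the adoption of advanced technologies.
pattern = "automat|robot|artificial intelligence|machine|software"
tech_related = proj[proj["reason"].str.contains(pattern, case=False, na=False)]

# Count unique occupations with projected employment share declines,
# regardless of how many industries are affected.
declines = tech_related[tech_related["direction"] == "decline"]
print(declines["soc_code"].nunique(), "unique occupations flagged")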
We then counted the number of unique occupations projected to experience declines in their shares of employment in an industry or group of industries due to one of these reasons. We also counted these occupations according to their major occupation group. BLS projected that some of these occupations would experience employment share declines in all industries and some would experience employment share declines in a single industry only. We counted unique occupations regardless of what industries or how many were noted (e.g., all industries or only one). We chose to do this to capture an inclusive list of occupations projected to be affected by advanced technologies, and because we are not using the list to quantify total projected employment changes. Of the 247 unique occupations BLS includes in its table as projected to have direct employment changes due to some identified reason, BLS projects that 163 will experience employment share declines— 100 of those occupations are projected to change broadly as a result of the adoption of advanced technologies. An employment share decline indicates that employment in an occupation will decline relative to others in a given industry or group of industries, not that the occupation will necessarily experience a decrease in employment in absolute terms. Occupational Information Network (O*NET) database: The O*NET database contains information about the skills, tasks, and tools (i.e., use of technology) associated with specific occupations. We downloaded two components of the database that (1) list the various work tasks associated with each occupation, and (2) list the various tools and technologies used by each occupation. In each database component, we searched for and identified tasks, tools, and technologies that involved robots in some way—e.g., tasks such as working with robots, robotic systems, or robotic applications, and tools such as welding robots, loading robots, or robot automation tools. We then counted the number of unique occupations that (1) had an associated work task related to robots, or (2) used a robot-related tool in the occupation. To understand firms’ adoption of advanced technologies and any resulting workforce effects, we met with officials representing 16 different firms that are using advanced technologies in their operations, as well as a systems integrator who provided detailed information about how several customer firms are using advanced technologies. Most of the meetings with firms were in-person site visits; three of the meetings with firms and the meeting with the systems integrator were by phone. Throughout this report, we use the term “firm” for simplicity, although the “firms” we met with included production plants of large manufacturers, single-location firms, public sector agencies, and other entities (see below). We also identify the manufacturing firms we visited as falling into one of three different size groups to describe their relative size differences from each other. The manufacturing firms we visited ranged from eight employees to thousands, according to firm officials. For the purposes of our study, we define small as fewer than 200 employees; medium as 200 employees to 1,000; and large as over 1,000 employees. 
Among the 16 firms we met with that are using advanced technologies, 10 are manufacturing firms: a small manufacturer of rubber stamps and embossing seals (also referred to as a small stamp manufacturer); two medium-sized door manufacturers; a small automotive parts manufacturer; a medium-sized automotive parts manufacturer; two large appliance manufacturers; a large automotive manufacturer; a large manufacturing corporation of household and personal care products; and a medium-sized fruit processing plant. Six are non-manufacturing firms of various types: a construction consulting company; a federal statistical agency; a food retail corporation; a municipal township; a university-affiliated medical center; and a warehouse for a regional grocery store chain. The firms about which we received information from the systems integrator were business, administrative, and customer relations offices of various firm types. To identify firms to meet with, we consulted and sought referrals from a variety of knowledgeable sources, including academic researchers, technology developer firms, technology integrator firms, and state economic development associations, as well as our own research. We selected firms that varied in size, industry sector, types of advanced technology used, and geography. We limited our focus to firms that had adopted advanced technologies and had experienced workforce effects. Our selection of firms is not a generalizable sample, but does provide illustrative examples of the adoption and workforce effects of advanced technologies. During our site visits at firms, we met with one or more management officials and, at times, with workers. We were also able to view the advanced technologies being used in operations. Our discussions with officials included topics such as motivations for adopting advanced technologies, the integration process, and any workforce effects that resulted from the technologies, including positions lost or gained and how workers' tasks and skills may have changed. Our site visits and interviews with firm officials ranged from hour-long conversations to full-day visits, so some site visits yielded more detailed information than others. In addition to the firms that use advanced technologies, we interviewed seven technology developer firms and two robotics integrator firms (in addition to the systems integrator mentioned above). We met with these firms to learn more about some of the technologies being used and the adoption process, as well as about workforce effects at these firms. We identified these developer and integrator firms from various sources, including our conversations with academic researchers and our own research. We conducted additional interviews to obtain background and context for our work. We met with individuals knowledgeable about issues related to the adoption and workforce effects of advanced technologies, such as academic researchers and economists, officials from two unions representing manufacturing workers, officials at three industry-based organizations, officials from two state economic development associations, and officials at two worker training centers. For all objectives, we also reviewed relevant federal laws and regulations.
The remainder of this appendix provides detailed information about the data and quantitative analysis methods we used to examine what is known about the workforce effects of automation and the adoption of advanced technologies (objective 1), as follows: Section 1: Analyses using data from the ACS Section 2: Analyses using data from the CPS's Displaced Worker Supplement Section 3: Analyses using data from the OES survey For each of the datasets described below, we conducted a data reliability assessment of variables included in our analyses. We reviewed technical documentation and related publications and websites with information about the data. We spoke with BLS and Census officials who maintain the datasets to gain an understanding of and provide context for the various data that we analyzed, as well as to resolve any questions about the data and to identify any known limitations. We also tested the data, as applicable, to check for logical consistency, missing data, and consistency with data reported in technical documentation. We determined that the variables we used from the data we reviewed were sufficiently reliable for the purposes of this report. Section 1: Analyses Using Data from the American Community Survey This section describes the quantitative analysis methods we used to examine employment trend correlations and the characteristics and earnings of workers in occupations susceptible to automation (as identified by Frey and Osborne; see above). We used ACS data for these analyses. The ACS is administered by the Census Bureau and is an ongoing national survey that uses a series of monthly samples to produce annually updated estimates for the same areas surveyed via the decennial census. The ACS collects a range of information about individuals from a large sample of households—over 2.2 million respondent households in 2016—including employment information such as occupation, industry, and earnings, and demographic information such as age, gender, race, ethnicity, and educational attainment. We limited our analysis to workers who were classified as current employees, and who had earned positive wage and salary income in the prior 12 months. In 2016, this resulted in observations representing 136 million workers, close to the number reported by BLS for that same period using a different survey. This report primarily used ACS data from 2010 through 2016—specifically, we relied on the Census Bureau's Public Use Microdata Sample of the ACS for the single years 2010, 2011, 2012, 2013, 2014, 2015, and 2016. Analyses of Employment Trend Correlations To test whether industries with higher concentrations of individuals in occupations susceptible to automation (as identified by Frey and Osborne) have experienced employment changes, we examined their correlation with changes in tech job concentration and changes in overall employment from 2010 through 2016. We limited the analysis to this period both because the ACS occupation codes changed in 2010 and because it allowed our results to post-date the economic recession of 2007-2009. We used industry definitions set by the ACS data, which groups some industries together—e.g., residential and nonresidential construction industries are combined in a single construction industry grouping. We defined tech jobs as those in computing, engineering, and mathematics occupations, consistent with previous GAO work on the tech field. We also examined an alternative definition of tech jobs in which we included those with "computer" in the occupation title.
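As a minimal sketch of how these two definitions can be applied to person-level ACS records (in Python with pandas), consider the following. The file name, column names, and placeholder occupation codes are hypothetical, and a simple endpoint growth rate stands in for the trend estimation described below.

import pandas as pd

# Person-level ACS records for 2010-2016 (hypothetical columns: year,
# occ_code, occ_title, ind_code, and the person weight pwgtp).
acs = pd.read_parquet("acs_pums_2010_2016.parquet")

# Primary definition: computing, engineering, and mathematics occupations.
# These codes are placeholders, not the actual list used in our analysis.
TECH_CODES = {"1005", "1006", "1340"}
acs["tech_primary"] = acs["occ_code"].isin(TECH_CODES)

# Alternative definition: any occupation with "computer" in its title.
acs["tech_alt"] = acs["occ_title"].str.contains("computer", case=False, na=False)

# Weighted tech-job counts by industry and year under the primary definition.
tech_jobs = (acs[acs["tech_primary"]]
             .groupby(["ind_code", "year"])["pwgtp"].sum()
             .unstack("year"))

# Endpoint growth rate of tech jobs in each industry, 2010 to 2016.
tech_growth = tech_jobs[2016] / tech_jobs[2010] - 1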
For both definitions, we estimated the number of tech jobs in each industry in each year, 2010-2016. We then calculated the growth rate in the number of tech jobs in each industry, and correlated that growth rate with the percentage of workers in that industry in occupations susceptible to automation (as identified by Frey and Osborne). We also estimated the number of workers overall in each industry in each year (2010-2016) and correlated the trend in total employment with the percentage of workers in that industry in occupations susceptible to automation (as identified by Frey and Osborne). We restricted our correlation analyses to those industries where the tech job growth rate or the overall employment trend was statistically significant. We performed two correlation tests. The Spearman test measures correlation between the ranks of the two sets of values. The Pearson test measures correlation between the values themselves. As shown in table 2, we found a positive but weak correlation between industries' concentrations of jobs susceptible to automation and growth in their tech jobs, based on both correlation tests and both definitions of tech jobs, and we found no meaningful correlation with change in overall employment in either test. To explore an example industry—the plastics product manufacturing industry—in further detail, we identified the number of jobs susceptible to automation within that industry, by occupation and groups of occupations. We also examined the growth in tech jobs within the industry, by tech occupation. We approximated each occupation's contribution to the overall growth of tech jobs in the industry by multiplying each occupation's growth rate over the period 2010-2016 by its employment in 2010. The growth rates for the three engineering occupations, which, when combined, account for more than half of the industry's growth in tech jobs, were each significant at the 85 percent confidence level. Analyses of Worker Characteristics and Earnings To analyze the characteristics of workers in occupations susceptible to automation (as identified by Frey and Osborne), as well as the characteristics of workers with tech jobs, we used 2016 ACS data. We examined data on the workers' gender, level of education, age, race and ethnicity, and hourly wage, and compared distributions of workers in occupations susceptible to automation and workers in all other occupations (see table 3). For race and ethnicity categories, we included only non-Hispanic members of the White, Black, Asian, and Other categories, and the Hispanic category included Hispanics of all races. The "Other" category included American Indian or Alaskan Native, Native Hawaiian or Pacific Islander, two or more races, and other race. To analyze education level, we combined all attainment levels of a high school degree or less into a single category. To estimate the hourly wage of workers, we divided each worker's wage and salary earnings by the product of usual weekly hours worked and weeks worked. To test the reliability of this measure, we compared our results to average hourly wages reported by other BLS surveys; we found that the average values were close enough to determine that this method was sufficiently reliable for our purposes. To investigate whether differences in hourly wage might be due to other factors, we estimated multiple regression models that enabled us to control for additional variables.
Specifically, we estimated wage differences between workers in occupations susceptible to automation and workers in other occupations—i.e., whether a worker was in an occupation susceptible to automation (as identified by Frey and Osborne) was our primary independent variable (a binary, yes/no variable). Because we used the natural log of the hourly wage as the dependent variable, the standard interpretation of the regression coefficient of this variable is that it represents the average log point difference in hourly wages between occupations susceptible to automation and all other occupations. This coefficient can be made to more closely approximate a percentage difference in hourly wages or an earnings gap by exponentiating the coefficient and subtracting 1. As noted previously, we limited our analysis to workers who earned positive wage and salary income in the prior 12 months. We also removed observations with outlier values for wages (e.g., wage rates above $140 per hour); this represented about 1 percent of the sample in 2016. We ran five regression models with different sets of independent variable controls. Regression (1) estimates the earnings gap without any controls (the uncorrected earnings gap). Regression (2) estimates the earnings gap with a set of independent variables that control for characteristics of the individual; these variables included age, race and ethnicity, gender, marital status, state of residence, and education level. Regression (3) estimates the earnings gap with independent dummy variables for 2-digit industry codes added; this corrects for any differences between industries at the 2-digit level. Regression (4) estimates the earnings gap with independent dummy variables for 2-digit occupation codes added; this corrects for any differences between occupations at the 2-digit level. Regression (5) includes both 2-digit industry and 2-digit occupation code dummy variables. As table 4 shows, we found a significant difference in hourly wages between workers in occupations susceptible to automation and workers in other occupations, even after independent variables to control for worker characteristics, industry, and occupation codes were included. Including the additional independent variables caused the earnings gap to fall from just over -34 percent to just over -10 percent. Regression model 3, which estimated an earnings gap of about -17.2 percent, is our preferred model, as it controls for individual worker characteristics and for any differences between industries at the 2-digit level, but does not include occupation as an independent variable. Including occupation variables controls for any differences between occupations at the 2-digit level. However, because we identify workers in jobs susceptible to automation based on their occupations, these occupation control variables are likely highly predictive of Frey and Osborne's estimated probability of automation, which is used to categorize workers in jobs susceptible to automation. We also ran these regression models for other years from 2010 to 2016 and we found substantively similar results. A simplified sketch of regression (3) appears below. Section 2: Analyses Using Data from the Current Population Survey's Displaced Worker Supplement This section discusses the quantitative analysis methods we used to compare relative job displacement rates between workers in occupations susceptible to automation (as identified by Frey and Osborne; see above) and workers in other occupations. We used data from the CPS's Displaced Worker Supplement for these analyses.
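As an illustration of the Section 1 regressions described above, the following minimal sketch estimates our preferred specification, regression (3), with the statsmodels formula interface. The file and column names are hypothetical, survey weights are omitted for brevity, and the sketch is illustrative rather than the code used to produce table 4.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Person-level 2016 ACS records (hypothetical columns); susceptible is a
# 0/1 indicator for occupations susceptible to automation.
df = pd.read_parquet("acs_wage_sample_2016.parquet")
df = df[(df["hourly_wage"] > 0) & (df["hourly_wage"] <= 140)]  # drop outlier wage rates
df["log_wage"] = np.log(df["hourly_wage"])

# Regression (3): individual controls plus 2-digit industry dummies, with
# log hourly wage as the dependent variable.
model = smf.ols(
    "log_wage ~ susceptible + age + C(race_ethnicity) + C(gender)"
    " + C(marital_status) + C(state) + C(education) + C(industry_2digit)",
    data=df).fit()

# Convert the log-point coefficient to an approximate percentage gap by
# exponentiating and subtracting 1; a coefficient of about -0.189
# corresponds to the roughly -17.2 percent gap reported for this model.
gap = np.exp(model.params["susceptible"]) - 1
print(f"Estimated earnings gap: {gap:.1%}")

Exponentiating matters for gaps of this size: reading a coefficient of about -0.189 directly as a -18.9 percent gap would overstate it relative to the -17.2 percent implied by exp(-0.189) - 1.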
The CPS is sponsored jointly by Census and BLS and is the source of official government statistics on employment and unemployment in the United States. The basic monthly survey is used to collect information on employment, such as employment status, occupation, and industry, as well as demographic information, among other things. The survey is based on a sample of the civilian, non-institutionalized population of the United States. Using a multistage stratified sample design, about 56,000 households are interviewed monthly based on area of residence to represent the country as a whole and individual states; the total sample also includes additional households, some of which are not interviewed in a given month for various reasons, such as not being reachable. The CPS Displaced Worker Supplement has been administered every other year since 1984, and provides supplemental data on persons age 20 years or older who lost a job involuntarily in the prior 3 years, including data on reasons for job displacement, as well as industry and occupation of the former job. This report used data from the January 2016 Displaced Worker Supplement. Analyses of Relative Job Displacement Rates To analyze whether workers in occupations susceptible to automation (as identified by Frey and Osborne) experience job displacement at different rates than workers in other occupations, we used data from the CPS's January 2016 Displaced Worker Supplement. We identified workers who lost or left a job involuntarily during the 3 calendar years prior to the survey (i.e., January 2013 through December 2015) because their position or shift was abolished or because there was insufficient work for them to do. We focused on these reasons for displacement as those that most closely approximate how advanced technologies could replace workers at a given firm. We also limited our analysis to those workers who did not expect to be recalled to their jobs within the next 6 months. We categorized these displaced workers according to the occupations from which they were displaced (e.g., workers displaced from occupations susceptible to automation and workers displaced from all other occupations). We calculated relative job displacement rates as the number of displacements over the period 2013-2015 reported by a given population (e.g., workers in occupations susceptible to automation), divided by that population's total current employment in January 2016. Although this measure does not represent the total number of jobs that existed annually that could have resulted in displacements, it allows us to control for population size and to approximate a relative displacement rate. We examined various populations, including occupations identified as susceptible to automation by Frey and Osborne, occupations BLS projects will experience declines in their share of employment due to advanced technologies (see above), and production occupations. To categorize occupations, Frey and Osborne and BLS use Standard Occupational Classification (SOC) codes, whereas the Displaced Worker Supplement uses Census occupation codes. We used a crosswalk provided by Census to match these occupation classifications. SOC codes have a hierarchical structure—e.g., a "broad" occupation group contains a subset of "detailed" occupations. For example, SOC code 13-1031 is the detailed occupation "claims adjusters, examiners, and investigators" within the broad group SOC 13-1030 ("claims adjusters, appraisers, examiners, and investigators").
When a direct crosswalk between SOC and Census occupation codes was not available at the detailed level, we used the associated broad SOC group to identify a Census occupation code. There were some respondents in the Displaced Worker Supplement who did not report the occupation from which they were displaced, and these were dropped from our analysis. To estimate the sampling errors for each estimate, we used strata defined by state because the Displaced Worker Supplement data did not provide replicate weights or the sampling strata necessary to obtain standard errors. When estimating the number of job displacements over the period 2013-2015 reported by a given population (e.g., workers in occupations susceptible to automation), we used the supplement weight for respondents. When estimating the population's total current employment in January 2016, we used the CPS 2016 weight for respondents. We used a Taylor series linearization to estimate the sampling error of the ratio of the estimated number of job displacements over the period 2013-2015 to estimated current employment in 2016. While our primary analysis examined relative displacement rates for workers in occupations susceptible to automation, we also conducted sensitivity analyses by considering other groups of occupations. Specifically, we examined the relative displacement rates of the following groups: Jobs susceptible to automation had a relative displacement rate of 3.4 percent, +/- 0.3, and all other jobs combined had a relative displacement rate of 2.9 percent, +/- 0.2. Jobs in occupations BLS projects will experience relative declines in employment due to advanced technologies (see above) had a relative displacement rate of 3.7 percent, +/- 0.5, and all other jobs combined had a relative displacement rate of 3.6 percent, +/- 0.2. Jobs in production occupations had a relative displacement rate of 3.7 percent, +/- 0.8, and all other jobs combined had a relative displacement rate of 3.1 percent, +/- 0.2. Section 3: Analyses Using Data from the Occupational Employment Statistics Survey This section discusses the quantitative analysis methods we used to analyze geographic reliance on occupations susceptible to automation (as identified by Frey and Osborne; see above). We used OES data for these analyses. The OES survey is a federal-state cooperative effort between BLS and state workforce agencies, which collects information on occupational employment and wage rates for wage and salary workers in nonfarm establishments. The survey is based on a sample drawn from about 7.6 million in-scope nonfarm establishments in the United States that file unemployment insurance reports to the state workforce agencies. Using a stratified sample design, about 200,000 establishments are surveyed semiannually and employment estimates are based on six panels of data collected over a 3-year cycle. The final in-scope sample size when six panels are combined is approximately 1.2 million establishments. The OES survey includes all full- and part-time wage and salary workers in nonfarm industries, but excludes self-employed workers, owners and partners in unincorporated firms, household workers, and unpaid family workers. OES data provide occupational employment estimates by industry for the country as a whole, for individual states, and for more local geographic areas (e.g., metropolitan and nonmetropolitan areas). This report used data from the May 2017 Occupational Employment Statistics.
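Before turning to the geographic analysis in Section 3, the relative displacement rate computation described above can be sketched as follows. The file and column names are hypothetical, and the Taylor series linearization of the ratio's sampling error is noted but not implemented.

import pandas as pd

# Person-level records from the January 2016 CPS and its Displaced Worker
# Supplement (hypothetical columns). In the actual analysis, displaced
# workers are categorized by the occupation from which they were displaced.
cps = pd.read_parquet("cps_dws_jan2016.parquet")
grp = cps["susceptible"] == 1

# Numerator: displacements during 2013-2015 for the group, weighted by the
# supplement weight.
num = cps.loc[grp & (cps["displaced_2013_2015"] == 1), "supplement_weight"].sum()

# Denominator: the group's total current employment in January 2016,
# weighted by the basic CPS weight.
denom = cps.loc[grp & (cps["employed_jan_2016"] == 1), "cps_weight"].sum()

rate = num / denom  # about 3.4 percent for this group in our analysis
print(f"Relative displacement rate: {rate:.1%}")

# The sampling error of this ratio was estimated with a Taylor series
# linearization using strata defined by state (not shown here).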
Analyses of Geographic Reliance on Occupations Susceptible to Automation To analyze what U.S. geographic areas rely more heavily on employment in occupations susceptible to automation, we used data from the May 2017 OES. For each local geographic area, we estimated how many jobs were in occupations identified as susceptible to automation by Frey and Osborne (see above) and how many jobs were in all other occupations. We also estimated how many jobs were in each group of occupations nationwide (using national-level data). We then calculated a location quotient for each local geographic area, which measures the proportion of each area's jobs that were in occupations susceptible to automation compared to the national proportion of employment in these occupations. This measure depicts the extent to which a local geographic area relies on certain jobs for the employment of its population, relative to other areas. Based on their location quotients, we categorized and mapped 589 local geographic areas into the following three groups: Relatively High Concentration: Areas where the proportion of jobs susceptible to automation is at least 5 percentage points greater than the national average, and the difference is statistically significant at the 95 percent confidence level. This translates to an estimated location quotient of at least 1.1. Average or Relatively Low Concentration: Areas where the proportion of jobs susceptible to automation is no more than 5 percentage points above the national average, or is below it. Undetermined Reliance: Areas where the proportion of jobs susceptible to automation is undetermined. We classify an area's proportion as "undetermined" if the estimated margin of error at the 95 percent confidence level is larger than 5 percentage points. We conducted one-sided z-tests at the 95 percent confidence level to analyze each area's estimated location quotient. The null hypothesis is that the area location quotient is less than or equal to 1.1 (i.e., the proportion of employment in these occupations in an area is no more than 1.1 times the national proportion). The alternative hypothesis is that the area location quotient is greater than 1.1; a sketch of this calculation and test appears below. Because estimated area employment proportions are based on a sample, we also restricted our tests to those areas that were reliable for our purposes by requiring that areas had sampling errors of no greater than 5 percentage points for a 95 percent confidence interval. According to BLS, employment estimates for individual occupations in individual local geographic areas may not be available in the public data for a variety of reasons, including, for example, failure to meet BLS quality standards or to ensure the confidentiality of survey respondents. Because we aggregate data across multiple occupations, our methodology treats these cases as if employment in the given occupation in the given area were zero, which is not the case and which introduces imprecision into our analysis and the resulting location quotients. However, because ensuring confidentiality is a primary concern, we assume that most of these cases where data are suppressed would have relatively small numbers of jobs, and thus have minimal effects on our results. To test this assumption and to ensure the appropriateness of our methods, we compared the total number of jobs we analyzed across all local geographic areas to the total number of jobs reported at the national level (which do not have data suppressed).
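A minimal sketch of the location quotient calculation and the one-sided test follows. The file and column names are hypothetical, share_se is an assumed precomputed standard error for each area's share, and summing area aggregates to form the national proportion is a simplification of the national-level data used in the report.

import pandas as pd
from scipy.stats import norm

# Area-level OES aggregates (hypothetical columns): weighted employment in
# occupations susceptible to automation, total employment, and the standard
# error of each area's susceptible-share estimate.
areas = pd.read_csv("oes_area_aggregates_may2017.csv")

national_share = areas["susceptible_jobs"].sum() / areas["total_jobs"].sum()
areas["share"] = areas["susceptible_jobs"] / areas["total_jobs"]
areas["lq"] = areas["share"] / national_share

# One-sided z-test of H0: LQ <= 1.1 against H1: LQ > 1.1 at the 95 percent
# confidence level, treating the national share as fixed.
z = (areas["share"] - 1.1 * national_share) / areas["share_se"]
areas["high_concentration"] = z > norm.ppf(0.95)

# Areas whose share estimate has a 95 percent margin of error larger than
# 5 percentage points are classified as undetermined.
areas["undetermined"] = norm.ppf(0.975) * areas["share_se"] > 0.05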
The total number of jobs analyzed across our local geographic areas was 5.5 percent lower than the total number of jobs reported at the national level, which we concluded was within an acceptable threshold to determine that the data were sufficiently reliable for our purposes and our analysis. In addition, according to BLS, because occupational employment estimates are rounded to the nearest 10 before publication, estimates of location quotients calculated from the public data will be subject to some rounding error, compared with location quotients calculated from the unrounded pre-publication data. Appendix II: Comments from the Department of Labor Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Blake Ainsworth (Assistant Director), Michael Kniss (Analyst-in-Charge), Shilpa Grover, and John Lack made key contributions to this report. Also contributing to this report were James Bennett, Benjamin Bolitzer, Melinda Cordero, Holly Dye, Jonathan Felbinger, Sheila R. McCoy, Jean McSween, James Rebbe, Krishana Routt-Jackson, Benjamin Sinoff, Almeta Spencer, and Sonya Vartivarian.
Why GAO Did This Study Advanced technologies—including artificial intelligence and robotics—are continually changing and emerging. While robots have existed for decades, modern robots may be equipped with learning capabilities that enable them to perform an expansive array of tasks. Advanced technologies are likely to affect the U.S. workforce by enabling firms to automate certain work tasks. Questions exist about how prepared federal agencies are to monitor workforce changes, promote economic growth, and support workers who may be negatively affected by automation. GAO was asked to examine workforce issues related to the adoption of advanced technologies. This report examines (1) what is known about how the adoption of advanced technologies affects the U.S. workforce; (2) federal efforts to track these effects; (3) considerations that led selected firms to adopt advanced technologies and the risks they faced; and (4) ways technology adoption has affected the workforce at selected firms. GAO identified available federal workforce data, analyzed the extent to which those data could identify and measure workforce effects due to advanced technologies, reviewed selected research, and analyzed federal data on occupations susceptible to automation. GAO used data from the American Community Survey (2010-2016), the Current Population Survey's Displaced Worker Supplement (2016), and the Occupational Employment Statistics (2017). GAO met with 16 firms that are using advanced technologies in their operations and seven firms that develop advanced technologies; GAO interviewed managers and workers and observed firms' use of technologies. The selected firms varied in size, industry sector, types of technologies used, and geographic location. Findings from discussions with the firms are not generalizable, but provide illustrative examples of the adoption of advanced technologies. GAO interviewed officials from federal agencies, including Commerce and DOL, academic researchers, economists, labor union officials, industry association officials, officials from state economic development associations, and other knowledgeable individuals. GAO also reviewed relevant academic work. What GAO Found Although existing federal data provide useful information on the U.S. workforce, they do not identify the causes of shifts in employment. As a result, it is difficult to determine whether changes are due to firms adopting advanced technologies, such as artificial intelligence and robots (see photo), or other unrelated factors. In lieu of such data, GAO analyzed employment trends and characteristics of jobs that selected researchers identified as susceptible to automation, and found that: industries with a greater proportion of jobs susceptible to automation were more likely to have experienced growth in tech jobs (i.e., computing, engineering, and mathematics) from 2010 to 2016—possibly an indicator of industries preparing to adopt advanced technologies; occupations susceptible to automation and industries with a greater share of these jobs did not experience meaningfully higher job loss rates in this period, though it could be too soon to observe these effects; and certain groups, such as workers with no college education and Hispanic workers, tended to hold jobs susceptible to automation in 2016, and thus could be disproportionately affected by changes if they occur. The Department of Labor (DOL) has a role in tracking changes in the U.S.
workforce, but the data it collects related to the workforce effects of advanced technologies are limited. DOL's Bureau of Labor Statistics (BLS) identifies occupations projected to experience staffing pattern changes and the most significant causes, such as use of robotics, but its efforts are not designed to capture all instances of changes due to advanced technologies. DOL's Occupational Information Network program also collects data on tasks and technologies in occupations, such as robotics, but it was not designed to track changes over time. According to BLS, these efforts and other data they collect provide some, but not all, of the information required to identify and systematically track the impact of automation on the workforce. Without comprehensive data that link technological changes to shifts in the workforce, DOL lacks a valuable tool for ensuring that programs it funds to support workers are aligned with local labor market realities, and employers and job seekers need to rely on other sources of information to decide what training to offer or seek. The Department of Commerce's Census Bureau (Census) has started tracking technology adoption and resulting workforce effects in the new Annual Business Survey, which was administered for the first time in June 2018 with significant support from the National Science Foundation. This first survey asked firms about their use of advanced technologies and initial results will be available in late 2019. When the survey is next administered in summer 2019, Census plans to ask additional questions about firms' motivations for adopting technologies and effects the technologies might have on workers. This survey could provide information about the prevalence of technology adoption and workforce changes (e.g., declines in production workers or increases in supervisory workers), but it is not intended to provide information on the magnitude of workforce changes. Also, it remains unclear what limitations, if any, the survey data may have. According to officials from the 16 firms GAO interviewed, cost savings and other considerations led them to adopt advanced technologies, despite facing certain risks with the new technologies. Officials from these firms typically identified cost savings and improving job or product quality as primary motivations for adopting advanced technologies. For example, an automotive parts manufacturer said the firm adopted robots to reduce costs by using fewer workers. A door manufacturer said the firm installed two robots to lift heavy doors onto a paint line to reduce the number of worker injuries. A rubber stamp manufacturer said acquiring a robot (pictured above) allowed it to purchase and process raw materials instead of buying precut materials. Firm officials also identified risks related to adopting advanced technologies that could affect their return on investment, such as risks related to the reliability of technology and working with new tech developers. Among the firms GAO met with, officials described various ways technology adoption has affected their workforces. On one hand, officials at many firms said they needed fewer workers in certain positions after adopting technologies. The firms generally redeployed workers to other tasks, and in some cases, reduced the size of their workforces, typically through attrition. 
For example, a medical center GAO visited adopted autonomous mobile robots to transport linens and waste, among other things, which officials said eliminated 17 positions and shifted workers to other positions. On the other hand, officials at some firms said advanced technologies helped them increase competitiveness and add positions. An appliance manufacturer used advanced technologies to produce more of its own parts instead of relying on suppliers and, as a result, increased the number of production jobs, according to officials. Firm officials also noted that workers' tasks and skills have been changing due to advanced technologies (see figure). Workers who can adapt to new roles may experience positive effects, such as work that is safer, while those who cannot adapt may be negatively affected. What GAO Recommends GAO recommends that DOL develop ways to use existing or new data collection efforts to identify and systematically track the workforce effects of advanced technologies. DOL agreed with GAO's recommendation, and plans to identify and recommend data collection options to fill gaps in existing information about how the workplace is affected by new technologies, automation, and artificial intelligence. DOL also stated that it will continue coordinating with the Census Bureau on research activities in this area.
Background Government-wide Reform Plan Requirements and Timeline As shown in figure 2, a number of activities led up to OMB publishing the reform plan in June 2018, and subsequently OMB has provided updates on the proposals in the reform plan. In March 2017, the President issued an executive order requiring comprehensive reorganization plans for executive branch agencies. In April 2017, OMB provided guidance to federal agencies for developing their respective reform plans. According to this guidance, the government-wide reform plan was to have been based on the agency reform plans, OMB-coordinated crosscutting proposals, and public input. In addition, OMB’s guidance indicated that OMB would track the progress of the reforms in coordination with the President’s Management Council. OMB’s guidance also stated that it would track progress of the reforms by leveraging the federal performance planning and reporting framework that was originally put into place by the Government Performance and Results Act of 1993 (GPRA), and significantly enhanced by the GPRA Modernization Act of 2010 (GPRAMA). Accordingly, OMB’s guidance explained that progress would be tracked through the use of cross- agency priority (CAP) goals, agency priority goals, and Performance.gov. The President’s Management Agenda In March 2018, OMB released the President’s Management Agenda, which identified a set of CAP goals, required under GPRAMA. The CAP goals target areas where multiple agencies must collaborate to effect change, and agencies must report CAP goal progress in a manner the public can easily track. The Government-wide Reform Plan In June 2018, the administration released its government-wide reform plan, Delivering Government Solutions in the 21st Century: Reform Plan and Reorganization Recommendations (reform plan). In July 2019, the administration reported on the first year of progress toward its reform proposals. According to the 1-year update, the President’s Fiscal Year 2020 Budget included 18 of the proposed reform proposals in whole, or in part, and also described administrative actions by agencies to implement more than 20 of its 32 proposals. Of these proposals, the administration reported progress toward four of the five reforms we selected for review: (1) moving personnel security clearance background investigations from OPM to DOD; (2) solving the cybersecurity workforce shortage; (3) establishing the GEAR Center; and (4) reorganizing OPM. OMB officials said that they are not planning to move forward with the customer experience improvement capability reform during fiscal year 2020 because they are pursuing other customer experience activities, such as those included in the CAP goal for Improving Customer Experience with Federal Services. The Extent to Which Key Practices for Effective Reforms Were Followed Has Varied, and Agencies Identified Some Legal Authorities for Implementation Moving Background Investigations from OPM to DOD: Most Selected Key Practices Addressed We added the government-wide personnel security clearance process to our High-Risk List in January 2018 because it continues to face challenges in the timely processing of clearances, measuring the quality of investigations, and ensuring the security of related information technology (IT) systems. 
The National Defense Authorization Act (NDAA) for Fiscal Year 2018 included provisions that resulted in the transfer of background investigations from OPM's National Background Investigations Bureau (NBIB) to DOD for certain DOD personnel, which represented approximately 70 percent of all federal background investigations performed by NBIB. Subsequently, the selected reform proposal recommended moving the remaining 30 percent of investigations to DOD. According to the reform plan, this transfer would provide an opportunity to conduct the background investigations more efficiently and economically than having them be performed by separate agencies. In January 2019, DOD formally established the Personnel Vetting Transformation Office (PVTO) to implement and oversee activities related to the transfer of NBIB functions. In April 2019, the President issued Executive Order 13869, which generally provided for the transfer of the remaining background investigation operations from OPM to DOD. The executive order also called on the Secretary of Defense to enter into an agreement with the Director of OPM to set forth expectations and designate the appropriate support functions for the transfer. As directed, in June 2019, OPM and DOD signed an interagency memorandum that set forth expectations for activities necessary for the transfer of functions of NBIB and associated employees and resources from OPM to DOD, including measurable deliverables, key considerations for executing deliverables, and processes for coordination and governance. According to documents we received from DOD, the transfer of NBIB functions to DOD occurred by September 30, 2019, as required by the April executive order. As shown in figure 3, OMB, OPM, and DOD have generally addressed most key reform practices in implementing the transfer of background investigations from OPM to DOD. According to DOD, more than 99 percent of NBIB employees, totaling 2,979 individuals, accepted positions transferring them to DOD's Defense Counterintelligence and Security Agency (DCSA) by September 30, 2019. According to DOD's PVTO Director, 17 individuals chose not to transfer, and instead retired as permitted. Going forward, we will continue to monitor the government-wide personnel security clearance process as part of our work to identify and assess high-risk issues across the government. Establishing Goals and Outcomes OMB, OPM, and DOD have generally addressed key practices related to establishing goals and outcomes. The NDAA for Fiscal Year 2018 and Executive Order 13869 established a goal and related requirements for the transfer of OPM's NBIB personnel, resources, and functions to DOD. Specifically, the executive order established a goal to complete the transfer of all NBIB administrative and operational functions to DOD by September 30, 2019. The executive order also outlined a series of deliverables and objectives for OMB, OPM, and DOD to achieve during the transfer. For example, the executive order required DOD to execute a written agreement with OPM to establish expectations for the transition period related to detailing personnel, safeguarding information technology, contracting, and funding background investigations, among others. OPM and DOD achieved their intended goal, and as of September 30, 2019, DOD is the primary provider of national security background investigations for the federal government.
As directed, OPM and DOD signed an interagency agreement in June 2019 to address expectations, including governance, information technology, contracting, and funding issues, among others. According to documents provided by DOD, the Transfer Tollgate group and the Executive Steering Committee provided interagency leadership including an executive-level decision venue for implementation, resourcing, and other decisions. According to DOD, these interagency groups also provided accountability for implementation milestones. Under the leadership of these two groups, DOD officials shared with us that they worked with OPM to resolve a host of issues such as the transfer of personnel, funding for transfer costs, transfer of information technology assets, financial management issues, and acquisition concerns, among other critical issues. To help address differences in the financial management and funding of background investigations between OPM and DOD, the agreement required DOD to establish a Working Capital Fund to fund DCSA’s background investigation mission by September 1, 2019. According to DOD officials and the agency’s Transfer Status Dashboard, the Working Capital Fund was established prior to the September 1, 2019, deadline; and, as of October 7, 2019, the fund had a balance of approximately $1 billion. Neither the NDAA for Fiscal Year 2018 nor the executive order outlined measurable outcomes related to the efficient and effective delivery of background investigations, but rather goals and deliverables related to transferring NBIB functions to DOD, among other things. According to DOD officials we spoke with, the reform’s objective was the timely transfer of background investigation functions and coordination between affected agencies and stakeholders. DOD officials explained that following completion of the transfer, on October 1, 2019, PVTO, in coordination with other DOD components and federal stakeholders, began work transforming DCSA’s processes and procedures, including the background investigation process, to improve outcomes. Involving Employees and Key Stakeholders OMB, OPM, and DOD generally addressed key practices related to involving employees and key stakeholders. OPM and DOD generally communicated with affected employees and key stakeholders and involved them in the implementation of the transfer of NBIB functions to DOD. Agencies’ communication included email correspondence to affected staff from agency leaders including OPM’s Acting Director, and the Director of NBIB. These emails provided NBIB staff regular updates on the status and details surrounding the transfer of NBIB functions to DOD. Based on documents provided by OPM, communication to affected staff began in June 2017, informing staff that Congress was considering a legislative proposal to move certain NBIB functions to DOD. According to documents we received, communication with staff has continued regularly since this time, including a July 29, 2019, message to affected staff with an official notice that NBIB employees would be offered an appointment to DOD’s DCSA effective September 29, 2019. This notice explained that OPM’s NBIB employees accepting this appointment would transfer to DOD without changes to their duty stations, grades, or benefits. In addition to email communication, in-person town hall meetings were held between agency leaders and affected staff to provide updates on the status of the transfer and answer questions. 
According to OPM’s then NBIB Director, a July 2017 town hall was held addressing the congressional proposal to move the majority of NBIB staff to DOD. The Director also reported that OPM and DOD had worked via meetings, information exchanges, site visits, and communication at all levels in the organization to assemble information on the implication of the transfer and its potential impacts. OPM officials testified at a number of hearings in 2018 and 2019 related to the transfer, and OPM officials told us that they joined DOD in providing quarterly briefings required by the NDAA for Fiscal Year 2018, on the status and progress of the transfer. DOD and OPM also developed a Joint Transfer Plan that described strategic communication activities with affected employees, contractors, and other stakeholders including public media outlets, ourselves, and state and local law enforcement agencies, among others. DOD officials at the PVTO explained that they developed a more detailed communication plan in March 2019 that was implemented prior to the transfer. Addressing High-Risk Areas and Longstanding Management Challenges OMB, OPM, and DOD have partially addressed key practices related to addressing high-risk areas and longstanding management challenges. As previously mentioned, we placed the government-wide personnel security clearance process on our High-Risk List because of continuing challenges in the timely processing of clearances, measuring the quality of investigations, and ensuring the security of related IT systems. While OMB and lead agencies have considered our related high-risk work, the reform proposal and implementation plans do not demonstrate how the transfer and delegation of background investigation functions from OPM to DOD will address these challenges. Moreover, in November 2019, OPM’s Inspector General identified the background investigation legacy information systems as an ongoing top management challenge that will need to be addressed by both OPM and DOD moving forward. The Director of the PVTO told us that the office’s initial goal was to ensure a smooth and timely transition of functions from OPM’s NBIB to DOD by the beginning of fiscal year 2020. The Director also told us that after the transfer occurred, the office would shift its focus to address our high-risk area by, among other things, transforming these security clearance services to optimize processes government-wide. Specifically, the PVTO charter established a goal to “identify efficiencies to be gained, areas where the organizational structure and business services may be incomplete, maximize synergy where possible, and propose mitigation strategies to address gaps and shortfalls.” We will continue to monitor the government’s progress toward addressing security clearance challenges as part of our work to track high-risk issues across the government. Leadership Focus and Attention OMB, OPM, and DOD have generally addressed key practices related to leadership focus and attention. In particular, Executive Order 13869 outlined the roles and responsibilities of OMB, OPM, and DOD, and authorized a new office (PVTO) to assist in the execution of the transfer. The executive order also clarified agencies’ roles and requirements for coordinating the transfer, delegation, and other activities. 
Specifically, the executive order directed the Secretary of Defense and the OPM Director, in consultation with the OMB Director and the Security Executive Agent, to provide for the transfer of the bulk of OPM's investigative functions to DCSA, along with any appropriate OPM-associated personnel and resources, including infrastructure and certain investigation-related support functions.

With regard to a dedicated implementation team, PVTO was responsible for ensuring coordination and resource alignment during the transfer, as well as ensuring that personnel security background investigations continued without disruption during the transfer. The PVTO Director told us in July 2019 that his team reports directly to the Office of the Under Secretary of Defense for Intelligence. The Director stated that he has experience in the areas of acquisitions, mergers, and reorganizations, and has support from experts and top leadership throughout the department. In addition, the PVTO charter states that the office is to be composed of employees with extensive experience and expertise in personnel vetting processes and reform efforts, as well as business and technology innovation, program evaluation, acquisitions and mergers, and organization and change management.

Managing and Monitoring

OMB, OPM, and DOD have generally addressed key practices related to managing and monitoring. Specifically, PVTO developed a joint transfer plan outlining critical assumptions for the transfer, major activities, and time frames across nine functional areas, including personnel, training, information technology, financial management, acquisitions, strategic communications, and security, among others. For each functional area, the transfer plan provided a summary of the functional area's objective and a set of recommended major activities. For example, the functional area for IT had an objective to provide secure, current hardware and software in compliance with DOD and federal standards, and to support the unique requirements of a highly mobile, geographically dispersed workforce managing significant volumes of personally identifiable information and other sensitive data. Major activities included: (1) the transfer of IT infrastructure, (2) the completion of a gap analysis to determine which NBIB systems and hardware are transferrable or require new acquisitions, and (3) the provision of secure devices that support mobile operations.

The PVTO Director also showed us a detailed implementation plan organized around the nine functional areas identified in the broader joint transfer plan. The implementation plan tracked thousands of activities and provided a detailed timeline for completion. The Director also provided us a dashboard that his team used to track implementation progress. The Director told us that his office used the dashboard to manage and monitor the transfer daily. The dashboard allowed the implementation team to identify areas where attention was needed using red, yellow, and green stoplight indicators signaling the status of major objectives (a simplified sketch of this kind of tracker appears at the end of this subsection). The annual assessments of timeliness and quarterly briefings required by the NDAA for Fiscal Year 2018 also serve as mechanisms for Congress and the executive branch to monitor timeliness, costs, and continuous evaluation, among other things. OMB also publishes quarterly milestone progress and metrics on the related Security Clearance, Suitability, and Credentialing Reform cross-agency priority goal on Performance.gov.
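For readers who want a concrete picture of this kind of tracking mechanism, the following is a minimal sketch, not DOD's actual dashboard: the functional areas mirror those named in the joint transfer plan, but the activity names and statuses are hypothetical, and the grouping logic simply surfaces non-green items for leadership attention.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    """Stoplight indicators analogous to those DOD described."""
    GREEN = "on track"
    YELLOW = "needs attention"
    RED = "at risk"


@dataclass
class Activity:
    functional_area: str  # one of the nine functional areas, e.g., "Information Technology"
    name: str             # hypothetical activity name
    status: Status


def areas_needing_attention(activities: list[Activity]) -> dict[str, list[Activity]]:
    """Group non-green activities by functional area so leadership can triage them."""
    flagged: dict[str, list[Activity]] = {}
    for activity in activities:
        if activity.status is not Status.GREEN:
            flagged.setdefault(activity.functional_area, []).append(activity)
    return flagged


# Hypothetical entries; the report does not disclose the dashboard's contents.
tracker = [
    Activity("Information Technology", "Transfer IT infrastructure", Status.GREEN),
    Activity("Information Technology", "Complete systems gap analysis", Status.YELLOW),
    Activity("Financial Management", "Stand up Working Capital Fund", Status.GREEN),
    Activity("Acquisitions", "Novate fieldwork support contracts", Status.RED),
]

for area, items in areas_needing_attention(tracker).items():
    for item in items:
        print(f"{area}: {item.name} -> {item.status.value}")
```

A real tracker of this kind would also carry owners, due dates, and escalation history, but the stoplight grouping above is the core of how such a dashboard flags where attention is needed.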
Employee Engagement

OMB, OPM, and DOD have generally addressed key practices related to employee engagement. In addition to the communication and outreach activities described above, OPM and DOD have undertaken additional efforts to engage affected employees and monitor levels of employee engagement at both agencies. For example, according to DOD officials, to engage and communicate with affected employees, the agency held several town hall meetings to provide information and answer questions. They also said that DOD leadership regularly emailed affected staff to provide updates on the status of the transfer and held separate question-and-answer sessions to keep staff informed and engaged. According to PVTO planning documents, the office also developed a strategy to achieve stakeholder buy-in through empowering leaders and through efforts to build a coalition of stakeholders around a common vision for the future of the background investigation function at DOD.

In April 2019, OPM also conducted an internal survey of agency staff to collect information on employees' perceptions of the transition to DOD, personal work experiences, satisfaction with their job, and any intent to leave DOD and reasons for leaving. The survey asked NBIB employees the extent to which they felt informed about the upcoming transition to DOD. Of the roughly one-third of staff who responded, 35 percent felt extremely or moderately informed, 32 percent felt somewhat informed, and 33 percent felt slightly or not at all informed. Approximately 75 percent of the survey respondents reported that they had enough information to do their job well, and 74 percent reported that they were proud to tell others they worked at their organization. When asked about satisfaction with involvement in decisions that affect their work, 38 percent of respondents were positive, 34 percent were neutral, and 28 percent were negative. OPM officials told us that they continued to monitor engagement of NBIB staff throughout the transition.

Strategic Workforce Planning

OPM and DOD have partially addressed key practices related to strategic workforce planning. In March 2019, we reported that, to make progress on removing the Government-wide Personnel Security Clearance Process from our High-Risk List, OPM and DOD should develop and implement a comprehensive strategic workforce plan that identifies the workforce needed to meet the current and future demand for its services, as well as reduce the current backlog to a manageable level. OPM completed this action in September 2019 with the release of the NBIB Strategic Workforce Plan for the Background Investigation Mission. The strategic workforce plan includes initiatives to strengthen investigative workforce capacity and training, promote the use of different hiring authorities, and provide succession planning, among other initiatives. According to the plan, senior leadership will build upon the strategic workforce plan to create an implementation strategy. While OPM has taken action, DOD has yet to complete its workforce plan. As of October 2019, DOD's strategic workforce plan for the new DCSA enterprise was under development.

Moving Background Investigations from OPM to DOD: Agencies Identified a Number of Legal Authorities

In response to our request for information, OPM, DOD, and OMB provided information regarding the authorities they are using to implement the reform proposal to move all background investigations from OPM to DOD.
According to OPM and OMB, the legal authorities by which NBIB moved to DOD consisted of section 925 of the National Defense Authorization Act for Fiscal Year 2018 (NDAA 2018) and Executive Order 13869, issued in April 2019, which re-designated DOD's DCSA as the primary investigative service provider for national security investigations. OPM also cited 5 U.S.C. § 1104, which permits OPM to delegate certain personnel management functions to other agencies.

Section 925 of the NDAA 2018 authorized DOD to conduct its own background investigations and required DOD to begin carrying out an implementation plan required under the National Defense Authorization Act for Fiscal Year 2017 (NDAA 2017) by October 1, 2020. The NDAA 2018 also required the Secretary of Defense, in consultation with the OPM Director, to provide for a phased transition of DOD background investigations from OPM to DOD. According to OPM, the DOD background investigations, consisting of investigations for civil service, military, contract, and nonappropriated fund personnel, constitute approximately 70 percent of the work performed by NBIB.

Executive Order 13869 provided for the transfer of the primary responsibility for conducting national security background investigations, government-wide, from OPM to DOD. The executive order designated DOD, rather than OPM, as the primary entity for conducting background investigations for national security adjudications, pursuant to and consistent with the NDAA 2018 and section 3001(c) of the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA). According to OPM, this has the effect of moving the remaining national security investigations, not already transferred by section 925 of the NDAA 2018, to DOD. The executive order also acknowledged that OPM will delegate, pursuant to 5 U.S.C. § 1104, other background investigation functions to DOD for non-DOD personnel, such as investigations performed to enable the adjudication of the subject's suitability or fitness for federal employment, eligibility for logical or physical access to systems and facilities, fitness to perform work for a federal agency under a government contract, and fitness to work as a nonappropriated fund employee.

In accordance with the executive order, DOD and OPM signed an agreement on June 25, 2019, that set forth the expectations for necessary activities for the transfer of functions of the NBIB from OPM to DOD. The agreement provided that the period of transition was from June 24, 2019, through September 30, 2019. The agreement covered such areas as personnel, information technology, facilities and property, contracting, administrative support, records access, claims, and funding.

Solving the Cybersecurity Workforce Shortage: Most Selected Key Practices Partially Addressed

This reform proposal directs OMB and DHS, in coordination with other agencies, to prioritize and accelerate efforts to recruit, evaluate, hire, pay, and distribute cybersecurity talent across the federal government. Ensuring the cybersecurity of the nation is a longstanding challenge that has been on our High-Risk List for more than two decades. Efforts to solve the cybersecurity workforce shortage will help to address a number of high-risk issues we have previously identified.
To accomplish the objective of filling cybersecurity vacancies, the reform lays out a series of projects and activities intended to identify and close workforce skills gaps and develop a standardized approach to hiring, training, and retaining qualified cybersecurity professionals. Specifically, the proposal calls for:

identifying and categorizing the federal cybersecurity workforce using the National Initiative for Cybersecurity Education Cybersecurity Workforce Framework (NICE framework);

implementing DHS's Cyber Talent Management System (CTMS) with options to expand the capability across the government;

rationalizing and expediting the security clearance process;

standardizing training for cybersecurity employees;

increasing the mobility of cybersecurity positions;

developing plans to establish a cybersecurity reservist program to provide needed surge capacity;

reskilling federal employees to fill critical cyber positions; and

rationalizing the size and scope of federal cybersecurity education programs.

As shown in table 2, OMB, DHS, and other federal agencies have made progress in implementing certain projects and activities included in the reform proposal. Although OMB and DHS have several projects and activities underway related to this reform, they did not provide us with information about the government-wide goals or implementation plans for the proposal. In November 2019, OMB staff told us that they did not have additional information to share regarding their application of key reform practices because they are still developing this reform. As a result, we found that most of the key reform practices were partially met (see figure 4). We did obtain information showing that OMB and DHS addressed key practices for some of the projects and activities included in the reform proposal, but the extent to which these practices were being applied to the reform proposal as a whole, or being coordinated government-wide, was unclear.

Establishing Goals and Outcomes

OMB and DHS have partially addressed key practices related to establishing goals and outcomes. We found that this reform established an objective to solve the cybersecurity workforce shortage across the government, and DHS established outcome-oriented goals and performance measures for certain agency-specific projects that are part of the reform. For example, as shown in table 2, DHS established a measure to hire at least 150 cybersecurity professionals at the agency during fiscal year 2020 using its new Cyber Talent Management System. In addition, DHS provided us its 2017 Comprehensive Cybersecurity Workforce Update, which includes an array of data and analysis, including cybersecurity workforce trends, metrics on DHS components' vacancies, attrition, capacity gaps, hiring, and other information describing the status of the agency's cybersecurity workforce. The administration also released a National Cyber Strategy in September 2018 outlining broad activities related to the government-wide reform, such as building a talent pipeline, reskilling employees, and improving the process of recruiting and retaining qualified cybersecurity professionals. These documents may provide a first step toward developing clear outcome-oriented goals and performance measures for the reform as a whole. However, OMB and DHS have not yet established measurable outcome-oriented goals for the government-wide projects and activities outlined in the reform proposal.
For example, there are no government-wide measurable goals for hiring cybersecurity professionals across the government, reductions in attrition, training, or other aspects of the reform. As shown in table 2, OMB and agencies have made progress in a number of areas related to the reform; however, without establishing government-wide measurable goals and outcomes, OMB and DHS will not be able to determine whether progress is being made across the federal government to solve the cybersecurity workforce shortage.

Involving Employees and Key Stakeholders

OMB and DHS have partially addressed key practices related to involving employees and key stakeholders. We obtained information on targeted outreach to employees and stakeholders for certain projects and activities outlined in the reform, but as of November 2019, OMB and DHS did not have information on how they were addressing these key practices for the reform as a whole. For example, officials from DHS's Cybersecurity and Infrastructure Security Agency (CISA) told us that they participate in interagency coordination activities related to the NICE framework with OMB, the Department of Commerce's National Institute of Standards and Technology (NIST), the Federal Chief Information Officers Council, and outside stakeholders. CISA officials said they worked with government and industry stakeholders to develop the NICE framework, and are working with educators and certification vendors to help build a pipeline of cybersecurity talent. Additionally, in March 2019, we reported that OPM and NIST coordinated with academia and the private sector to develop a cybersecurity coding structure that aligns with the work roles identified in the NICE framework.

While OMB and DHS conducted outreach for certain projects and activities included in the reform proposal, it is unclear what, if any, outreach occurred for other projects. Without a government-wide or project-by-project plan for communicating with and involving employees and stakeholders across the government, OMB and lead agencies will not know if certain agencies or employee groups are being adequately involved and informed. We have previously reported that creating an effective, ongoing communication strategy is essential to implementing a government-wide reform. The most effective strategies involve communicating early and often, ensuring consistency of message, encouraging two-way communication, and providing information to meet the specific needs of affected employees. This reform will be more likely to achieve its intended objective if OMB and DHS establish effective lines of communication with affected federal employees and the broader cybersecurity community.

Addressing High-Risk Areas and Longstanding Management Challenges

OMB and DHS have partially addressed key practices related to addressing high-risk areas and longstanding management challenges. We have designated information security as a government-wide high-risk area since 1997. We expanded this high-risk area in 2003 to include protection of critical cyber infrastructure and, in 2015, to include protecting the privacy of personally identifiable information. OMB and DHS generally considered areas that we previously identified as high risk. OMB staff and DHS officials told us that they considered our high-risk reports when developing reform proposals, and have provided some documentation of these considerations.
For example, OMB’s former Deputy Director for Management stated that, when developing the Solving the Cybersecurity Workforce Shortage reform, OMB used our 2017 High-Risk Series and noted that of more than 2,500 past recommendations, about 1,000 still needed to be implemented. OMB also identified several of our reports that touch on cybersecurity workforce issues. Although OMB and DHS have considered our prior work, as of November 2019, they had not demonstrated how the projects and activities outlined in the reform proposal would address our related high-risk issues and open recommendations. Without more detailed information describing how our high-risk issues are being addressed across the reform projects and activities, it is unclear which issues and recommendations are being targeted, and which are outside of the scope of this reform. Leadership Focus and Attention OMB and DHS have partially addressed key practices related to leadership focus and attention. In May 2019, the President issued Executive Order 13870 requiring federal agencies to take a variety of actions related to cybersecurity, including efforts to enhance the mobility of cybersecurity practitioners, support the development of cybersecurity skills, and create organizational and technological tools to maximize cybersecurity talents and capabilities. Many of the actions outlined in this executive order align with the stated objectives and components outlined in the reform proposal. However, neither OMB nor DHS have created a dedicated team with necessary resources to manage and implement this reform on a government-wide scale. Moreover, DHS staff we spoke with told us that OMB was the government-wide lead for this reform, and their agency was responsible for a subset of the projects and activities outlined in the reform proposal. OMB staff did not provide us with any plans or other documents regarding the individuals or team responsible for implementation across the government. OMB staff explained that DHS’s CISA and the Federal Chief Information Security Officer Council have some responsibility for federal cybersecurity workforce issues; however, they did not clarify which organization, team, or individuals were responsible for coordinating and implementing the reform government- wide. Our prior work has shown that establishing a strong and stable team that will be responsible for the transformation’s day-to-day management is important to ensuring that it receives the resources and attention needed to be successful. A dedicated leadership team responsible for overseeing and implementing the reform can also help ensure that various change initiatives are sequenced and implemented in a coherent and integrated way. Managing and Monitoring OMB and DHS have partially addressed key practices related to managing and monitoring. DHS has developed some agency-specific implementation plans and mechanisms to monitor progress. For example, DHS provides progress updates to Congress related to its continued efforts to code cybersecurity positions and to review the readiness of the cybersecurity workforce to meet DHS mission requirements, among other agency-specific assessments. However, OMB and DHS have not yet developed a government-wide implementation plan with goals, timelines, key milestones, and deliverables for the reform proposal as a whole. As previously discussed, OMB staff told us that they did not yet have a government-wide reform plan because they are still developing this reform. 
Without a government-wide implementation plan to track and communicate implementation progress, OMB and DHS will be unable to determine whether the reform is achieving its intended objectives, or whether unanticipated challenges or negative workforce trends are impeding efforts to close the cybersecurity workforce gaps across the government.

Employee Engagement

OMB and DHS have not addressed key practices related to employee engagement. In February 2019, DHS officials told us that the agency had not yet reached the stage of implementation for its projects and activities where they were considering employee engagement in this reform. According to DHS officials, they have started collecting data on employees, but have not interacted with individual employees on specific reform initiatives. As of November 2019, OMB had not provided information on its efforts to engage affected employees across the government on this reform.

We have reported that employee engagement affects attrition, absenteeism, and productivity. Moreover, we have found that failure to adequately address a wide variety of people and cultural issues, including employee engagement, can also lead to unsuccessful change. We identified six key drivers of engagement, such as communication from management, based on our analysis of selected questions in the Federal Employee Viewpoint Survey. Given that the objective of this reform is to address a critical workforce skills gap, it is important that OMB and DHS remain attentive to the engagement levels of cybersecurity employees across the government to ensure that productivity and morale are not adversely affected. As previously discussed, OMB and DHS lack a government-wide or project-by-project plan for communicating with and involving employees across the government. Such a communications strategy could be used to inform and, as appropriate, involve employees in implementation of the reform.

Strategic Workforce Planning

OMB and DHS have partially addressed key practices related to strategic workforce planning. As set forth in the Cybersecurity Workforce Assessment Act, DHS developed and published its Cybersecurity Workforce Strategy for 2019 through 2023. DHS's strategy contains a 5-year implementation plan and a set of goals and objectives. Goals and objectives include an analysis of DHS's cybersecurity workforce needs, a multi-phase recruitment strategy, professional and technical development opportunities, and plans to develop a talent management system, among others.

OMB and DHS have yet to develop a government-wide cybersecurity strategic workforce plan that addresses the needs of all federal agencies. Because this reform is focused on addressing a government-wide workforce shortage, it is particularly important that OMB and DHS complete their efforts to develop a strategic workforce plan for cybersecurity professionals that takes into account existing workforce capabilities, workforce trends, and shortages across the government. Without this information, DHS and OMB will not be able to determine if they are making progress or when they have addressed the government's cybersecurity workforce shortage.

Solving the Cybersecurity Workforce Shortage: OMB and DHS Identified Legal Authorities for Certain Reform Activities, and Stated That Additional Authority Would Be Sought If Needed

DHS identified some existing legal authority for implementing aspects of the reform proposal, but neither DHS nor OMB provided us with a legal analysis for full implementation of the reform.
OMB’s General Counsel stated, in a November 2019 letter to us, that OMB continues to collaborate with DHS and other federal agencies on a wide range of measures to address the cybersecurity workforce shortage. OMB stated that efforts had been within the confines of various current laws and appropriations, and that new legislation had not been required for any of these efforts. OMB did not provide additional details on the existing legal authorities on which it is relying. OMB also stated that the administration would seek legislation for any efforts beyond the scope of what is permitted under current law. DHS identified activities it is currently implementing related to the reform proposal that were previously authorized or required by law. For example, the CISA Chief Counsel identified DHS’s effort in establishing the forthcoming Cyber Talent Management System (CTMS) as a reform activity authorized by statute. The Chief Counsel noted DHS was authorized to establish this new personnel system for recruitment and retention of cybersecurity workers by the Border Patrol Agent Pay Reform Act of 2014. Under the act, DHS may establish cybersecurity positions; appoint personnel; fix rates of pay; and provide additional compensation, incentives, and allowances, subject to certain restrictions. The authority to implement this new system, however, is limited to DHS, and DHS officials acknowledged that CTMS cannot be implemented government-wide without statutory authorization. Additionally, DHS officials identified work being conducted at DHS to identify and categorize cybersecurity workforce positions, another activity related to the reform proposal and required by statute. Specifically, DHS was required by the Homeland Security Cyber Workforce Assessment Act of 2014, to: identify all cybersecurity workforce positions, determine the cybersecurity work category and specialty area of such assign data element codes developed by OPM in alignment with the NICE framework for each position. Furthermore, the Federal Cybersecurity Workforce Assessment Act of 2015 required OPM, in consultation with DHS, to identify critical needs for the IT, cybersecurity, or cyber-related workforce across federal agencies and to report to Congress on the identification of IT, cybersecurity, or cyber-related work roles of critical need. DHS officials also explained that, consistent with the 2015 act, it is currently working with other agencies and with industry to catalogue the federal cybersecurity workforce. The CISA Chief Counsel also identified DHS authorities that, under subchapter II of chapter 35 of Title 44 of the United States Code, CISA could leverage when implementing reform activities having government- wide or interagency impacts. Under these authorities, CISA (in consultation with OMB) administers the implementation of agency information security policies and practices, assists OMB with carrying out its responsibilities for overseeing agency information security policies and practices, and coordinates government-wide efforts on information security policies and practices. The Chief Counsel added that CISA “continues to consider, new, more specific…statutory authority aligned to specific reform responsibilities.” Establishing the GEAR Center: OMB Generally or Partially Addressed Selected Key Practices The administration is working toward establishing the GEAR Center, which it described in the reform plan as a vehicle for applied research that would help improve government operations and decision-making. 
Establishing the GEAR Center: OMB Generally or Partially Addressed Selected Key Practices

The administration is working toward establishing the GEAR Center, which it described in the reform plan as a vehicle for applied research that would help improve government operations and decision-making. OMB staff stated that the GEAR Center would be administered as a public-private partnership, and that the administration spent about $3 million on it in fiscal years 2018 through 2020 from available appropriations (see table 3). On Performance.gov, OMB provided options for the GEAR Center's structure; it could be housed in a physical location, composed of a network of researchers working in multiple locations, or follow a different model. The administration does not envision that the GEAR Center will require government funds to conduct all of its initiatives in the long term. Instead, OMB staff said that the private sector would help fund its work after an initial stand-up period. According to the reform plan, GEAR Center research could help inform, for example, how the government responds to technological advances, how to provide better customer service experiences, and how to better leverage government data.

In March 2019, OMB staff told us that they planned to establish the GEAR Center in fiscal year 2019, but as of February 2020, the center had not been formally established. To date, OMB staff have conducted preparation activities for establishing the GEAR Center, such as gathering stakeholder input through a Request for Information and a GSA-administered GEAR Center challenge competition to learn more about the types of projects a GEAR Center could facilitate. Through the challenge, GSA requested ideas on possible research projects, as well as related materials such as a project plan and ways to measure success. The challenge competition judges, who included OMB staff, selected three winning project plans with a prize of $300,000 each (for a total of $900,000). GSA specified that the cash prizes were for high-potential project plans, and not grants to execute work on behalf of the government.

In September 2019, GSA announced and awarded the winners of the GEAR Center challenge. The grand prize winners submitted 1-year project plans to: (1) help solve the federal cybersecurity workforce shortage by involving neurodiverse individuals, such as those with autism; (2) integrate currently disparate datasets to measure the impact of a federally funded program; and (3) train federal employees on how to better use their data for decision-making and accountability.

In addition to the challenge competition, OMB contracted with the Center for Enterprise Modernization, a Federally Funded Research and Development Center operated by the MITRE Corporation, to examine options for operating the GEAR Center through two projects. The first project—conducted from July 2019 through September 2019—was to explore a number of options for operating the GEAR Center. Following the first project, OMB staff laid out three tasks to accomplish during calendar year 2020 that they said would help them establish the GEAR Center: (1) establish a central coordinating function for the GEAR Center, (2) build the GEAR Center's network of research partners, and (3) develop a draft government-wide learning agenda with input from federal agencies to inform the GEAR Center's research and piloting activities. For the second project—which began in September 2019 and is scheduled to be completed in July 2020—the contractor is to provide additional detail on options for operating the GEAR Center, including on creating a network of research partners to support the GEAR Center. Table 3 provides details on these expenditures.
As shown in figure 5, OMB has generally addressed most of our relevant key reform practices, and partially addressed the others.

Determining the Appropriate Role of the Federal Government

OMB staff generally addressed key practices related to determining the appropriate role of the federal government for the GEAR Center. While OMB staff have not developed a detailed governance structure for the GEAR Center, they have determined, with input from the private sector, that the GEAR Center will be a public-private partnership. Specifically, OMB staff considered the private sector's ability or likelihood to invest its own resources in the initiatives the GEAR Center undertakes and otherwise contribute to the GEAR Center's work. OMB did this by formally seeking the private sector's input on these topics, first through a Request for Information and subsequently through a challenge competition.

Establishing Goals and Outcomes

OMB has partially addressed key practices related to establishing goals and outcomes. Specifically, OMB has initiated a process for developing outcome-oriented goals and performance measures for the GEAR Center, but has not finalized them. The GEAR Center challenge competition asked respondents to provide short- and long-term outcome-focused measures of success for the proposed projects in their submissions. However, as of November 2019, OMB staff told us they had not finalized these goals and measures for the GEAR Center. They stated that this is because they have not yet analyzed the results of the progress made by the challenge competition's grand prize winners, and because they believe the purpose, or broad goal, of the GEAR Center is sufficient at this stage of implementation. OMB staff told us that while they acknowledge that grand prize winners are not required to complete the projects they proposed, they anticipate the winners will carry them out to some extent, and they plan to monitor their work to inform GEAR Center planning activities. As OMB moves forward with establishing the GEAR Center, OMB staff should complete their efforts to develop goals and measures, because these will be necessary to track and communicate the GEAR Center's progress over time.

In addition, OMB staff have not yet fully assessed the costs and benefits of the various options OMB is considering for operating the GEAR Center. As previously discussed, OMB has stated that the GEAR Center could be housed in a physical location, composed of a network of researchers working in multiple locations, or follow a different model. Also, as previously stated, MITRE is currently exploring details of options for operating the center, and plans to provide them to OMB in July 2020. However, OMB has not yet conducted an analysis of the costs and benefits of the options for operating the center. In July 2018, OMB's then-Deputy Director for Management said that defining costs and benefits is dependent on refining and finalizing implementation plans. As of November 2019, OMB had not developed an implementation plan for establishing the GEAR Center. As OMB moves forward with establishing the center, assessing the costs and benefits of the various options for operating it will enable OMB to communicate the value of each option to Congress and other stakeholders. This assessment can help OMB build a business case that presents facts and supporting details for its ultimate choice among competing alternatives for operating the GEAR Center.
Involving Employees and Key Stakeholders

OMB has generally addressed key practices related to involving employees and relevant stakeholders. Specifically, OMB has coordinated with internal government stakeholders, sought input from the private sector, and publicly communicated GEAR Center progress. For example, OMB staff said that they have worked to develop the GEAR Center with the President's Management Council, the National Science Foundation, and DOD's National Security Technology Accelerator group. Also, OMB held a Virtual Stakeholder Forum to provide information about the GEAR Center and to gather stakeholder input. During the forum, OMB sought attendees' input through live polls, and announced that attendees could ask questions and provide additional input by sending messages to an OMB email account. OMB also sought stakeholder input through the GEAR Center Request for Information and challenge competition. Finally, as shown in figure 6, OMB has publicly reported on the GEAR Center's progress on Performance.gov.

Leadership Focus and Attention

OMB has generally addressed key practices related to leadership focus and attention. To accomplish this, OMB has designated leaders, including OMB's former Deputy Director for Management, a member of OMB's Performance Team, and other staff, to be responsible for implementing the reform.

Managing and Monitoring

OMB has partially addressed key practices related to managing and monitoring the reform to establish the GEAR Center. Specifically, OMB has gathered input from stakeholders on what research it could pursue, and from both stakeholders and a contractor on how the GEAR Center could be operated. OMB has done some analysis of that input, but has neither determined how the GEAR Center will operate nor developed an implementation plan. For example, OMB's analysis of Request for Information responses shows that OMB is considering several options for how to execute the GEAR Center's public-private partnership—a network of researchers, a physical location, etc.—but has not decided on one. In addition, as discussed previously, OMB contracted with MITRE to further assist with determining how the GEAR Center will operate. As OMB moves forward with establishing the GEAR Center, developing and communicating an implementation plan with key milestones and deliverables will enable it to track the GEAR Center's progress and communicate results to Congress and key stakeholders.

Establishing the GEAR Center: OMB Stated That It Will Seek Additional Authority to Conduct Implementation Activities, If Needed

In response to our request to identify the legal authority OMB will need to implement this reform, OMB's General Counsel stated in a November 2019 letter to us that OMB and its agency partners have relied upon existing legal authorities and available appropriations to develop the Request for Information, obtain external submissions for ideas to develop the GEAR Center, and issue the prize challenge. OMB stated that in conducting any future implementation activities, it would seek new legislative authority, if necessary.

Conclusions

While planning and implementation progress has been made since the administration's government-wide reform plan was released in June 2018, important details surrounding the implementation of certain reform proposals have not been developed or communicated. OMB has a central role in overseeing and prioritizing these reforms for implementation, with support from lead agencies.
In our previous work on government reorganization and reforms, we have found that there are key practices that, if followed, can help manage the risk of reduced productivity and effectiveness that often occurs as a result of major change initiatives. Important practices such as engaging and communicating with Congress, employees, and key stakeholders; dedicating a senior leadership team; and developing implementation plans can help to ensure the successful implementation of reorganizations and reforms.

OMB and DHS partially addressed most of the leading practices through their efforts to implement several projects related to the cybersecurity workforce reform, including efforts to reskill employees to fill vacant cybersecurity positions, establish a cybersecurity reservist program to provide needed surge capacity, and streamline relevant hiring processes. However, OMB, in coordination with DHS, has not yet followed relevant key practices to implement its reforms government-wide. Specifically, OMB and DHS have not yet developed a communications strategy to involve Congress, employees, and other stakeholders; established a dedicated government-wide leadership team; or developed a government-wide implementation plan with outcome-oriented goals, timelines, key milestones, deliverables, and processes to monitor implementation progress. In addition, OMB and DHS have not demonstrated how the projects and activities outlined in the reform proposal would address our related high-risk issues and major management challenges, or developed workforce plans that assess the effects of the proposal on the current and future workforce. If OMB, in coordination with DHS, applied key reform practices government-wide, they would be better positioned to manage the reform and track progress across all agencies facing cybersecurity workforce shortages.

OMB has taken steps toward determining how the GEAR Center will operate, such as by determining the appropriate role of the federal government; providing leadership focus and attention; and collecting input from the public, academia, and industry on how the center could operate and on ideas for possible research projects. However, OMB has neither assessed the costs and benefits of the options it is considering for operating the center, nor developed an implementation plan with outcome-oriented goals and performance measures for it. As OMB moves forward with establishing the GEAR Center, completing these two activities can help OMB (1) make a case for why its ultimate decisions on how to operate the center are optimal, and (2) provide greater transparency to the public and private partners involved in the center's development, help build momentum, and demonstrate the center's value.

Recommendations for Executive Action

We are making a total of seven recommendations to OMB.

The Director of OMB, working with DHS, should develop a government-wide communications strategy to inform and, as appropriate, involve Congress, employees, and other stakeholders in implementation of the reform proposal to solve the cybersecurity workforce shortage. (Recommendation 1)

The Director of OMB, working with DHS, should establish a dedicated government-wide leadership team with responsibility for implementing the reform proposal to solve the cybersecurity workforce shortage. (Recommendation 2)
The Director of OMB, working with DHS, should develop a government-wide implementation plan with goals, timelines, key milestones, and deliverables to track and communicate implementation progress of the reform proposal to solve the cybersecurity workforce shortage. (Recommendation 3)

The Director of OMB, working with DHS, should provide additional information to describe how the projects and activities associated with the reform proposal to solve the cybersecurity workforce shortage will address our high-risk issues related to ensuring the cybersecurity of the nation. (Recommendation 4)

The Director of OMB, working with DHS, should develop a government-wide workforce plan that assesses the effects of the reform proposal to solve the cybersecurity workforce shortage on the current and future federal workforce. (Recommendation 5)

The Director of OMB should assess the costs and benefits of options for operating the GEAR Center. (Recommendation 6)

The Director of OMB should develop an implementation plan that includes outcome-oriented goals, timelines, key milestones, and deliverables to track and communicate implementation progress of the reform proposal to establish the GEAR Center. (Recommendation 7)

Agency Comments

We provided a draft of this report for review and comment to the Directors of OMB and OPM, the Secretary of DOD, the Acting Secretary of DHS, and the Administrator of GSA. OMB did not comment on the report. DHS and DOD provided technical clarifications, which we incorporated as appropriate. OPM and GSA responded that they did not have comments on the report.

We are sending copies of this report to the Director of OMB and the heads of the agencies we reviewed, as well as appropriate congressional committees and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or Mcneilt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Key Practices and Questions to Assess Agency Reforms

In a 2018 report, we developed key questions based on our prior work on key practices that can help assess agency reform efforts. The 58 questions are organized into four broad categories and 12 subcategories, as shown in table 4. For the purpose of this review, we selected the subcategories and key questions that were most relevant to the selected reforms based on the information contained in the reform proposals, agency documentation, and interviews with the Office of Management and Budget and lead agencies for each of the reforms.

Appendix II: Reform Plan Effort to Reorganize the Office of Personnel Management

Reorganizing OPM: Most Key Practices Partially Addressed; National Defense Authorization Act for Fiscal Year 2020 Calls for Study of OPM

The administration's proposal to reorganize the Office of Personnel Management (OPM) evolved from June 2018 through November 2019, and was effectively halted by Congress in December 2019.
In the June 2018 government-wide reform plan, the administration proposed: (1) moving OPM's policy functions to a new office in the Executive Office of the President, which would also provide a government-wide view of human capital policy issues; (2) merging a number of OPM's responsibilities with the General Services Administration's (GSA) or other government entities' to be determined at a later date; and (3) renaming GSA as the Government Services Agency. The goals of this proposal were to help elevate the importance of these functions, improve efficiency of operations, and save money, according to the reform plan. Specifically, the administration suggested integrating the following duties into the Government Services Agency or other government entities:

administration of healthcare and insurance programs;

Human Resources Solutions (HRS), which provides products and services to other federal agencies on a reimbursable basis; and

information technology services.

In addition, the reform plan contained another proposal to move all of OPM's national security background investigation functions to the Department of Defense (DOD).

The President's Fiscal Year 2020 Budget, published in March 2019, expanded and modified the original OPM reorganization proposal. It proposed that all of OPM's functions beyond those moving to the Executive Office of the President and DOD be transferred to GSA, rather than merging a portion of them into a newly formed Government Services Agency. It also called for creating a new GSA service area to house certain functions, and for moving OPM's Office of the Inspector General to GSA. In May 2019, the administration submitted a legislative proposal to Congress requesting new authority to implement aspects of the OPM reorganization reform proposal. As of December 2019, this proposal had not been introduced in Congress.

In May 2019, we testified on issues to consider in the proposed reorganization of OPM. We found that the Office of Management and Budget (OMB) and the two lead agencies (OPM and GSA) had generally not addressed key practices for reforms, such as establishing outcome-oriented goals, assessing costs and benefits, or developing an implementation plan, and had not fully involved or communicated their efforts with Congress, employees, and other key stakeholders. We also found that OMB, OPM, and GSA had not shown how they would address management challenges that may affect their ability to successfully reorganize the government's central human capital functions. Between May and September 2019, OPM provided us with additional information, which contributed to our assessment of the extent to which OMB, OPM, and GSA addressed key practices for this reform (see figure 7).

In October and November 2019, OMB staff and OPM and GSA officials provided us with updates on the status of the OPM reorganization reform proposal. OMB staff and OPM and GSA officials told us that the transfer of major functions from OPM to GSA, such as retirement services and HRS, was on hold until Congress, through legislation, provided the necessary authority to move these functions.
They also told us that they were working together on moving the following functions from OPM to GSA through their existing authorities: (1) administrative responsibilities for the Chief Human Capital Officers (CHCO) Council; (2) the Program Management Office for the Security, Suitability, and Credentialing Performance Accountability Council (PAC); and (3) management of two OPM office buildings—the Theodore Roosevelt Building, which houses OPM's headquarters in Washington, D.C., and the Federal Executive Institute, located in Charlottesville, Virginia. OMB staff and OPM and GSA officials stated that the primary purpose of these moves was to achieve greater efficiency of operations, and that these transfers were not components of the OPM reorganization reform proposal. In November 2019, OPM's Inspector General expressed concern over ongoing efforts to merge these functions with GSA, noting that the specific details of the full merger continued to evolve, and that every iteration of the proposed reorganization would fundamentally alter how agency functions and duties are performed.

In the National Defense Authorization Act (NDAA) for Fiscal Year 2020, signed into law in December 2019, Congress effectively halted actions to reorganize OPM pending the completion of reports by the National Academy of Public Administration (NAPA) and OPM. The law directed OPM to enter into a contract with NAPA to conduct a study to identify challenges associated with OPM's execution of its functions and make recommendations for addressing them, including a cost-benefit analysis of proposed changes and the identification of statutory or regulatory changes needed to execute recommended actions, among other things. Approximately 6 months after the NAPA report, OPM must submit a report providing its views on the NAPA report and its recommendations for changes to its functions. OPM is also to include a business case analysis associated with such changes and a proposal for legislative and regulatory action required to effect the changes. Many of these requirements reflect the issues we raised in our May 2019 testimony on the extent to which the proposal to reorganize OPM was consistent with our key reform practices. According to the President's fiscal year 2021 budget request, the administration continues to pursue implementation of OPM's reorganization. Specifically, it proposes to transfer the functions of OPM to GSA, contingent upon enactment of authorizing legislation.

Establishing Goals and Outcomes

OMB, OPM, and GSA partially addressed the key practices related to establishing goals and outcomes. First, OMB, OPM, and GSA considered how the upfront costs of the reform would be funded by, for example, requesting funds through the President's Fiscal Year 2020 Budget. However, OMB, OPM, and GSA did not fully address other aspects of the key practices. Specifically, since our May 2019 testimony, OPM provided us information on additional draft goals and measures for some portions of the reform. For example, according to a document we received from OPM in August 2019, a team leading the reform effort was developing "critical to quality" metrics in areas such as cost reduction, employee engagement, and flexible operations. However, these metrics did not have targets and had not been finalized. In November 2019, OMB staff told us that metrics were not yet final because they were still working with Congress to develop a legislative proposal authorizing the reform, and implementation of the merger was not yet underway.
The NDAA for Fiscal Year 2020 requires NAPA and OPM to make recommendations for changes to OPM's structure, functions, responsibilities, and authorities, which may differ from those the administration proposed.

We have also previously reported that major change initiatives should be based on either a clearly presented business case or an analysis of costs and benefits grounded in accurate and reliable data, both of which can show stakeholders why a particular initiative is being considered and the range of alternatives considered. While OPM officials had some information on the costs and benefits they planned to achieve by merging functions with other agencies, they did not have an analysis or underlying data supporting their conclusions. Specifically, OPM provided us with its rationale for the reform in several documents, including:

a summary of the agency's financial and management challenges;

a qualitative business case;

a list of state and foreign governments' administrative models where human resources and administrative functions are merged; and

a presentation providing OPM's estimate of the annual savings that could be realized by "fully integrating OPM's operations into GSA."

However, the information that OPM provided did not include measurable performance or outcome metrics, or quantify benefits relative to costs, to provide a complete assessment of the costs and benefits and any alternative solutions to the reform proposal. OPM's Office of Inspector General also found, in its fiscal year 2020 top management challenges report, that OPM had not developed a thorough analysis of costs and benefits. Shortly after OMB published the reform plan, OMB's then-Deputy Director for Management, who also served as OPM's Acting Director, said that defining costs and benefits was dependent on refining and finalizing implementation plans. Since then, in the NDAA for Fiscal Year 2020, Congress required that NAPA's study include an analysis of the benefits, costs, and feasibility of each recommendation, and a timetable for implementing these options. In addition, the law requires that OPM's report include a business case analysis that describes the operational efficiencies and cost savings (both short- and long-term) associated with its recommendations.

Involving Employees and Key Stakeholders

OMB, OPM, and GSA partially addressed the key practices related to involving employees and key stakeholders. Specifically, since our May 2019 testimony, OPM officials provided us with documents to demonstrate that the agency took additional actions in this area, as discussed in more detail below. However, we found that OPM's early outreach efforts to employees and stakeholders were insufficient, the agency did not have a plan for incorporating employee and stakeholder feedback, and it did not share relevant implementation details that may have affected employees and stakeholders. For example, OPM provided us with a communications tracker that listed meetings and correspondence with Congress, staff, and employee groups from OPM's Acting Director, Deputy Director, and Deputy Chief of Staff. While this document listed a number of meetings and calls, it showed that most of OPM's efforts to involve Congress, employees, and employee groups began in April 2019, more than 9 months after OMB published the reform plan, and more than 8 months after OPM's Director and GSA's Administrator testified before Congress about their plans for carrying out the reform proposal.
In addition, both members of Congress and employee groups expressed dissatisfaction with initial outreach from OMB, OPM, and GSA, including a lack of transparency. For example, during a House Committee on Oversight and Reform Subcommittee on Government Operations hearing on May 21, 2019, members of Congress and employee groups testified that they felt insufficiently involved in the reform. Both groups stated that OPM officials communicated with them on few occasions, and members of Congress said that they had not received key documents they requested from OPM, including an implementation plan.

In August 2019, OPM provided us with a strategic communications plan that included high-level messages and strategies for reaching out to Congress, employees, and the public. This and other OPM documents demonstrated that OPM communicated with employees and key stakeholders, and provided opportunities for its employees to ask questions and provide comments about the reform, activities consistent with our key practices. However, the documents did not indicate how senior OPM officials planned to use the feedback they received from their employees. Similarly, neither OMB nor GSA described how they planned to use employee feedback to inform their reform efforts. The NAPA study required by the NDAA for Fiscal Year 2020 must include methods for involving, engaging with, and receiving input from other federal agencies, departments, and entities potentially affected by any change in OPM that NAPA recommends. The study must also incorporate the views of stakeholders.

Addressing High-Risk Areas and Longstanding Management Challenges

OMB, OPM, and GSA partially addressed the key practices related to addressing high-risk areas and longstanding management challenges, consistent with our assessment in May 2019. Since then, OPM provided additional documents related to (1) our relevant high-risk area of strategic human capital management, as well as (2) longstanding challenges at OPM that we and OPM's Inspector General have reported. However, OMB, OPM, and GSA did not explain how the OPM reorganization reform proposal would address our high-risk issue or mitigate major management challenges, and did not have plans to monitor the potential effects of the reform on these issues. As a result, OMB, OPM, and GSA did not fully consider the potential risks of transferring OPM systems with longstanding weaknesses to GSA, and of GSA taking on duties in areas such as information technology, where it faces major management challenges. They also lacked a means of monitoring the reform's potential effects on our strategic human capital management high-risk area and on major management challenges. Moreover, in November 2019, OPM's Office of the Inspector General continued to identify the proposed merger of OPM with GSA as a top management challenge because the proposal did not include an implementation plan, and created a burden for the agency to fully study, plan, and execute reorganization activities. In November 2019, OMB staff told us that, because the proposed merger was a long-term effort and plans were still under development, they had not yet determined how our high-risk and other management challenges would be addressed. The NDAA for Fiscal Year 2020 requires the NAPA study to include analyses of OPM's challenges and a recommended course of action for resolving them.

Leadership Focus and Attention

OMB, OPM, and GSA generally addressed the key practices related to leadership focus and attention.
Specifically, since our May 2019 testimony, OPM officials provided us with documents demonstrating that OMB, OPM, and GSA made progress in this area. For example, OPM documents showed that OPM, OMB, and GSA leaders approved a governance structure for leading reform efforts that included:
an executive steering committee that provided guidance and made decisions. Its members included the OMB Deputy Director for Management (serves as executive sponsor and chair), the OPM Director (serves as a vice-chair), and the GSA Administrator (serves as a vice-chair). The group used the Lean Six Sigma management approach to make decisions related to planning and implementing the reform during Tollgate meetings.
an interagency task force that led activities to implement the reform, and that raised issues to the Executive Steering Committee as needed. Its members included leaders from OMB, OPM, and GSA.
interagency teams, which provided subject matter expertise and performed tasks and activities to implement the reform. Their members were OPM and GSA officials.
Our analysis of OPM documents showed that, from August 2018 through April 2019, these groups met and communicated frequently—from every few days to every few weeks, depending on the group. From May 2019—when the administration transmitted its legislative proposal to Congress to reorganize OPM—to November 2019, these groups met less frequently, according to OMB staff and GSA officials.

Managing and Monitoring
OMB, OPM, and GSA partially addressed the key practices related to managing and monitoring. Since our May 2019 testimony, OPM provided us with documents that demonstrated improvements in this area, but as of November 2019, had yet to finalize an implementation plan. Specifically, the documents showed that OMB, OPM, and GSA held leadership meetings and systematically tracked various aspects of the reform. For example, OPM officials tracked the status of certain activities associated with the reform, such as progress on developing a plan for communicating with employees and stakeholders, through leadership meetings. Also, OPM had a document identifying risks associated with the reform, such as ensuring continuity of services, as well as mitigation strategies, such as including provisions in OPM-GSA interagency agreements. The document also specified individual agency officials responsible for each risk. However, OMB, OPM, and GSA did not develop an implementation plan for the OPM reorganization reform that included key milestones and deliverables. In November 2019, OMB staff told us that their plans were still being developed because they were waiting for Congress to pass the administration's legislative proposal authorizing the reform. The NDAA for Fiscal Year 2020 requires NAPA and OPM to make recommendations for changes to OPM's structure, functions, responsibilities, and authorities, which may differ from those the administration proposed.

Employee Engagement
OMB, OPM, and GSA partially addressed the key practices related to employee engagement. Specifically, while OMB and agencies undertook activities to measure employee engagement, such as surveying and communicating with employees, they did not develop a comprehensive strategy for sustaining and strengthening employee engagement during and after the reform. For example, GSA officials told us that they established a GSA-OPM change management and communications workgroup, which developed a change management and communications plan that included employee engagement activities.
Also, in April 2019, OPM conducted an internal survey of agency staff to measure employee engagement, among other factors. OPM officials also identified employee morale issues as a risk in a document identifying risks associated with the reform and risk mitigation strategies. To address employee dissatisfaction and low morale, OPM officials, including OPM's then-Acting Director, shared the survey results with employees, held listening sessions to determine employees' preferences for communications about the OPM reorganization reform proposal, and developed a communications strategy. However, OPM officials did not determine how they planned to use these communications to sustain and strengthen employee engagement. In November 2019, OMB staff told us that because they were still in the planning stages of the reorganization, the proposed reform had not yet involved major changes for employees, so they put employee engagement efforts on hold. The NAPA study required by the NDAA for Fiscal Year 2020 is to include methods for involving, engaging with, and receiving input from other federal agencies, departments, and entities potentially affected by any change in OPM that NAPA recommends. The study is also to incorporate the views of stakeholders.

Strategic Workforce Planning
OMB, OPM, and GSA did not address the key practices related to strategic workforce planning. OPM and GSA officials told us that they were conducting workforce planning activities associated with the OPM reorganization reform. Also, the President's Fiscal Year 2020 Budget provided some information about staff levels at OPM and GSA. However, OMB, OPM, and GSA did not produce strategic workforce plans for OPM and GSA employees. OPM and GSA officials stated that they had not provided us with these plans because they were under development. In November 2019, GSA officials added that they were waiting for congressional authorization to carry out the reform proposal, so they had put their efforts to develop a workforce plan on hold. The NDAA for Fiscal Year 2020 requires that NAPA and OPM make recommendations on changes to OPM, which may differ from the administration's proposed reorganization of OPM.

Reorganizing OPM: NDAA for Fiscal Year 2020 Provides for the Identification of Legal Authorities
As part of our review, our Office of General Counsel sent letters to the Offices of General Counsel at OPM, GSA, and OMB requesting they provide us with a description of the legal authorities they were using to support the proposed OPM reorganization. OMB, OPM, and GSA provided responses to our letter, but did not identify which aspects of the OPM reorganization could be carried out under existing law and which would require legislative authority. GSA, OPM, and OMB officials stated that they had not yet finalized their legal analysis, and that they were still determining which legal authorities they could use to implement elements of the reform. OMB's General Counsel stated that to implement the administration's proposed reorganization, both legislative and administrative actions would be necessary and dependent on each other "in the long run." In May 2019, the administration submitted a legislative proposal requesting authority to transfer OPM functions—such as Human Resources Solutions, Information Technology, Retirement, and Health and Insurance Services—to GSA. As of December 2019, the proposal had not been introduced in Congress.
OMB staff told us that the legislative proposal was an effort to communicate transparently about the extent to which new authorities would be required. As discussed earlier, in December 2019, Congress passed the NDAA for Fiscal Year 2020. In the NAPA study required under the NDAA, NAPA is to provide a comprehensive assessment and analysis of the statutory or regulatory changes needed to implement any recommended course of action, and to submit this report to Congress and the Director of OPM. The Director of OPM is then to submit a report to Congress that lays out OPM's views on the findings and recommendations of the NAPA study, along with OPM's recommendations for change. Any recommendation submitted by OPM for change is to include a business case analysis that sets forward the efficiencies and cost savings (both short- and long-term) associated with the change, and a proposal for legislative or administrative action required to effect the change. The statutory provisions in the act generally provide that no aspect of the agency that is assigned in law to OPM may be moved to GSA, OMB, or the Executive Office of the President until 180 days after OPM's report is submitted to congressional committees, and subject to the enactment of any required legislation.

Appendix III: Reform Plan Effort to Develop a Capability to Improve the Customer Experience
This reform proposal aims to modernize and streamline the way citizens interact with the federal government, and to raise customer experience to a level comparable with leading private sector organizations. With support from the United States Digital Service (USDS) and GSA's Technology Transformation Service, OMB has stated that it will lead an effort to establish a government-wide capability that will enable agencies to identify their customers, map their interactions (or journeys) with federal programs or services, and leverage digital tools and services to improve their experiences and overall satisfaction. For example, as reported in the reform plan, the U.S. Department of Agriculture created a "digital front door," accessible at Farmers.gov, that is organized around the user experience rather than the government's structure. The reform plan further explains that the improved capability provided by USDS and GSA would also provide for a government-wide resource to manage organizational change, including improved project planning, facilitating interagency collaboration, and sharing best practices on change management. OMB staff told us in early 2019 that they had delayed implementation of this reform, and instead would focus on other customer experience activities, such as those outlined in the related Cross-Agency Priority (CAP) goal. Upon release of the President's Fiscal Year 2020 budget in March 2019, we confirmed that this reform was not included in the administration's reorganization priorities, and OMB confirmed that no funding was requested for its implementation. OMB and agencies are also pursuing a related but distinct CAP goal under the President's Management Agenda—Improving Customer Experience with Federal Services—with the aim of providing a modern, streamlined, and responsive customer experience across government, comparable to leading private-sector organizations.
According to OMB, the reform proposal is meant to stand up a central capacity, or office, within GSA to manage customer experience government-wide, whereas the CAP goal is intended to support capacity growth and accountability within agencies to develop and manage their own customers' experience and satisfaction. Because OMB has not yet begun to implement this reform, and no actions are planned for fiscal year 2020, we are not able to assess the extent to which the reform is adhering to key reform practices. When the administration moves forward with implementing this reform, it will be better positioned for its successful implementation if the key reform practices are followed. In response to our request to identify the legal authority OMB will need to implement this reform, OMB's General Counsel responded in a November 2019 letter that the initiative will not require new legislation. OMB stated the reform can be implemented within current law and available appropriations.

Appendix IV: GAO Contacts and Staff Acknowledgments
GAO Contact
Triana McNeil at (202) 512-6806 or Mcneilt@gao.gov.
Staff Acknowledgments
In addition to the contact named above, Sarah E. Veale (Assistant Director, Strategic Issues), Peter Beck (Analyst-in-Charge, Strategic Issues), Colenn Berracasa, Karin Fangman, Steven Putansu, Janet Temko-Blinder, Peter Verchinski, and Kellen Wartnow made key contributions to this report. Timothy Carr, Jacqueline Chapin, Tom Costa, Sara Cradic, Brenda Farrell, Patrick Hickey, Shirley Jones, Tammi Kalugdan, Brian Mazanec, Kimberly Seay, Gregory Wilshusen, and Alicia White also contributed to this report.
Why GAO Did This Study
In June 2018, the administration released its government-wide reform plan, which included 32 proposals aimed at achieving management improvements and organizational efficiencies, among other things. OMB has a central role in overseeing these reform proposals, with support from various lead agencies. In July 2018, GAO reported on key questions to consider when developing and implementing reforms. GAO was asked to examine reform implementation. This report discusses three selected reforms that the administration prioritized: (1) moving background investigations from OPM to DOD, (2) solving the cybersecurity workforce shortage, and (3) establishing the GEAR Center. For each selected reform, GAO determined the extent to which OMB and the lead agencies addressed key practices for effectively implementing reforms, among other issues. GAO reviewed relevant documentation and interviewed OMB staff and agency officials. GAO assessed OMB's and lead agencies' efforts against relevant key practices for effective reforms.

What GAO Found
In working to implement three selected government-wide reforms that GAO reviewed, the Office of Management and Budget (OMB) and lead agencies followed some, but not all, of the key practices associated with effective reforms. Following key practices, such as those reflected in the questions below, would better position OMB and lead agencies to effectively implement such major change initiatives and achieve their intended objectives.
Moving background investigations from the Office of Personnel Management (OPM) to the Department of Defense (DOD): As required, the transfer of background investigations took place by September 30, 2019. OMB, OPM, and DOD generally addressed most key reform practices in this transfer, including involving employees and stakeholders, establishing an implementation team, and developing implementation plans. With the transfer complete, DOD officials told GAO they are shifting focus toward addressing GAO's high-risk area on the government-wide personnel security clearance process.
Solving the cybersecurity workforce shortage: OMB and the Department of Homeland Security (DHS) partially addressed most leading practices through their efforts to implement several projects, such as reskilling employees to fill vacant cybersecurity positions, and streamlining hiring processes. However, GAO found that OMB and DHS have not established a dedicated implementation team, or a government-wide implementation plan, among other practices. Without these practices in place, OMB and DHS may not be able to monitor implementation activities and determine whether progress is being made toward solving the cybersecurity workforce shortage.
Establishing the Government Effectiveness Advanced Research (GEAR) Center: According to OMB, the GEAR Center will bring together researchers from private and public sectors to inform and develop ways to improve government services and operations. OMB is working toward establishing the GEAR Center by collecting input from the public, academia, and industry for how the Center could be structured and ideas for possible research projects. However, OMB has not yet developed an implementation plan with key milestones and deliverables to track its progress. Developing and communicating an implementation plan will help OMB track the GEAR Center's progress and communicate its results.
What GAO Recommends
GAO is making 7 recommendations to OMB to follow certain key practices to help solve the cybersecurity workforce shortage and to establish the GEAR Center. OMB did not comment on the report.
Background
FFRDCs arose from partnerships between the federal government and academic researchers and scientists during World War II. Those partnerships were later restructured into federal research centers to retain scientists, and they became known as FFRDCs by the mid-1960s. Since that time, FFRDCs have continued to perform tasks including technical studies and analyses, research and development, and systems engineering on behalf of federal agencies, such as DOD. In sponsoring an FFRDC, agencies draw on academic and private sector resources that can contribute to an agency's ability to accomplish tasks that are integral to the mission and operation of the sponsoring agency. FFRDCs may be operated, managed, and/or administered by a university or consortium of universities, other nonprofit organizations, or a private industry firm as an autonomous organization or as a separate unit of a parent organization. As of May 2019, federal agencies sponsored a total of 42 FFRDCs, 10 of which are sponsored by DOD. These 10 DOD-sponsored FFRDCs can be divided into three categories:
S&A Centers: These centers deliver independent and objective analyses and advice in core areas important to their sponsors in support of policy development and decision-making, among other things.
Research and Development Laboratories: These laboratories conduct research and development, focusing on the development and prototyping of new technologies and capabilities to meet DOD needs. For example, these laboratories engage in research programs that emphasize the evolution and demonstration of advanced concepts and technology, and transfer new technology to the private sector.
Systems Engineering and Integration Centers: These centers meet long-term technical and engineering needs to ensure complex systems meet operational requirements. Among other things, Systems Engineering and Integration Centers assist with testing system performance, development and acquisition of system hardware and software, integration of new capabilities, and the continuous improvement of system operations and logistics.
Table 1 lists the 10 DOD-sponsored FFRDCs. As shown in table 1, each of the 10 DOD-sponsored FFRDCs is managed by a specific military department or organization within DOD—referred to as the FFRDC primary sponsor. More broadly, the Office of the Under Secretary of Defense for Research and Engineering oversees and manages DOD's FFRDC program.

Sponsoring Agreements
DOD's relationships with FFRDCs are defined through sponsoring agreements between the primary sponsor (i.e., the DOD organization responsible for the overall use of the FFRDC) and the FFRDC parent organization. According to the FAR and DOD instruction, sponsoring agreements define the FFRDC's purpose and mission and may not exceed 5 years in duration. DOD's instruction also states that sponsoring agreements are to establish conditions under which DOD may award an FFRDC contract and describe the overarching requirements for operation of the FFRDC. For example, the DOD instruction states that sponsoring agreements are to describe constraints on the FFRDC parent organization that are necessary to preserve the integrity of the FFRDC, such as provisions to prevent the occurrence or appearance of organizational or personal conflicts of interest that may undermine the independence, objectivity, or credibility of the FFRDCs. The DOD instruction also states that sponsoring agreements will preclude FFRDCs from performing commercial work.
In this regard, the FAR provides that sponsoring agreements are required to address whether or not the FFRDC may accept work from other entities and if so, the procedures to be followed and the limitations as to the work that can be accepted. Further, the DOD instruction and the FAR provide that sponsoring agreements will generally preclude FFRDCs from competing with any organization in response to a formal request for proposals other than the operation of the FFRDC. After the primary sponsor identifies the need for FFRDC work, and has defined FFRDC core competencies, roles, and responsibilities in the sponsoring agreement, the primary sponsor awards a noncompetitive contract to the FFRDC to support the sponsor's research requirements, such as addressing national security issues and systems development.

Comprehensive Reviews
Prior to extending a contract or sponsoring agreement for an FFRDC, the FAR requires that the primary sponsor conduct a comprehensive review of the use and need for the FFRDC at least every 5 years. The FAR describes elements of what the comprehensive review should include, such as examination of the sponsor's special technical needs and mission requirements performed by the FFRDC and assessment of the efficiency and effectiveness of the FFRDC in meeting the sponsor's needs. The FAR further requires that the head of the sponsoring agency approve continuing or terminating sponsorship based on the results of the comprehensive review.

Initiating Work at FFRDCs
FFRDCs initiate work on specific projects at the request of "work sponsors," or the entities that request the services of the FFRDC. Work sponsors can be the primary sponsor of the FFRDC or another entity. When initiating work at FFRDCs, the primary sponsor determines whether to approve research projects for the FFRDC before projects are placed on contract. Approval of research projects is based on the determination that work proposed is appropriate for the FFRDC and consistent with the FFRDC's core competencies as documented in the sponsoring agreement. Additionally, the primary sponsor ensures FFRDC work efforts do not exceed available resources. Among other things, FFRDC work sponsors identify project requirements, propose an appropriate research design, confirm the work is appropriate and consistent with FFRDC core competencies, identify the source of project funding, and monitor the progress of the work to ensure FFRDC performance is satisfactory and meeting desired requirements. In some instances, S&A Centers serve only a specific military department or office, while in other cases an FFRDC may serve a range of DOD entities. For example, RAND Arroyo Center broadly supports the analytic requirements of the Army in order to provide timely advice to help senior Army leadership make informed policy choices. Accordingly, the RAND Arroyo Center sponsoring agreement with the Department of the Army provides that the scope of RAND Arroyo Center work is to support Army sponsors throughout the Army requiring comprehensive analytical support. In contrast, the Institute for Defense Analyses (IDA) and RAND National Defense Research Institute serve DOD more broadly on national security issues.
For example, according to IDA’s sponsoring agreement with DOD’s Office of the Under Secretary of Defense for Acquisition and Sustainment, the primary mission of IDA is to assist the Office of the Secretary of Defense and other Defense organizations in addressing important national security issues, particularly those requiring scientific and technical expertise. Staff Years of Technical Effort DOD manages the overall level of FFRDC work using a metric known as staff years of technical effort (STE), which is roughly equal to the work of one employee working for 1 year. Congress typically sets an annual limitation on the STE that may be funded for DOD FFRDCs to support non-intelligence programs on behalf of the agency (hereafter, Defense STE). Between fiscal years 2013 to 2017, Congress established an annual ceiling of 5,750 Defense STE available to DOD, of which 1,125 could be allocated to S&A Centers. In fiscal year 2018, Congress raised the ceiling on Defense STE to 6,030; however, the limit on S&A Centers remained unchanged. In managing Defense STE, DOD: consolidates annual Defense STE requirements for each fiscal year based on projected primary sponsor requirements and submits STE requirements to Congress; establishes Defense STE allocations for each DOD-sponsored FFRDC and provides associated funding limitations to each primary sponsor; monitors Defense STE usage and associated obligations; and provides an annual report to Congress at the end of each fiscal year outlining the Defense STE funded and associated DOD funds obligated for each FFRDC. In addition to Defense STE, FFRDCs may support DOD intelligence activities under the Military Intelligence Program and the National Intelligence Program. Oversight for STE usage for these programs is provided by the Office of the Under Secretary of Defense for Intelligence and Office of the Director of National Intelligence, respectively. Military Intelligence Program and National Intelligence Program STE funding may not be used to support Defense STE requirements. In October 2008, we reported that Congress implemented the Defense STE ceiling during the 1990s in response to concerns that DOD was inefficiently using its FFRDCs. In addition, we found that STE ceilings aimed to ensure that FFRDC work was appropriate and that resources, which were limited, were being used on DOD’s highest priorities. In December 2018, we reported that officials in the Office of the Secretary of Defense’s Studies and FFRDC Management Office stated that the ceiling significantly constrains the use of DOD’s FFRDCs and that DOD customer demand for FFRDC services is significantly greater than the annual ceiling set by Congress. Further, officials indicated at that time that FFRDC-related work must be deferred to later years when the limits are reached, since there are no other legally compliant alternatives capable of fulfilling these requirements. We did not make any recommendations related to this issue. Reviewing FFRDC Performance Following the completion of FFRDC work, the primary sponsor, with assistance from the work sponsor, reviews FFRDC performance in written assessments via questionnaires. In addition, the primary sponsor assesses FFRDC performance annually, addressing the technical quality, responsiveness, value, and timeliness of the work performed. Some of the information from the annual reviews may be used in support of the comprehensive review, such as to demonstrate the efficiency and effectiveness of the FFRDC in meeting the primary sponsor’s needs. 
DOD Obligated about $3 Billion per Year to DOD-Sponsored FFRDCs from Fiscal Years 2013 through 2018
From fiscal years 2013 through 2018, total DOD obligations to the 10 DOD-sponsored FFRDCs generally increased annually, from about $2.7 billion in fiscal year 2013 to approximately $3.2 billion in fiscal year 2018. Approximately 70 percent of total annual DOD obligations to DOD-sponsored FFRDCs between these fiscal years went to support non-intelligence programs and consisted of DOD obligations associated with utilized Defense STE, or Defense STE obligations. Specifically, DOD Defense STE obligations ranged from about $1.9 billion in fiscal year 2013 to $2.2 billion in fiscal year 2018, with S&A Centers representing approximately 18 percent of these obligations. In addition to DOD Defense STE obligations, about 30 percent of total DOD obligations to DOD-sponsored FFRDCs between fiscal years 2013 through 2018 went toward other FFRDC-related activities and costs, such as intelligence program activities through the Military Intelligence Program and National Intelligence Program and capital equipment costs. Figure 1 shows DOD obligations by fiscal year to DOD-sponsored FFRDCs. For fiscal years 2013 to 2018, the FFRDCs we reviewed in-depth—DOD's S&A Centers—collectively accounted for about 18 percent of DOD Defense STE obligations annually, whereas Research and Development Laboratory FFRDCs and Systems Engineering and Integration Centers accounted for 27 and 55 percent, respectively (see figure 2). DOD Defense STE obligations to S&A Centers rose from about $320 million in fiscal year 2013 to approximately $380 million in fiscal year 2018, totaling about $2.3 billion during this period. Within each S&A Center, obligations remained relatively constant over the 6 years, with obligations for some FFRDCs higher than obligations for others. For example, on average DOD obligated about $134 million annually to IDA between fiscal years 2013 through 2018, whereas DOD obligated approximately $39 million annually to RAND Arroyo Center during this timeframe. DOD Defense STE obligations to S&A Centers were almost entirely awarded to support research projects requested by DOD. In some cases, work was done in response to congressional direction. For example, RAND Project Air Force (PAF) initiated a fiscal year 2017 independent review and assessment of the Ready Aircrew Program to respond to requirements outlined in the National Defense Authorization Act for Fiscal Year 2017. Overall, according to information provided by DOD sponsors and FFRDC representatives, between fiscal years 2013 through 2017, S&A Centers began work on about 600 research projects annually on behalf of DOD, with about 93 percent of these projects initiated at the request of DOD. The dollar value of these S&A Center projects ranged from about $2,000 to $11 million between fiscal years 2013 through 2017.

DOD Reported It Primarily Considered Strategic Relationships and FFRDC Core Competencies When Sponsoring S&A Centers and Initiating Projects
Sponsoring agreements note, and primary sponsors reported in comprehensive reviews, that S&A Centers are utilized because of DOD's strategic relationships with FFRDCs. As described in the FAR, FFRDCs meet special, long-term research or development needs of the sponsoring agencies. Sponsoring agreements with S&A Centers outline the importance of strategic relationships that have helped these FFRDCs to develop and maintain in-depth knowledge of their sponsors' and users' programs and operations.
In our review of S&A Center sponsoring agreements and comprehensive reviews, we identified that strategic relationships between sponsors and S&A Centers are generally characterized by the stability of long-term capabilities in subject areas important to DOD, access to sensitive and proprietary data and information, and objectivity in the form of freedom from conflicts of interest. These documents also indicate that strategic relationships enable S&A Centers to maintain in-depth knowledge of work sponsor programs and operations. For example, the 2015 sponsoring agreement between the Army and RAND Arroyo Center states that both the Army and RAND Arroyo Center share a strategic relationship, and that the RAND Arroyo Center is structured to maintain strong analytic expertise related to Army policy and operations. In addition, the sponsoring agreement outlines the importance of RAND Arroyo Center's continuity of expertise to the Army, long-term research efforts, and high-quality staff. Office of the Under Secretary of Defense for Acquisition and Sustainment (OUSD(A&S)) officials told us that S&A Centers are oftentimes chosen to perform work for DOD due to unique long-term strategic relationships with sponsors for independent and knowledgeable expertise within core competencies to address sponsors' specific analytic requirements. In some cases, these strategic relationships date back to World War II. Regarding these strategic relationships, OUSD(A&S) officials also told us the primary sponsor has a degree of control over an FFRDC's business affairs that can limit the risks of organizational conflicts of interest at FFRDCs. DOD also cited strategic relationships between DOD and S&A Centers as a reason for using S&A Centers when initiating projects we reviewed. For example:
Prior to initiating a 2016 assessment of the impact of long-term fiscal trends on Army capabilities, the Army determined RAND Arroyo Center was uniquely qualified to conduct the research because the project required knowledge of defense planning scenarios that would have given an industry contractor a competitive advantage, potentially leading to a conflict of interest.
The Army also identified RAND Arroyo Center's long-standing expertise on security cooperation when requesting a fiscal year 2013 study on assessing value in Army security cooperation as a reason RAND Arroyo Center was uniquely suited to complete the study.
Navy primary sponsor officials identified the long-term relationship between CNA, the FFRDC, and the Navy, which has led to broad subject-matter expertise in naval matters, as a reason they used CNA for the fiscal year 2016 study on the assessment of the effects of possible policy changes to a career track program for military officers trained to work with other military services. CNA leadership chose two researchers to lead the effort, one of whom had prior experience in this area.
An OUSD(A&S) official cited RAND National Defense Research Institute's (NDRI) longstanding portfolio on military workforce issues as a reason for using RAND NDRI for a fiscal year 2017 study on the military's 40-year pay table. The official told us that RAND NDRI's prior work in this area would allow for a quicker response and more in-depth analysis to respond to the work request.
In addition to the strategic relationships, sponsoring agreements and comprehensive reviews cited FFRDC core competencies as key factors in establishing and continuing relationships with S&A Centers, which is consistent with provisions outlined in DOD Instruction 5000.77. The DOD instruction states that FFRDCs maintain long-term competencies and capabilities to meet DOD needs that cannot be met by government or other private sector resources as effectively, and these competencies derive from the sponsor's analytical requirements. In general, core competencies include expertise in engineering, research and development, and analysis, and are further described in FFRDC sponsoring agreements and comprehensive reviews. For example:
The Navy 2015 comprehensive review of CNA states that CNA satisfies the Navy's need for highly specialized skills and competencies in Navy warfighting and warfighting support—particularly research staff from CNA's studies and analyses division—to accomplish their operational missions.
The 2019 sponsoring agreement between DOD's OUSD(A&S) and IDA outlined the need for technical and analytical support, citing IDA's four core competencies as the scope of work of the FFRDC: systems and capabilities evaluations, technology assessments, force and strategy assessments, and resource and support analyses.
The Army 2010 and 2014 comprehensive reviews of RAND Arroyo Center stated that RAND Arroyo Center has currency in all requisite Army proficiencies, provides a multidisciplinary research process that integrates and applies competencies with an assurance of consistently high quality, and also has the ability to apply competencies with expedience when an Army request for analytic support requires a quick response.
OUSD(A&S)'s sponsoring agreement with RAND NDRI defines RAND NDRI's research capability and core competencies, such as, but not limited to, global and national security, defense acquisition, intelligence, and system risk management, as means to satisfy essential needs of the FFRDC's work sponsors for policy research and analysis.
Primary sponsor officials we spoke with also told us that FFRDC staff skills and knowledge related to FFRDC core competencies are important to DOD. For example:
Navy officials said CNA is uniquely suited to perform work for the Navy due to CNA's core competencies relating to maritime defense analysis and how those competencies align with Navy goals and requirements.
Army officials told us that RAND Arroyo Center staff has extensive background knowledge and analytical skills relating to reserve affairs, manpower policy, and war game analysis, among other things, in providing work for the Army.
Air Force officials told us that RAND PAF has robust knowledge of Air Force processes and maintains top staff and researchers in each core competency.
OUSD(A&S) officials also told us that sponsors and FFRDCs have a relationship in which sponsors rely on FFRDCs for independent and knowledgeable expertise within their core competencies to address sponsors' analytic requirements. As shown in figure 4, DOD's S&A Center primary sponsors identified 3 to 15 core competencies in their sponsoring agreements with each S&A Center. DOD cited FFRDCs' core competencies as factors that contributed to using S&A Centers when initiating projects we reviewed, as provided by the DOD instruction.
For example:
When initiating a fiscal year 2016 CNA assessment on the effects of possible policy changes to a career track program for military officers who are trained to work with other military services, DOD's Office of the Under Secretary of Defense for Personnel and Readiness cited CNA's core competencies of analysis of maritime resources; maritime program planning; and maritime policies, strategies, and doctrines as justification for using CNA to perform the work, among other things.
In initiating a fiscal year 2014 IDA analysis on satellite ground control, DOD's Office of the Deputy Assistant Secretary of Defense for Space and Intelligence cited IDA's core competencies related to technology, such as systems and capabilities evaluations, as justification for using IDA for the research.

DOD Uses S&A Center Research in a Variety of Ways and Takes Some Steps to Assess the Value of Research and Centers
DOD Uses S&A Center Research to Inform Decisions, Shape Guidance, and Identify Potential Efficiencies
DOD reports that it uses studies and analyses to inform decision-making; shape guidance, policies, and training; and identify opportunities to save time and money.
Inform decision-making. For example, a 2016 study conducted by the RAND Arroyo Center on linking Army cost and performance found that the Army needed an updated tool to inform more strategic allocation of its resources. Among other things, the study contributed to updated strategies to measure the Army's performance regarding force structure and readiness as well as the cost implications for these activities. According to an Army official, the study contributed to the development of updated Army metrics for cost and other performance indicators. In another example, a 2013 research project conducted by RAND NDRI on effectiveness measures of a DOD program to reduce the threat from infectious diseases and biological weapons developed and recommended two sets of metrics to improve program evaluation efforts. According to OUSD(A&S) officials, DOD used the recommended metrics to develop program performance measures.
Shape guidance, policies, and training. For example, a 2013 study conducted by RAND NDRI on the root causes related to DOD weapons programs' cost overruns found, among other things, that DOD needed to re-examine its assumptions when estimating a program's cost, schedule, and technical performance. OUSD(A&S) officials told us the study contributed to DOD's decision to update its policy, processes, management practices, and training curriculum so as to improve estimates. In another example, a 2013 study conducted by RAND Arroyo Center on the value of security missions conducted by the Army's geographically aligned forces found that the use of these forces improved the efficiency of security planning and preparation and recommended a range of process and planning improvements for the Army. According to an Army official, the Army used several of the recommendations to update guidance for preparation and planning for future missions involving regionally aligned forces.
Identify opportunities to improve efficiency. For example, a 2013 study conducted by RAND Arroyo Center on marketing and resources needed for Army recruiting efforts identified strategies aimed at optimizing the Army's annual spending, estimated at nearly $1 billion for recruiters, enlistment bonuses, and television advertising.
An Army official said that the Army has used the recruiting tool developed by RAND Arroyo Center for this study to make decisions, and the Army estimates the tool can reduce costs by potentially hundreds of millions of dollars annually. In another example, DOD reported in its 2015 comprehensive review of RAND PAF that a 2010 study conducted by RAND PAF on aircraft maintenance at centralized repair facilities found that these facilities should be consolidated. According to the Air Force primary sponsor, this study helped the Air Force make decisions that led to saving up to $300 million annually as well as saving time on aircraft inspections.

DOD Has Taken Steps to Assess the Value of S&A Center Research and the Centers
In terms of assessing the outcomes of research, we found that DOD primary sponsors took steps to assess the value of S&A Center research and the centers. The DOD instruction requires that primary sponsors assess the efficiency and effectiveness of the FFRDC in meeting DOD needs in comprehensive reviews, including a review and summary of FFRDC accomplishments and their effectiveness, utilizing factors such as the quality and timeliness of the work produced and the value of projects assessed. Additionally, the DOD instruction provides that the factors of technical quality, responsiveness, value, and timeliness be addressed in annual performance reviews. DOD's FFRDC Management Plan—which preceded the DOD instruction and was in effect until the DOD instruction became effective in January 2018—also required primary sponsors to annually assess the value of FFRDC performance, among other factors, and include summaries of these annual assessments in comprehensive reviews. Primary sponsors generally assess the value of S&A Center research through annual performance reviews (through performance evaluation questionnaires to solicit feedback from work sponsors) and comprehensive reviews. To monitor the execution of research projects, primary sponsors regularly solicit work sponsor input regarding S&A Centers' performance, including the value, technical quality, responsiveness, and timeliness of the work performed. Time frames for soliciting this input vary by primary sponsor, but most do this annually. These questionnaires include one or more sections for work sponsors to add comments about S&A Center work and allow work sponsors to rate S&A Center performance. Some of these questionnaires use a numerical scale. For example, the Air Force questionnaire sent to RAND PAF work sponsors asks respondents to rate project value using a scale from 1 through 10, with 1 indicating "very poor" and 10 "very good." The OUSD(A&S) questionnaire sent to IDA work sponsors asks respondents to rate the value of IDA's work and results using a scale from 1 through 5, where 1 symbolizes either "strongly agree" or "outstanding performance" and 5 symbolizes "strongly disagree" or "poor performance." FFRDC primary sponsors conduct comprehensive reviews at least every 5 years to, among other things, identify the accomplishments made by each FFRDC. In August 2014, we reported that DOD officials described the comprehensive review process as an opportunity to take a broad assessment of the FFRDC and its key competencies beyond the annual assessments of FFRDCs. Included in these comprehensive reviews is a summary of FFRDC accomplishments and effectiveness in meeting work sponsors' needs since the last comprehensive review.
In our examination of the most recent comprehensive reviews for each of the five S&A Centers, we found that the comprehensive reviews summarize the results from the performance evaluation questionnaires and assessed the value of the research in varying ways. For example, the Army questionnaire to RAND Arroyo Center work sponsors assessed value in terms of whether a project was worth the investment monetarily. OUSD(A&S) questionnaires sent to work sponsors assessed the value of IDA work in relation to whether the results were useful, consistent with the level of effort, and if IDA brought competence, expertise, and helpful perspectives to the issues. The Army reported in the 2014 comprehensive review of RAND Arroyo Center that between fiscal years 2010 through 2013, work sponsors provided overwhelmingly positive results that RAND Arroyo Center performance was "worth the level of effort." DOD's Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics—RAND NDRI's primary sponsor prior to the DOD reorganization in 2018—reported in the 2014 comprehensive review of RAND NDRI that in fiscal year 2013, work sponsors provided overwhelmingly positive results that RAND NDRI performance provided long-term value. We also found that comprehensive reviews included anecdotal examples of how DOD used S&A Center research. For example, the Army 2014 comprehensive review of RAND Arroyo Center highlighted 53 of 114 research projects completed between fiscal years 2010 through 2013 to demonstrate how RAND Arroyo Center work met Army research requirements. Likewise, the Air Force primary sponsor's 2015 comprehensive review of RAND PAF highlighted 28 of 207 research projects completed between fiscal years 2010 and 2014 to demonstrate how the Air Force leveraged RAND PAF work to improve efficiency in the department. An Air Force official told us that RAND PAF, and not the Air Force, collected the 28 project examples for the purposes of the comprehensive review.

DOD Does Not Track Whether S&A Center Recommendations Have Been Implemented, but Recently Took Steps Intended to Improve Insights
Another potential way to assess the outcomes of research is to track to what extent a research project's recommendations were implemented, and how. Neither DOD nor primary sponsors currently track the implementation of S&A Center research project recommendations. While primary sponsors are not tracking recommendations, in 2015 one of the S&A Centers—RAND PAF—began tracking recommendations made to the Air Force. According to a RAND PAF representative, the tracking system captures the issue, approach, conclusions, opportunities, and outcomes for each completed project. A RAND PAF representative told us that tracking recommendations is useful for demonstrating the value that RAND PAF provides the Air Force. In April 2019, a Navy official told us that the Navy is working on a database to track CNA reports, including recommendations, report topic, work sponsor, and project funding, among other things, to prevent duplication of requests. The Navy official said this effort is expected to be completed in 2019. Both OUSD(A&S) and Army officials told us that while they do not currently track recommendations, they are considering doing so as part of their oversight efforts. Further, Army officials told us that it is important for the sponsor that implements the recommendations to track how and whether that information was used.
While tracking recommendations is useful according to some primary sponsors, some DOD officials cautioned that tracking recommendations would not provide insights into the overall value across all S&A Center research. DOD officials told us that recommendations are only one potential outcome of S&A Center research and that the value of a study may not be specifically linked to a recommendation. For example, Navy officials said that CNA's projects may present the Navy with options and associated courses of action rather than formal recommendations, and DOD officials also told us that S&A Center work can provide value to DOD that is not always represented by recommendations, such as presentations or research aimed at contributing to the understanding of a particular issue, but without specific recommendations. In February 2019, DOD's Office of the Under Secretary of Defense for Personnel and Readiness issued a memorandum related to the oversight of the Personnel and Readiness Studies and Analysis program. The memorandum tasked the program director with developing a studies and analysis program framework that improves accountability for project results and the implementation of study recommendations. Personnel and Readiness also issued a template "action memo" providing for an executive summary of completed projects as well as implementation plans delineating recommendations made, implementation approach, and plan of action for each recommendation. According to a senior Personnel and Readiness official, work sponsors with reports that were completed or published since September 2018 are subject to these actions. This official noted that the purpose is to increase accountability of the Personnel and Readiness staff regarding the use of FFRDCs and to develop an overall picture of the value proposition of FFRDC research. It is too soon to tell to what extent these memorandums will affect DOD's insights on its implementation of S&A Center recommendations.

DOD and the S&A Centers We Reviewed Have Conflict of Interest Policies and Practices
Regulation Requires FFRDCs to Operate Free from Conflicts of Interest
What are Conflicts of Interest?
A Personal Conflict of Interest exists when an individual employed by an organization is in a position that could materially influence research findings or recommendations and may lack objectivity due to their financial interests, personal activity, or relationships.
An Organizational Conflict of Interest exists when, because of other interests or relationships, an entity is unable or potentially unable to render impartial assistance or advice to the government or the entity might have an unfair competitive advantage.
The Federal Acquisition Regulation (FAR) requires an FFRDC to conduct its business in a manner befitting its special relationship with the government and to be free from conflicts of interest. To perform its responsibilities to the sponsoring agency, an FFRDC and its employees have access beyond that which is common in a normal contractual relationship, including access to sensitive and proprietary data and information, equipment, and property. To accomplish this, the FAR and DOD instruction state that an FFRDC must be free from conflicts of interest and fully disclose financial and outside interests to the sponsoring agency. Conflicts of interest can be personal or organizational.
Personal conflicts of interest can involve, but are not limited to, financial interests of the employee or close family members, other employment, gifts, consulting relationships, other forms of research funding or support, investment in the form of stock or bond ownership, real estate, or business ownership. Additionally, the DOD instruction outlines steps FFRDC parent organizations should take to prevent and mitigate conflicts of interest. These steps include, but are not limited to:
having procedures in place to screen employees for potential conflicts of interest;
requiring disclosure of financial and other interests that might affect the employee's objectivity;
establishing policies and procedures to protect proprietary, privileged, and sensitive information from disclosure; and
reporting any conflicts of interest to the applicable contracting officer or contracting officer's representative and the primary sponsor as soon as they are identified.
See figure 5 for DOD's conflict of interest elements outlined in the DOD instruction that primary sponsors are to require from FFRDC parent organizations.

All Five S&A Centers Have Conflict of Interest Policies and Practices
Each of the five S&A Centers we reviewed has corporate-wide conflict of interest policies and practices which incorporate various key elements of the DOD instruction. For example, all S&A Center policies we reviewed have measures that require personnel to protect proprietary, privileged, and sensitive information. S&A Center representatives told us they undertake various approaches in practice that meet key elements in the DOD instruction in order to ensure they operate in the public interest with objectivity and independence. For example:
Reviewing all personnel annually or on a task-by-task basis for conflicts of interest. Generally, representatives we spoke with from the five S&A Centers address conflicts of interest annually or task-by-task, which is an option outlined in the DOD instruction. For instance, RAND representatives said they perform task-by-task, instead of annual, conflict of interest reviews because staff do not know which projects they will be working on during the year. In addition, IDA and RAND representatives told us they screen employees upon hire as well as when an employee initiates a new project, and both IDA and RAND have automated their screening processes. IDA representatives explained that their automated tool screens personnel at the initiation of each project, including, for example, a process to determine if staff assigned to a project have any affiliations with industry or companies and competitors in the particular field of study. If staff or members of their households do have affiliations, IDA may issue a waiver if the financial interest (such as but not limited to stocks, stock options, and bonds) in a single company is below $15,000, the threshold for disclosure outlined in the DOD instruction. IDA representatives also told us that IDA staff are required to self-report any changes to previous financial interest disclosures during the year. In another example, RAND representatives said their automated conflict of interest tool screens for conflicts of interest by comparing areas of work RAND performs to similar areas in the private sector. Additionally, the system will identify any staff who have not submitted a conflict of interest statement within a year.
Providing initial and annual conflict of interest training for all personnel.
S&A Center representatives told us that they perform training related to or specifically covering conflicts of interest in varying ways. IDA's corporate-wide conflict of interest policy includes initial and annual conflict of interest training elements, as outlined in the DOD instruction. For example, IDA's policy states that all employees are to participate in conflict of interest training upon initial hire, and in annual refresher training thereafter. The other four S&A Centers did not explicitly include annual conflict of interest training in corporate-wide policies, but representatives told us they provide annual ethics training, which includes training on conflicts of interest, to their employees. For example, CNA representatives told us they provide ethics and conflicts of interest training to staff, which is required by their contract with the Navy. CNA representatives told us that if CNA staff do not complete the required training, the staff will be blocked from accessing CNA's time card system and will not receive pay until the training is complete. In another example, RAND representatives told us they have annual training that covers ethics, conflicts of interest, and culture and discrimination issues for newly hired staff.
Representatives from each of the S&A Centers told us they attempt to mitigate potential conflicts of interest as soon as the potential conflicts become known and before they become a reportable conflict. For example, CNA representatives told us that in one instance, a CNA employee's spouse worked for the Navy, and CNA mitigated this potential conflict by transferring the employee to another project where the relationship did not pose a potential conflict. In another example, when a RAND employee inherited stock in the middle of a project, a potential conflict of interest was mitigated by the employee selling the inherited stock. In another instance, a RAND employee was initially staffed to a project related to an area of work a spouse worked on commercially, and RAND mitigated the potential conflict by recusing the employee from the project.

Agency Comments
We provided a draft of this report to DOD for review and comment. In its comments, DOD concurred with our findings. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or LudwigsonJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Selected Study and Analysis Center Research Projects
Table 2 provides detailed information on the 22 projects we selected for review.

Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Janet McKelvey (Assistant Director), Andrew Burton (Analyst-in-Charge), Mallory Bryan, Lisa Fisher, and Jordan Kudrna made key contributions to this report. Additional assistance was provided by Marie Ahearn, Pete Anderson, Jenny Chanley, Joseph Cook, Julia Kennon, Tind Shepper Ryen, and Roxanna Sun.
Why GAO Did This Study
For decades, the government has contracted and entered into agreements to sponsor academic, nonprofit, or private organizations to operate FFRDCs. DOD military departments and other DOD components sponsor 10 FFRDCs to help develop innovative solutions to diverse national security threats. Five FFRDCs—referred to as S&A Centers—aim to provide independent analyses to support DOD policy development. Federal regulation and DOD guidance specify sponsors' oversight activities, including the establishment, use, and review of FFRDCs. A Senate Armed Services Committee report included a provision that GAO review DOD's use of FFRDCs. This report describes, among other objectives: (1) DOD obligations (in dollars) to DOD's FFRDCs from fiscal years 2013 through 2018; (2) factors that led DOD to use S&A Centers for research; and (3) how DOD used this research. GAO analyzed obligation data for DOD's 10 FFRDCs. GAO focused further review on DOD's five S&A Centers that primarily provide studies and analysis. GAO analyzed sponsoring agreements, comprehensive reviews, and 22 S&A Center research projects selected based on factors such as obtaining a mix of project costs, and interviewed DOD and FFRDC representatives.

What GAO Found
From fiscal years 2013 through 2018, the Department of Defense (DOD) obligated about $2 billion annually to 10 DOD-sponsored Federally Funded Research and Development Centers (FFRDC), excluding obligations related to two intelligence programs and capital equipment costs (such as antenna or radar systems). Of these obligations, roughly $400 million annually went to a subset of five FFRDCs called Study and Analysis (S&A) Centers.
Note: Obligation amounts were not adjusted for inflation and totals may be affected by rounding. Numbers in parentheses refer to the number of FFRDCs within each category.
DOD primarily cited strategic relationships between the sponsor (the agency responsible for the overall use of the FFRDC) and the FFRDC and the core competencies of the FFRDC as factors when sponsoring S&A Centers and initiating projects. For example:
Strategic relationships. The Army determined that an S&A Center was uniquely qualified to conduct a research project that required knowledge of defense planning scenarios, noting that awarding the project to an industry contractor would have given that contractor a competitive advantage.
Core competencies. The Center for Naval Analyses has core competencies in Navy policy, strategy, and doctrine, among other things.
S&A Centers perform hundreds of research projects annually on behalf of DOD, and DOD reported using them to inform decisions, shape guidance, and identify opportunities to improve efficiency. For example, one S&A Center's study on the causes of weapons system cost overruns found DOD needed to re-examine its assumptions when estimating program cost, schedule, and performance. DOD officials told GAO the study contributed to policy, process, and training updates.
Background Federal agencies are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. Cybersecurity—the security of these systems and data—is vital to public confidence. Ensuring the cybersecurity of the nation, including protecting privacy and sensitive data, and IRS's efforts to address tax refund fraud due to identity theft are issues included in our High Risk List. IRS relies on information system security controls to protect the confidentiality, integrity, and availability of the sensitive financial and taxpayer information that resides on its systems. Federal law and guidance specify requirements for protecting federal information and systems. The Federal Information Security Modernization Act of 2014 (FISMA) is intended to provide a comprehensive framework for ensuring the effectiveness of information system security controls over information resources that support federal operations and assets. To accomplish this, FISMA requires each agency to develop, document, and implement an agency-wide information security program to provide security for the information and systems that support the operations and assets of the agency, using a risk-based approach. However, taxpayer information held by third-party providers is generally outside of these requirements, according to IRS officials. Fraudsters may target third parties, such as paid preparers and tax software providers, to steal taxpayer data—defined for our purposes as personally identifiable information and other personal, financial, or federal tax data—which can then be used to commit identity theft refund fraud or other types of financial crimes. Viewed broadly, identity theft tax refund fraud consists of two crimes: (1) stealing or compromising taxpayer data and (2) using stolen (or otherwise compromised) taxpayer data to file a fraudulent tax return and collect a fraudulent refund. Figure 1 presents an example of how this crime can work. In this example, a taxpayer may alert IRS of identity theft refund fraud. Alternatively, IRS can detect identity theft refund fraud through its automated filters that search for specific characteristics, as well as through other reviews of taxpayer returns. Third-party providers retain a large amount of electronic tax information, which makes them targets of various types of data theft incidents. Five common types of security incidents are shown in table 1. The number of electronically filed (e-filed) tax returns, and therefore the amount of electronically available data that are vulnerable to security incidents, has been increasing over the past several decades from 4.2 million in 1990 to 135.5 million in 2018. In 2018, approximately 90 percent of the 150.5 million filed individual income tax returns were filed with IRS electronically (see figure 2). Paid preparers prepared more than half of the e-filed returns in 2018. Multiple IRS offices have discrete responsibilities in overseeing how third-party providers secure taxpayer information, as depicted in figure 3. Oversight responsibilities are as follows: Stakeholder Liaison works with the paid preparer community to educate preparers about information security risks and guide them through the process of resolving security issues when security incidents are reported. This office is also the intake point for security incident information for paid preparers.
Cybersecurity works to protect taxpayer information and IRS’s electronic systems, services, and data from internal and external cybersecurity threats—such as damage to computers, electronic communications systems, or information contained in those systems—by implementing security practices. Criminal Investigation (CI) reviews security incident reports to determine whether criminal action has occurred and investigates any potential criminal violations of applicable laws. It also investigates large-scale tax schemes and fraud. The Return Preparer Office is responsible for matters relating to the registration and the program compliance of tax return preparers who prepare returns for compensation. The office also engages in outreach and education programs and administers IRS’s Annual Filing Season program, a voluntary program to encourage noncredentialed preparers to participate in continuing education courses. Small Business/Self-Employed (SB/SE) Examination revenue agents visit e-file providers to ensure they are complying with the Authorized e-file Provider program’s requirements. Electronic Products and Services Support (EPSS) administers the Authorized e-file Provider program. It is also responsible for updating IRS Publications 1345 and 3112, which outline the requirements of the program. EPSS officials reported that they must coordinate with other business units to update individual references in the publications. EPSS is the intake point for security incident information for online providers and e-Services users, according to officials. Return Integrity and Compliance Services (RICS) monitors taxpayer accounts for potential fraud to protect revenue. RICS also manages the security incident data reports that are submitted by tax software providers. RICS is the intake point for security incident information for Security Summit and Identity Theft Tax Refund Fraud - Information Sharing and Analysis Center (ISAC) members, as described below, and actively monitors ISAC alerts from the online platform for new information that may not have been reported elsewhere. While the Office of Professional Responsibility (OPR) does not have oversight responsibilities over the security of tax information at third parties, it administers the regulations that govern the practice of tax professionals who interact with IRS on behalf of taxpayers, including attorneys, certified public accountants, and enrolled agents, among others. Treasury Department Circular 230, which incorporates the regulations, directed the Commissioner to establish OPR and any other offices within IRS to administer and enforce the regulations. However, Circular 230 does not include a requirement for practitioners concerning the security of taxpayer information. In recent years, IRS has taken a number of steps to help battle identity theft refund fraud. In 2015, IRS formed the Security Summit, a public-private partnership to protect the nation’s taxpayers and the tax system from identity theft refund fraud. The summit has representatives from IRS, state tax administrators, and industry partners including the software industry, tax professional associations, and payroll and tax financial product processors. IRS launched ISAC in the 2017 filing season. It aims to allow IRS, states, and tax preparation industry partners to quickly share information on identity theft refund fraud. 
It includes two components: an online platform controlled by IRS to communicate data on suspected fraud, and a collaborative organization governance structure comprising IRS, states, and industry. IRS uses a Rapid Response Team in partnership with states and industry members to coordinate responses to identity theft refund fraud incidents. The team aims to respond to significant threats within 24 to 72 hours of their discovery. The Rapid Response Team was deployed for six incidents in 2016, one in 2017, and was not deployed for any incidents in 2018. IRS's Security Requirements for Third-Party Providers Do Not Provide Assurance That Information Is Being Protected Different Types of Third Parties Have Varying Responsibilities for Safeguarding Taxpayer Information under IRS's Authorized e-file Provider Program IRS seeks to help safeguard taxpayers' information and the electronic filing system by prescribing requirements for various types of third-party providers through its Authorized e-file Provider program. These requirements are outlined in Revenue Procedure 2007-40 and Publication 1345, Handbook for Authorized IRS e-file Providers of Individual Income Tax Returns. IRS Revenue Procedure 2007-40 states that the security of taxpayer accounts and personal information is a top priority for the agency. Further, the Revenue Procedure states that it is the responsibility of each IRS Authorized e-file Provider to have security systems in place to prevent unauthorized access to taxpayer information by third parties. Some of the requirements included in this program are applicable to all types of Authorized e-file Providers, while others are applicable to one group or another. Businesses—including sole proprietors—that wish to e-file tax returns on behalf of clients must apply to IRS's Authorized e-file Provider program and choose a provider type, as described in table 2. According to IRS, in 2018 there were more than 325,000 Authorized e-file Providers, some of which were paid preparers. More than 790,000 paid preparers had registered with IRS as of 2018; accordingly, not all paid preparers are Authorized e-file Providers, and those who are not are therefore not covered by the requirements of the Authorized e-file Provider program. However, a business that has been approved as an electronic return originator (ERO) may employ multiple paid preparers who are not Authorized e-file Providers. Those paid preparers would be allowed to e-file returns under the supervision of their ERO employer. According to IRS Publication 3112, the activities and responsibilities for return preparation and e-filing are distinct and different from each other. Tax software providers, which IRS refers to as software developers in its Authorized e-file Provider program, develop tax return software that individuals and businesses can use to file their own returns, or that paid preparers can use when filing returns on behalf of clients. Online providers are the subset of tax software providers that allow individual taxpayers to self-prepare returns and file them with IRS. Providers that develop software for paid preparers' use do not fall under the definition of an online provider. IRS Does Not Fully Incorporate the Federal Trade Commission Safeguards Rule into Its Authorized e-file Provider Program Requirements IRS has not fully incorporated the Federal Trade Commission (FTC) Safeguards Rule into its requirements for all provider types under the Authorized e-file Provider program.
The Gramm-Leach-Bliley Act provided FTC with the authority to require that financial institutions subject to its jurisdiction ensure the security and confidentiality of customer records and nonpublic personal information; protect against any anticipated threats or hazards to the security of such records; and protect against unauthorized access to or use of such records or information which could result in substantial harm or inconvenience to any customer. FTC, in turn, issued a regulation known as the "FTC Safeguards Rule." The FTC Safeguards Rule applies to financial institutions including third-party providers that help taxpayers file tax returns, such as paid preparers and providers of software that allows individuals to prepare their own tax returns. The FTC Safeguards Rule requires those institutions to develop, implement, and maintain a comprehensive written information security program. The program must contain administrative, technical, and physical safeguards that are appropriate to the provider's size and complexity, the nature and scope of the provider's activities, and the sensitivity of any customer information at issue. IRS addresses the FTC Safeguards Rule through its Revenue Procedure 2007-40. This Revenue Procedure provides the procedures for the Authorized e-file Provider program, and clearly states that violations of the provisions of the Gramm-Leach-Bliley Act and the implementing rules and regulations promulgated by FTC are considered violations of the Revenue Procedure. It also states that violations may subject an Authorized e-file Provider to penalties or sanctions, including suspension or expulsion from the Authorized e-file Provider program. However, the IRS publications that provide further information on the Authorized e-file Provider program only briefly discuss the FTC Safeguards Rule, and do not provide details on the required elements of an information security program. For example: Publication 3112, IRS e-file Application and Participation, states that providers should become familiar with the Privacy and Security Rules that implement the Gramm-Leach-Bliley Act, and with other important information regarding the safeguarding of personal information available on the FTC website. The publication does not detail each of the required elements of an information security program. Publication 1345, Handbook for Authorized IRS e-file Providers of Individual Income Tax Returns, which was updated in February 2019, notes FTC's role in protecting taxpayer data and generally describes the requirement of implementing and maintaining a comprehensive information security program, including the requirement that administrative, technical, and physical safeguards be appropriate to the business's size, nature and scope of its activities, and the sensitivity of the customer information. The publication does not detail each of the required elements of an information security program. We identified other IRS publications that are not exclusively related to the Authorized e-file Provider program that discuss the requirements of the FTC Safeguards Rule, as well as other information security measures that serve as leading practices for the broader population of tax professionals. For example, in 2018, IRS updated Publication 4557, Safeguarding Taxpayer Data: A Guide for Your Business. The publication aims to help tax professionals understand basic security steps, recognize signs of data theft, respond to data losses, and understand and comply with the FTC Safeguards Rule.
This publication refers to the FTC rule and tax professionals' responsibilities to create and enact security plans, and provides a checklist from FTC to help third-party providers implement the information security plans. IRS Publication 4600, Tips for Safeguarding Taxpayer Data, also discusses elements of the FTC Safeguards Rule. However, while IRS references these documents in Publications 3112 and 1345, Authorized e-file Providers are not obligated to consult or follow these documents. In addition, most paid preparers do not know about the FTC Safeguards Rule and likely do not have information security plans for their places of business, according to officials from several tax preparation industry groups. Industry group officials also told us that there are misconceptions about who should be responsible for implementing information security. For example, one industry group official said that paid preparers and EROs often think that their tax software providers will provide security services or that their computer firewall or antivirus software will be enough protection. Modifying the Authorized e-file Provider program requirements to explicitly incorporate the FTC Safeguards Rule's elements of an information security program would be consistent with Internal Control Standards. The standards call for management to consider the external requirements—such as laws, regulations, and standards—and incorporate these requirements into an agency's objectives when setting the standards for the compliance of other entities. IRS officials told us that they do not believe that federal law provides IRS with any authority to enforce the FTC Safeguards Rule. However, IRS has already stated in Revenue Procedure 2007-40 that compliance with the FTC Safeguards Rule is required for participation in the Authorized e-file Provider program. Modifying its requirements to explicitly state the elements of an information security program as required under the FTC Safeguards Rule would help IRS ensure that all types of Authorized e-file Providers are aware of, and comply with, the FTC Safeguards Rule, which could help them better protect taxpayers' information. While modifying the Authorized e-file Provider program may not reach paid preparers who are not part of the Authorized e-file Provider program, it will strengthen the controls for EROs, tax software providers, and online providers. IRS Lacks Explicit Authority to Require Minimum Security Standards for Paid Preparers' or Authorized e-file Providers' Systems IRS's Authorized e-file Provider program does not outline a set of minimum information security standards for systems used by paid preparers or Authorized e-file Providers. When we reviewed IRS's publications for Authorized e-file Providers, we found that specific information security standards were outlined for online providers, but there were no specific standards for other types of Authorized e-file Providers or paid preparers. Officials from tax preparation groups we interviewed and IRS raised issues that relate to paid preparers' system risks. First, the tax preparation industry groups that we spoke with stated that most paid preparers, especially small firms or individual preparers, did not know the steps that they should take to protect taxpayer information on their systems. IRS officials reported that paid preparers often do not know that they experienced a security incident until IRS informs them something is wrong with their filing patterns.
Second, according to officials from several tax preparation industry groups, paid preparers often have several misconceptions as to what is required of them in protecting taxpayer data, causing confusion. Industry group officials we interviewed told us that IRS's current publications are not clear about requirements versus leading practices. For example, IRS Publication 4557, Safeguarding Taxpayer Data, provides paid preparers with some leading practices to protect taxpayer data, but the leading practices are not legal requirements, with the exception of the FTC Safeguards Rule. An official from the Return Preparer Office explained that imposing any standards for paid preparers, whether related to competency or information security, without explicit authority would leave IRS vulnerable to legal challenges because of a recent court case that found that IRS does not have the authority to regulate the competency of paid preparers. According to IRS's Office of Chief Counsel, this ruling, combined with the lack of explicit statutory authority, prevents IRS from establishing system standards for paid preparers, because while 31 U.S.C. § 330 authorizes the Secretary of the Treasury to regulate the practice of practitioners before the Department of the Treasury, mere return preparation, including through systems practitioners use to prepare and transmit tax returns, is not considered practice before IRS. In contrast to paper filing of tax returns, certain security measures need to be taken for e-filing returns to protect the integrity of the e-file system; thus, IRS has implicit authority to regulate e-file providers insofar as their activities relate to electronically filing returns with IRS, according to IRS Office of Chief Counsel officials. These officials also noted that no single provision of the Internal Revenue Code provides IRS explicit authority to regulate the standards for e-file providers. Instead, Internal Revenue Code § 7803 gives the Commissioner of Internal Revenue broad authority to administer and supervise the internal revenue laws, and § 6011 authorizes IRS to require returns and regulate the form of such returns. When taken as a whole, these provisions of the Internal Revenue Code show congressional intent to provide the Secretary of the Treasury with broad authority to administer the method for, and requirements surrounding, the e-filing of federal tax returns, according to IRS officials. Nevertheless, having explicit authority to establish security standards for the systems of Authorized e-file Providers may help IRS better ensure the protection of taxpayers' information and mitigate the risk of legal challenges to IRS's ability to do so. IRS Office of Chief Counsel officials also noted that for several years the Department of the Treasury has sought additional authority for IRS to regulate all tax return preparers. For example, this request was included in the most recent (fiscal year 2020) Congressional Budget Justification. The justification for this additional authority specifically refers to the competency of tax return preparers, but does not mention security standards for the systems that those preparers use. Similarly, we have previously suggested that Congress consider granting IRS the authority to regulate the competency of paid preparers (that suggestion did not cover regulating the security of paid preparers' systems). As of April 2019, Congress had not provided such authority.
Without Congress providing IRS with explicit authority to regulate the security requirements for the systems of paid preparers or Authorized e-file Providers, Congress and IRS have limited assurance that the processes used by paid preparers or Authorized e-file Providers are adequately protecting taxpayers' information against electronic data breaches and potential identity theft tax refund fraud. Having such explicit authority would enable IRS to establish minimum security requirements and help ensure improved taxpayer information security by paid preparers and Authorized e-file Providers. IRS Does Not Have Standardized Security Requirements for All Tax Software Providers IRS does not have a robust set of information security requirements for all tax software providers in the Authorized e-file Provider program. Instead, IRS has limited security requirements for the subset of tax software providers designated as online providers, outlined in IRS Publication 1345, as we discuss in the next section. In Publication 4164, Modernized e-File Guide for Software Developers and Transmitters, IRS also provides some information on "security directive rules of behavior for accessing IRS business systems" while transmitting returns to IRS. However, this document does not provide a specific list of controls for these providers to follow. IRS has been working with the Security Summit to implement a subset of the NIST Special Publication 800-53 security and privacy controls for the industry members of the Security Summit, which represents a subset of all tax software providers. The Security Summit partners agreed voluntarily to implement about 140 tax-related controls over a 3-year period and provide self-assessments related to the implementation of those controls. IRS reported in October 2018 that 15 of the 21 Security Summit industry partners had voluntarily certified that they implemented the NIST controls in years 1 and 2 of the rollout schedule. IRS officials reported that they later determined three of the remaining six industry partners are financial institutions that do not handle taxpayer data; thus the standards are not applicable to them. IRS officials told us that they are actively following up with the remaining three providers to determine why they have not completed and submitted the self-assessment, and to what degree they have implemented the subset of NIST security controls. While this is an important and significant first step, the 15 industry partners in the Security Summit that are voluntarily adhering to the NIST security controls represent about a third of all of the tax software providers that IRS has approved to be a part of the Authorized e-file Provider program. According to IRS, these 15 Security Summit partners transmitted about 132.6 million (98.8 percent) of all of the electronically filed returns in 2018; the other two-thirds of tax software providers in the Authorized e-file Provider program transmitted about 1.6 million (1.2 percent) electronically filed returns. A Security Summit membership criterion states that only those providers that filed more than 50,000 returns with IRS during a filing season can be members, but not all tax software providers meet this threshold. Internal Control Standards state that managers consider external requirements when defining objectives, such as those set by standard-setting bodies designed to comply with laws, regulations, or standards.
Management should incorporate those requirements into its objectives and set those requirements through the established standards of conduct, oversight structure, organizational structure, and expectations of competence. By statute, NIST is responsible for developing information security standards and guidelines, including minimum requirements for federal information systems. According to Special Publication 800-53, the controls outlined provide a holistic approach to information security and risk management by providing organizations with the breadth and depth of security controls necessary to fundamentally strengthen their information systems and the environments in which those systems operate—contributing to systems that are more resilient in the face of cyber attacks and other threats. While the guidelines in this publication are applicable to all federal information systems, other organizations are encouraged to consider using the guidelines, as appropriate. The applicability of the selected NIST controls is evidenced by the adoption of those controls by the Security Summit partners. While most returns are filed through tax software providers that are voluntarily adhering to the security controls, these controls are not required and do not apply to all tax software providers. Additionally, IRS officials that are a part of the Security Summit stated that they cannot enforce the subset of NIST controls with the remaining Security Summit partners because the controls were set up in a voluntary program. IRS officials from multiple offices did not have a clear reason as to why this subset of NIST controls has not been incorporated into the requirements for the entire population of tax software providers in the Authorized e-file Provider program, even though some security standards had been incorporated into the Authorized e-file Provider program for a limited set of providers (online providers) as discussed in the next section. In addition, as previously discussed, IRS can prescribe the requirements to which Authorized e-file Providers must adhere when e-filing returns for taxpayers. Incorporating fundamental security controls into its Authorized e-file Provider program would give IRS greater assurance that tax software providers have identified and addressed information security risks consistent with professional standards. This missed opportunity to update the requirements for tax software providers by adopting the subset of NIST controls is due, in part, to IRS's lack of centralized leadership over the security of taxpayer information collected by paid preparers and tax software providers. As previously discussed, multiple IRS offices have discrete responsibilities for overseeing the security of taxpayer information while at third parties; however, no one office is responsible for, or has the authority to provide, the strategic vision, oversight, or coordination over all aspects. Further, while IRS offices coordinate to some extent, there is not a formalized governance structure, such as a steering committee, that would help provide this level of leadership, coordination, and collaboration to the agency. According to Internal Control Standards, an agency's organizational structure provides management's framework for planning, directing, and controlling operations to achieve agency objectives.
Management develops an organizational structure with an understanding of overall responsibilities, and assigns these responsibilities to discrete units to enable the organization to operate in an efficient and effective manner and reliably report quality information. A sound internal control environment requires that the agency's organizational structure clearly defines key areas of authority and responsibility, and establishes appropriate lines of reporting. Without setting and requiring the same security standards for all tax software providers, IRS does not have assurance that these providers have an equivalent level of standards in place to adequately protect taxpayer information. Further, in continuing to operate a voluntary security controls program, IRS does not have assurance that those software providers who are currently adhering to the standards will continue to do so in the future. Finally, without centralized leadership in this area, it is unclear how IRS will adapt to changing security threats in the future and ensure those threats are mitigated. IRS Has Not Updated the Authorized e-file Provider Program's Information Security Standards for Online Providers Since 2010 Online providers—tax software providers that allow individuals to prepare their own tax returns—have additional requirements for security and privacy that they must follow, as outlined in Publication 1345. IRS established six security, privacy, and business standards for online providers, including requirements for developing information privacy and security policies and reporting security incidents. Compliance with these six standards for online providers became mandatory on January 1, 2010; however, IRS has not substantially updated them since then (see appendix II for the text of the six security, privacy, and business standards). These additional requirements do not apply to paid preparers, EROs, or providers of tax software used by paid preparers. Without updating standards regularly, the standards can become outdated and lose their ability to protect information from known vulnerabilities as technology changes. For example, IRS's current guidance refers to an outdated encryption standard. Specifically, IRS requires online providers to use, at minimum, Secure Sockets Layer 3.0 and Transport Layer Security (TLS) 1.0. However, NIST Special Publication 800-52 and industry leading practices recommend the use of TLS 1.1 as the minimum level of encryption due to known weaknesses of using TLS 1.0 to encrypt data in transmission. While the standard allows for use of later encryption versions, it refers to a minimum encryption standard that has known weaknesses. As a result, IRS and taxpayers have limited assurance that taxpayer data are protected according to NIST guidelines and industry leading practices. Recommended controls outlined in NIST Special Publication 800-53 and our Fraud Risk Framework call for continuous monitoring and regular fraud risk assessments, respectively, to help determine the effectiveness of controls in a program. Internal Control Standards also call for management to periodically review the policies, procedures, and related activities for continued relevance and effectiveness in achieving the entity's objectives or addressing related risks.
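To make the encryption discussion above concrete, the following is a minimal Python sketch of how a provider might configure a protocol floor. It is a hypothetical example rather than IRS guidance, and it uses TLS 1.2 as the floor, which satisfies and exceeds the TLS 1.1 minimum that NIST Special Publication 800-52 recommends.

```python
import ssl

# Minimal sketch, not IRS guidance: configure a TLS context that
# refuses the weak protocol versions discussed above. With the floor
# set to TLS 1.2, a peer offering only SSL 3.0 or TLS 1.0 fails the
# handshake instead of silently negotiating a weak version.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print("minimum protocol version:", context.minimum_version.name)
```

The point of a published minimum is visible here: peers negotiate down to whatever the configuration still permits, so a standard that names TLS 1.0 as acceptable leaves that weak version reachable.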
When we asked why the six standards in Publication 1345 had not been updated since 2010, a senior Wage and Investment Division official stated that the publication is subject to an annual review by multiple IRS offices, but no office had identified the need to update the standards as part of these reviews. An Electronic Products and Services Support (EPSS) official told us that the standards were initially developed based on the latest technology at the time. However, according to this official, technology can become obsolete quickly, and adapting standards to keep pace with technological changes can require a lot of resources. Not updating the requirements for online providers again points to a missed opportunity due to IRS's lack of centralized leadership over the security of taxpayer information at paid preparers and tax software providers. In this case, centralized leadership may have identified the need to update the standards. Without periodically reviewing and updating the standards themselves, IRS has limited assurance that the standards have kept pace with technological changes, and therefore, that the online providers are protecting the taxpayer's data. IRS Uses Various Outreach Techniques to Encourage Third-Party Providers to Protect Taxpayer Information IRS uses a variety of outreach tools to communicate with third-party providers, such as paid preparers and tax software providers, about information security risks. IRS tries to educate these tax professionals about ways to improve information security practices and the benefits of doing so. For example, IRS informs paid preparers, tax software providers, and others about the importance of reporting security incidents in a timely manner to help ensure that action can be taken quickly to help protect their clients and avoid fraudulent returns being filed. Similarly, Stakeholder Liaison advises paid preparers about the steps to take to ensure that their systems are no longer vulnerable to compromise, according to Stakeholder Liaison officials. Below are examples of IRS's recent communication efforts. IRS and the Security Summit collaborated on tax professional outreach campaigns. For example, in 2018, they launched the Tax Security 101 campaign, which provided tax professionals with basic information on how to protect taxpayer data. Each year, IRS sponsors nationwide tax forums largely targeted toward paid preparers such as enrolled agents, certified public accountants, and noncredentialed preparers. The 2018 forum included five seminars focused on securing taxpayer information, such as "Data Privacy and Cybersecurity for Tax Professionals" and "Data Compromises—It's Not a Matter of 'If' but 'When.'" IRS hosts webinars throughout the year to inform tax professionals and taxpayers about various topics, including information security. For instance, in October 2018, IRS hosted a webinar called "Protect Your Clients, Protect Yourself: Tax Security 101." The webinar covered common security threats, signs of data theft, ways to report taxpayer data theft to IRS, and tax preparers' obligations to create a written information security plan consistent with the FTC Safeguards Rule. Stakeholder Liaison has participated in over 1,000 virtual and in-person events since June 2015 where data security was a primary topic or featured message, according to Stakeholder Liaison officials. Further, the officials reported that there were over 165,000 attendees at these events.
IRS uses social media outlets such as YouTube and Twitter to provide information to tax professionals. For example, in July and October 2017, IRS released two YouTube videos about information security for tax professionals titled "Why Tax Professionals Need a Security Plan" and "What to Do After a Tax Professional Data Compromise." Similarly, IRS's tax professional Twitter account, @IRStaxpros, releases information about information security (see figure 4). Though IRS has various ways to disseminate information to tax professionals, it faces a challenge reaching paid preparers who are not affiliated with larger industry groups or who do not visit the IRS.gov website, according to both IRS officials and industry group officials. According to Return Preparer Office officials, many paid preparers are not linked to standard tax communication channels, such as direct communications from IRS through news releases or email alerts. IRS and industry group officials told us one barrier to reaching these paid preparers is preparers' belief that their businesses are too small to be a target for fraudsters. IRS officials recognize the challenges and said that they continue to address them by speaking with tax professionals about how to increase paid preparers' awareness of information security risks, such as by making materials easy for preparers to read. IRS's Authorized e-file Provider Monitoring Largely Focuses on Physical Security Controls and Is Inconsistent among Provider Types IRS Monitoring Efforts for EROs Have Limited Focus on Cybersecurity IRS's monitoring program is primarily focused on EROs' adherence to multiple aspects of the Authorized e-file Provider program, such as requirements for Earned Income Tax Credit due diligence, advertising, and electronic signatures. The monitoring program also calls for monitoring of physical information security, which is not required as part of the Authorized e-file Provider program. The Internal Revenue Manual (IRM) details mechanisms and practices for monitoring Authorized e-file Providers, including EROs and online providers. As part of this monitoring, Small Business/Self-Employed (SB/SE) conducts field visits, the number of which more than doubled in the past few years, from almost 300 in 2015 to about 650 in 2018. SB/SE revenue agents visit providers to monitor their operations and to advise providers of any program violations. IRS uses monitoring visits to investigate allegations, complaints, and warnings against Authorized e-file Providers, as well as to determine general compliance with program requirements. While any provider type could undergo a monitoring visit, IRS officials informed us that they primarily conduct field monitoring visits for EROs, which are selected using risk-based criteria. According to these officials, SB/SE coordinates with other IRS offices to provide field monitoring on an as-needed referral basis for other types of Authorized e-file Providers. IRS officials reported that they were unable to confirm the specific number of recent referral monitoring visits but said there were likely fewer than five referrals in the past couple of years. However, the IRM section detailing the monitoring visits provides little direction for monitoring of information security standards from IRS Publication 1345. The IRM lists monitoring techniques for security, but they focus largely on physical security rather than cybersecurity controls for the electronic aspects of information security.
For example, the IRM suggests that agents ask about access to physical files or office keys rather than about how providers send emails containing taxpayer information. According to our Fraud Risk Framework, agencies should use a risk-based approach to evaluate outcomes and adapt activities to improve fraud risk management. As fraudsters increasingly target paid preparers and tax software providers through cybersecurity attacks, risk-based monitoring and evaluation of cybersecurity controls could help IRS identify fraud risks and potential control deficiencies among third-party providers. IRS officials said that the SB/SE revenue agents who conduct monitoring visits do not have the technical expertise to effectively monitor information security or cybersecurity controls. For example, an IRS official stated that the IRM monitoring techniques ask about physical security instead of cybersecurity because revenue agents can verify whether filing cabinets are locked or whether computer passwords are visible, but they cannot verify cybersecurity controls, such as whether a provider's information security policies are consistent with government and industry guidelines. Further, an SB/SE official said that, while SB/SE is responsible for monitoring Authorized e-file Providers, cybersecurity is not part of SB/SE's role. However, we believe there are opportunities for revenue agents to ask basic cybersecurity questions and, at a minimum, use monitoring visits to help promote awareness of leading practices designed to help protect taxpayer information. For example, revenue agents could ask providers if they have secured their office's wireless capabilities, use encryption for sensitive business information, have a designated official in case of a security incident, or know their assigned stakeholder liaison, among other things. Additionally, opportunities exist to leverage resources across IRS to monitor cybersecurity controls. For instance, Cybersecurity has technical expertise that SB/SE could leverage to help monitor these requirements, according to a Cybersecurity official. Without effective monitoring of information security standards or cybersecurity controls, IRS has limited assurance that EROs' systems are adequately protecting taxpayers' information. If these third parties do not adequately protect that information, taxpayers will face increased risk of both tax-related and non-tax-related identity theft. Improved monitoring could help IRS ensure that it is more effectively detecting and responding to changing fraud risks among providers. Additionally, updating documentation of monitoring activities, as needed, such as the IRM and internal guidance, along with staff training, would provide IRS with better assurance that the greatest risk areas are addressed appropriately. IRS Does Not Consistently Monitor Authorized e-file Providers' Cybersecurity Controls IRS conducts limited monitoring of the online provider subset of tax software providers enrolled in the Authorized e-file Provider program. However, these monitoring efforts are not part of the systematic Authorized e-file Provider monitoring program for EROs described above, nor are they documented in the IRM or relevant job aids. According to EPSS officials, IRS does not currently monitor all of the standards for online providers. IRS staff can remotely monitor three of the six security, privacy, and business standards for online providers through electronic means, according to EPSS officials (see table 3).
EPSS officials stated that the other three standards cannot be monitored remotely (see appendix II for the full text of the six security, privacy, and business standards). For two of the three standards that cannot be monitored remotely, EPSS officials said it would be feasible for online providers to send the results of vulnerability scans (standard 2 in table 3) and privacy seal vendor certifications (standard 3 in table 3) to IRS for monitoring purposes. However, according to these officials, EPSS does not have dedicated staff who could review these results. Similarly, SB/SE, which conducts Authorized e-file Provider monitoring, does not have the technical expertise to review these results, as previously discussed. In addition, IRS cannot monitor the requirement to report security incidents, according to officials, because there is no way for the agency to know whether security incidents have occurred but were not reported. However, every fiscal year, IRS asks online providers to self-certify that they are meeting all six of the security, privacy, and business standards in IRS Publication 1345, according to an EPSS official. To self-certify, providers answer "yes" or "no" questions about whether they have complied with each standard. According to this official, companies generally indicate that they are meeting all of the standards. In addition to inconsistent monitoring of online provider requirements, IRS has not recently assessed the information security risks among all third-party provider types. IRS initially implemented the Authorized e-file Provider monitoring program described above only for EROs because they presented the greatest risk for fraud, according to an EPSS official. However, IRS's monitoring practices and the associated IRM section have not been updated since 2011, and still reflect IRS's initial assumption that EROs present the greatest risk for fraud among the different provider types. Additionally, while IRS assessed the security and privacy risks of tax software providers, the assessment did not compare these risks to those presented by EROs. In 2009, we recommended that IRS assess the extent to which the reliance on tax software creates significant risks to tax administration, including the security and privacy of taxpayer information. IRS agreed with our recommendation and in 2011 received the results of a third-party risk assessment to determine, in part, the security and privacy risks presented by large and small software providers. The assessment found that security presented the biggest overall risk among the areas reviewed—security of information, privacy of information, accuracy of returns, and reliability of systems—due, in part, to security being the least adequately controlled risk area by small software providers. This assessment was not designed to review the risks for other Authorized e-file Provider types, such as EROs. Our Fraud Risk Framework requires agencies to plan regular fraud risk assessments and suggests tailoring those assessments to the program. Effective managers plan to conduct such assessments at regular intervals and when there are changes to the program or operating environment, such as changes in technology that could result in increased security incidents. As part of a risk assessment, managers may examine the suitability of existing fraud controls. Such examination can help managers identify areas where existing control activities are not suitably designed or implemented to reduce risks to a tolerable level.
By conducting a risk assessment for the Authorized e-file Provider program and identifying the provider types that present the greatest risks for fraud, IRS can better determine whether changes to the monitoring program are needed for each provider type. If the agency determines that changes are needed, updating documentation of monitoring activities—such as the IRM, internal guidance, and job aids, along with staff training—would provide IRS with better assurance that the greatest risk areas are addressed appropriately. IRS Uses Security Incident Information to Protect Taxpayers but Does Not Have a Complete Picture of the Size and Scope of Incidents IRS Uses Security Incident Reports to Track Taxpayer Accounts and Analyze Trends to Protect Revenue Multiple offices within IRS use information on security incidents to track trends in fraud schemes, which helps them to protect taxpayer information and to prevent the filing of fraudulent tax returns. For example, when Stakeholder Liaison receives reports about a security incident involving a paid preparer, staff collect additional information about the incident, including the cause of the incident and whether taxpayer information was compromised. Stakeholder Liaison can analyze the data to show geographical information, like the states most affected by breaches; the paid preparer types most affected by incidents; and the method of attack of incidents; among other things, according to a Stakeholder Liaison official. This official said that Stakeholder Liaison also uses this information to produce daily management reports to keep leadership apprised of the number of incidents reported daily, as well as the cumulative number of affected preparers and taxpayers during the year and a comparison to data from the previous year. Return Integrity and Compliance Services (RICS) officials use a risk-based method to determine the necessary mitigation and treatment plans following a security incident. For example, RICS officials might assess a security incident as high risk, meaning that a taxpayer's personal, financial, and tax data were compromised. For such an incident, RICS officials place the affected Taxpayer Identification Numbers (TINs) on Dynamic Selection Lists—lists of TINs affected in breaches and at risk of tax-related identity theft—to monitor future tax return filings for potential fraud. On the other hand, for low-risk incidents—incidents where fraudsters may have accessed information like street address or date of birth but not Social Security numbers—RICS may compare victims' current tax returns with prior returns to look for differences that could indicate possible identity theft. According to RICS officials, the office also runs individuals' information through fraud filters to help identify returns with a high likelihood of identity theft. Criminal Investigation's (CI) Cybercrimes unit shares security incident information with the field offices where the incident occurred, according to CI officials. Area coordinators evaluate the incident information and determine whether a criminal case should be developed. If so, coordinators develop a fraud scheme package and provide it to the agent assigned to the case to help identify other potential incidents resulting from similar schemes, according to CI officials.
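The Dynamic Selection List mechanism described above amounts to screening incoming filings against a watch list of compromised TINs. The Python sketch below illustrates that idea only; the TIN values, function names, and routing outcomes are invented for illustration and do not reflect IRS's actual lists or fraud filters.

```python
# Hypothetical illustration of screening returns against a Dynamic
# Selection List; not IRS's implementation. TINs reported in high-risk
# incidents are added to the list, and later filings under those TINs
# are routed to identity-theft review rather than normal processing.
dynamic_selection_list = set()

def flag_tin(tin: str) -> None:
    """Add a TIN compromised in a high-risk incident to the list."""
    dynamic_selection_list.add(tin)

def route_return(tin: str, return_id: str) -> str:
    """Decide how an incoming e-filed return should be handled."""
    if tin in dynamic_selection_list:
        return f"{return_id}: hold for identity-theft review"
    return f"{return_id}: continue normal processing"

flag_tin("000-11-2222")                       # hypothetical TIN
print(route_return("000-11-2222", "R-1001"))  # held for review
print(route_return("000-33-4444", "R-1002"))  # processed normally
```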
IRS May Not Have a Complete Picture of Third-Party Provider Security Incidents Because Its Reporting Requirements Are Not Comprehensive IRS has primarily tracked information on security incidents in its RICS Incident Management Database since December 2016, according to RICS officials. Security incidents can be categorized in a number of ways, such as when hackers infiltrate third-party providers' systems. Between 2017 and 2018, there was an overall decrease in the number of reported high-risk security incidents that led to confirmed identity theft victims across all types of security incidents. However, the number of reported security incidents from third-party providers increased about 50 percent during this same period, as shown in table 4. In turn, the number of taxpayers affected by the security incidents at third-party providers also increased. However, IRS does not have comprehensive information about the incidents because, in part, its reporting requirements do not apply to all third-party providers. For example, the Authorized e-file Provider program requires only online providers to report security incidents to IRS as soon as possible but no later than the next business day after confirmation of the incident. The information that online providers are to report includes details about the security incident and the affected taxpayers' accounts. If paid preparers or EROs experience a security incident at their place of business, they are not required to report any information to IRS about the incident; instead, IRS encourages paid preparers to share security incident information with IRS through Stakeholder Liaison. Additionally, IRS cannot track incidents that third-party providers do not report, according to IRS officials. IRS officials and industry representatives stated that some third-party providers may not report security incidents for fear of punishment from IRS (e.g., penalties, sanctions, or removal from the Authorized e-file Provider program) or negative impacts to their business reputation. IRS has other voluntary reporting mechanisms for tax software providers or other members of the tax preparation industry. For example, members of the Security Summit can use a voluntary reporting mechanism to submit information to RICS. Some members of the Security Summit can use an additional voluntary reporting system in the ISAC online platform, which sends alerts about security incidents to others in the platform. IRS also recently revised some of its requirements that could affect paid preparers' reporting of security incidents while using other IRS services. For example, in October 2018, the agency updated its user agreement for e-Services, a suite of web-based tools that allow paid preparers, among others, to complete transactions online with IRS. This update included a requirement to report any unauthorized use of the e-Services account or any other breach of security as soon as users become aware of the incident. According to Internal Control Standards, agencies should use quality information, both internal and external, to achieve objectives. For example, agencies should obtain data on a timely basis so that they can be used for effective monitoring. Additionally, recommended controls in NIST Special Publication 800-53 require reporting of suspected security incidents by federal agencies and their subordinate organizations.
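The next-business-day reporting deadline for online providers described above can be made concrete with a short sketch. The record fields and helper functions below are hypothetical illustrations, not an IRS reporting schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Minimal sketch of the next-business-day rule for online providers;
# field names are hypothetical and do not reflect an IRS system.
@dataclass
class IncidentReport:
    confirmed_on: date       # date the provider confirmed the incident
    reported_on: date        # date the report reached IRS
    affected_accounts: int
    details: str

def next_business_day(d: date) -> date:
    d += timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def is_timely(report: IncidentReport) -> bool:
    """True if the report arrived no later than the next business day."""
    return report.reported_on <= next_business_day(report.confirmed_on)

# An incident confirmed on a Friday and reported the following Monday
# still meets the deadline.
report = IncidentReport(date(2018, 4, 6), date(2018, 4, 9), 120, "account takeover")
print(is_timely(report))  # True
```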
Though IRS conducts a yearly review of requirements for Authorized e-file Providers to find needed updates, the incident reporting requirement has not been identified as needing updates since 2010, according to a senior Wage and Investment official. This is another instance where centralized leadership could have identified a need to update the incident reporting requirements. According to an EPSS official, IRS originally applied this incident reporting requirement to only online providers because these providers stored a large amount of data and carried the highest risk of data loss. Similarly, IRS officials said the reporting requirement for online providers does not apply to providers of tax software used by paid preparers because those software providers do not collect or store taxpayer information on their systems. Instead, the taxpayer information is stored on a paid preparer's hard drive. If a security incident occurred at the business of a paid preparer who uses tax software, then the preparer, not the tax software provider, would report that incident to IRS, according to IRS officials. While voluntary reporting mechanisms and updating of user agreements for IRS's website are important steps, without a clear and standardized reporting requirement for all types of providers, IRS will not have assurance that third-party providers consistently report their security incidents in a timely manner. IRS needs this information to better understand the size and scope of information security incidents, which it uses to protect compromised individual taxpayer accounts and prevent identity theft refund fraud. IRS Has Not Documented Processes for Third-Party Provider Security Incident Reporting or Data Storage Security incident information can be reported to IRS through various channels from the public to IRS offices, and the data are ultimately stored in the RICS Incident Management Database regardless of the office that initially received the information. Figure 5 depicts the flow of information from the public to IRS offices, as well as the flow of information between the offices and to IRS databases. While RICS has documented its information intake, tracking, and storage processes in the RICS Incident Management Plan, IRS does not have a comprehensive document that describes these processes across the different IRS offices. For example, incident information submitted to EPSS and Stakeholder Liaison eventually moves to RICS to be tracked in the Incident Management Database. Additionally, RICS officials told us that they track each of these reported incidents separately and that the main repository should not contain duplicate reports of the same incidents, though multiple databases may contain information about the same incident. RICS officials added that, before a new incident is added to the Incident Management Database, staff conduct a query in the database to ensure that the incident was not already added. However, IRS has not documented how the security incident data processes should flow, relying instead on informal communication efforts of the staff and the assumption that staff know where the data belong and will provide that information to the appropriate offices. Internal Control Standards state that management should develop and maintain documentation of its internal control system and implement control activities through policies.
The standards also state that documentation of responsibilities through policies and periodic review of activities can contribute to the effectiveness of implementation. The limited nature of the documentation may be due to the newness of some of these data processes. For example, a Stakeholder Liaison official told us that the data intake process for Stakeholder Liaison and entry into the Return Preparers Database started at the beginning of 2018. Prior to that, a Stakeholder Liaison manager stored information about security incidents in an individual email account because there was no mechanism for storing the data in a systematic manner. Further, a senior Wage and Investment Division official stated that the processes to intake, store, and share the data among the different IRS offices continue to evolve, and that documents describing these practices may quickly become obsolete. While these processes may still be evolving, documenting them can help IRS combat identity theft by helping to ensure that security incidents are properly recorded and monitored in the IRS systems. Documenting the processes may also allow for more complete data, as the data would follow a specific routing and review process. This would reduce the risk of the data not following the various channels they go through now. Such documentation can also help IRS retain organizational knowledge, mitigate the risk of having that knowledge limited to a few personnel, and ensure that the agency implements these processes effectively in the future. Conclusions Tens of millions of taxpayers use third-party providers, such as paid preparers or tax software providers, to comply with their federal income tax obligations. It is critical that taxpayers' information, which includes personally identifiable and other sensitive information, be kept secure to maintain public confidence and avoid data breaches that expose that information for use by fraudsters. Identity theft is a constantly evolving crime, but IRS's information security standards for third-party providers' systems have not kept pace with the changing environment. One reason for this is that IRS lacks the explicit authority to require minimum standards for the systems of paid preparers and Authorized e-file Providers. Without this authority, Congress and IRS have limited assurance that the processes used to collect, store, and submit taxpayers' returns adequately protect taxpayers' information against electronic data breaches and potential tax refund fraud. Modifying its Authorized e-file Provider program requirements to explicitly state the elements of an information security program as required under the FTC Safeguards Rule would help IRS ensure that Authorized e-file Providers are aware of, and comply with, the rule. Doing so could also help these providers better protect taxpayers' information. Additionally, IRS is missing an opportunity to capitalize on the achievements of Security Summit members to help ensure that tax software providers have an equivalent level of standards in place to adequately protect taxpayer information. The lack of centralized leadership at IRS with responsibility for coordinating all aspects of protecting taxpayer information held by third-party providers has enabled missed opportunities. Such designated leadership could help ensure greater collaboration between the various IRS offices that have roles to play in this area.
This leadership could also have ensured that security standards for online providers in the Authorized e-file Provider program were updated. Instead, IRS introduced these standards in 2010 and has not subsequently updated them. Incorporating cybersecurity into its monitoring visits for EROs would provide IRS with greater assurance that EROs' systems are adequately protecting taxpayers' information from an increased risk of both tax-related and non-tax-related identity theft. Further, ensuring that IRS is using a risk-based approach to review all types of Authorized e-file Providers would provide assurance that the greatest risk areas of fraud are addressed appropriately. Finally, IRS's efforts to protect taxpayer information at third-party providers would also be strengthened by greater consistency in requirements across provider types for reporting security incidents. Greater consistency would help to ensure IRS is obtaining timely and reliable information from third-party providers so IRS can better understand the size and scope of security incidents—data it uses to protect compromised individual taxpayer accounts and prevent identity theft refund fraud. Documenting the intake, storage, and sharing of the security incident data would also help IRS ensure that security incidents are properly recorded and monitored.

Matter for Congressional Consideration

Congress should consider providing IRS with explicit authority to establish security requirements for the information systems of paid preparers and Authorized e-file Providers. (Matter for Consideration 1)

Recommendations for Executive Action

We are making the following eight recommendations to IRS.

The Commissioner of Internal Revenue should develop a governance structure or other form of centralized leadership, such as a steering committee, to coordinate all aspects of IRS's efforts to protect taxpayer information while at third-party providers. (Recommendation 1)

The Commissioner of Internal Revenue should modify the Authorized e-file Provider program's requirements to explicitly state the required elements of an information security program as provided by the FTC Safeguards Rule. (Recommendation 2)

The Commissioner of Internal Revenue should require that all tax software providers that participate in the Authorized e-file Provider program follow the subset of NIST Special Publication 800-53 controls that were agreed upon by the Security Summit participants. (Recommendation 3)

The Commissioner of Internal Revenue should regularly review and update the security requirements that apply to tax software providers and other Authorized e-file Providers. (Recommendation 4)

The Commissioner of Internal Revenue should update IRS's monitoring programs for electronic return originators to include techniques to monitor basic information security and cybersecurity issues. Further, IRS should make the appropriate revisions to internal guidance, job aids, and staff training, as necessary. (Recommendation 5)

The Commissioner of Internal Revenue should conduct a risk assessment to determine whether different monitoring approaches are appropriate for all of the provider types in IRS's Authorized e-file Provider program. If changes are needed, IRS should make appropriate revisions to the monitoring program, internal guidance, job aids, and staff training, as necessary. (Recommendation 6)

The Commissioner of Internal Revenue should standardize the incident reporting requirements for all types of Authorized e-file Providers. (Recommendation 7)
The Commissioner of Internal Revenue should document the intake, storage, and sharing of security incident data across IRS offices. (Recommendation 8)

Agency Comments and Our Evaluation

We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. In its written comments, which are summarized below and reproduced in appendix III, IRS agreed with three of the recommendations and disagreed with five of the recommendations. IRS also provided technical comments, which we incorporated as appropriate. IRS agreed with our recommendations to regularly review and update the security requirements that apply to tax software providers and other Authorized e-file Providers; standardize the incident reporting requirements for all types of Authorized e-file Providers; and document intake, storage, and sharing of the security incident data across IRS offices. IRS did not provide additional detail on the actions it plans to take to address these recommendations. IRS disagreed with five of our recommendations, generally citing for all of them the lack of clear and explicit authority it would need to establish security requirements for the information systems of paid preparers and others who electronically file returns.

For our recommendation to develop a governance structure or other form of centralized leadership, IRS stated it would require statutory authority that clearly communicates its authority to establish security requirements for the information systems of paid preparers and others who electronically file tax returns. Further, IRS stated that without such authority, implementing the recommendation would be an inefficient, ineffective, and costly use of resources. We disagree that convening a governance structure or other centralized form of leadership would require additional statutory authority or be inefficient, ineffective, and costly. As discussed in the report, IRS has seven different offices across the agency working on information security-related activities that could benefit from centralized oversight and coordination, such as updating existing standards, monitoring Authorized e-file Provider program compliance, and tracking security incident reports. We continue to believe that establishing a governance structure would help provide this level of leadership, coordination, and collaboration to IRS's current efforts and therefore help alleviate the missed opportunities that we identified in the report, such as updating outdated security standards. Further, IRS could choose a leadership mechanism that it determines to be low cost and most efficient to gain a higher degree of coordination. Without this structure, it is unclear how IRS will adapt to changing security threats in the future and ensure those threats are mitigated.

In our draft report, we made a recommendation that IRS modify the Authorized e-file Provider program to be consistent with the FTC Safeguards Rule. In its response, IRS stated that it did not have explicit authority to establish policy consistent with the FTC Safeguards Rule or enforce compliance with it. However, IRS clearly states in its Revenue Procedure 2007-40 that violations of the provisions of the Gramm-Leach-Bliley Act and the implementing rules and regulations promulgated by FTC are considered violations of the revenue procedure and may subject an Authorized e-file Provider to penalties or sanctions.
Therefore, we believe IRS has already incorporated compliance with the FTC Safeguards Rule as part of its Authorized e-file Provider program. The intent of this recommendation is not to suggest that IRS develop new policies related to the elements of the Safeguards Rule. Instead, we believe IRS has the opportunity to explicitly state in its requirements for Authorized e-file Providers the elements of an information security program, as listed in the Safeguards Rule. This action will help third-party providers become aware of their specific legal obligations to protect taxpayer data under the Gramm-Leach-Bliley Act. As such, we clarified text in the body of the report and the text of the recommendation to better reflect our intent.

For our recommendation to require all tax software providers that participate in the Authorized e-file Provider program to follow the subset of NIST Special Publication 800-53 controls that were agreed upon by the Security Summit participants, IRS stated that it does not have the statutory authority for such a requirement. However, under its existing authority, IRS has already established some information security requirements for a portion of tax software providers—those that are online providers. IRS has the opportunity to further establish standards for all tax software providers by incorporating the subset of NIST controls into its Authorized e-file Provider program, which would capitalize on the work it has completed with the Security Summit members. We continue to believe that without setting and requiring the same security standards for all tax software providers, IRS does not have assurance that these providers have an equivalent level of standards in place to adequately protect taxpayer information.

For our recommendation that IRS update its monitoring programs for electronic return originators, IRS stated it does not have the statutory authority to establish policy on information security and cybersecurity issues, nor to enforce compliance if noncompliance is observed. However, as we reported, IRS already monitors physical aspects of information security, which goes beyond existing Authorized e-file Provider program requirements. Since most individuals now file tax returns electronically, having checks for physical security without comparable checks for cybersecurity does not address current risks, as cyber criminals and fraudsters are increasingly attacking third-party providers, as IRS has noted. We believe that incorporating some basic cybersecurity monitoring into the visits would provide IRS the opportunity to help inform the most vulnerable third-party providers of additional guidance and resources.

For our recommendation to conduct a risk assessment to determine whether different monitoring approaches are appropriate for all of the provider types in the Authorized e-file Provider program, IRS stated that changes to the monitoring program would not have value to the overall program performance absent statutory authority. We disagree with this conclusion. As discussed in the report, IRS does not currently systematically monitor the existing security requirements for online providers, nor does it conduct information security or cybersecurity monitoring for all types of Authorized e-file Providers.
We believe that IRS could conduct a risk assessment of its current monitoring program within existing statutory authority and make necessary changes that would provide better assurance that all types of providers are receiving some level of oversight and that IRS is addressing the greatest risk areas appropriately.

We are sending copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies of the report to the Commissioner of Internal Revenue and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or Lucasjudyj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) assess what is known about the taxpayer information security requirements for the systems used by third-party providers, (2) describe the Internal Revenue Service's (IRS) outreach efforts to third-party providers on the requirements, (3) assess IRS's monitoring processes for ensuring third-party providers' compliance with the requirements, and (4) assess IRS's requirements for third-party provider security incident reporting and how IRS uses that information.

To assess what is known about the taxpayer information security requirements for the systems used by third-party providers, such as paid preparers and tax software providers, we reviewed applicable laws and regulations such as the Gramm-Leach-Bliley Act and relevant portions of the Internal Revenue Code, including 26 U.S.C. § 6011. This section of the Internal Revenue Code prescribes the filing of income tax returns, as well as the electronic filing requirements for returns prepared by paid preparers. We reviewed 26 U.S.C. § 7803, which provides that the IRS Commissioner has the authority to administer and manage the execution and application of tax laws, while balancing the rights of, among other things, confidentiality and privacy of the taxpayer. We also reviewed the Federal Trade Commission's (FTC) Safeguards Rule, which requires financial institutions, including tax return preparers, affiliates, and service providers, to ensure the security and confidentiality of customer records and information. This rule applies to those who are significantly engaged in providing financial products or services that include preparation and filing of tax returns. We reviewed IRS Revenue Procedure 2007-40, which informs Authorized e-file Providers of their obligations to IRS, taxpayers, and other participants in the Authorized e-file Provider program and outlines the rules governing filing electronically with IRS. We reviewed IRS publications describing the obligations in IRS's Revenue Procedure 2007-40 and the requirements of the Authorized e-file Provider program, including IRS Publication 3112, IRS e-file Application and Participation, and IRS Publication 1345, Handbook for Authorized IRS e-file Providers of Individual Income Tax Returns.
We assessed these documents to determine if the requirements for third-party providers were incorporating the laws and following leading practices as outlined by Standards for Internal Control in the Federal Government (Internal Control Standards) and A Framework for Managing Fraud Risk in Federal Programs (Fraud Risk Framework). The Fraud Reduction and Data Analytics Act of 2015, and Office of Management and Budget guidance implementing its provisions, affirm that agencies should adhere to the leading practices identified in our Fraud Risk Framework. We also compared the standards published in Publication 1345 for online providers to the National Institute of Standards and Technology (NIST) Special Publication 800-52: Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations to determine if the standards were following leading practices. We reviewed the subset of NIST Special Publication 800-53: Security and Privacy Controls for Federal Information Systems and Organizations controls that the Security Summit members agreed to voluntarily implement. We also reviewed other IRS publications that provide third-party providers with descriptions of leading practices in keeping taxpayer information safe, including IRS Publication 4557, Safeguarding Taxpayer Data: A Guide for Your Business; IRS Publication 4600, Tips for Safeguarding Taxpayer Data; IRS Publication 5293, Protect Your Clients; Protect Yourself: Data Security Resource Guide for Tax Professionals; and IRS Publication 5294, Protect Your Clients; Protect Yourself: Data Security Tips for Tax Professionals. In assessing these documents, we identified the extent of consistency among publications. We interviewed IRS officials who were responsible for various aspects of IRS's security requirements for paid preparers and tax software providers. We conducted semistructured interviews with the following 10 industry groups and related organizations that represented a cross section of the tax preparation industry to determine their knowledge about existing information security requirements:

American Coalition for Taxpayer Rights
American Institute of Certified Public Accountants
Council for Electronic Revenue Communication Advancement
Electronic Tax Administration Advisory Committee
Federation of Tax Administrators
National Association of Tax Professionals
National Society of Tax Professionals

We reviewed IRS organization documents, including organizational charts and associated Internal Revenue Manual (IRM) provisions for the offices that have responsibilities for securing taxpayer information. We reviewed the stated missions of the offices of Electronic Products and Services Support (EPSS); Small Business/Self-Employed; Return Integrity and Compliance Services (RICS); Criminal Investigation (CI); Return Preparer Office; Office of Professional Responsibility; Cybersecurity; and Stakeholder Liaison. We also interviewed officials from these offices to determine how they coordinated the responsibilities for overseeing the security of taxpayer data among the offices. We compared IRS activities to the Internal Control Standards that identify controls that help an entity adapt to shifting environments, evolving demands, changing risks, and new priorities.

To describe the outreach efforts IRS takes for third-party providers, we reviewed IRS outreach documents such as publications, news releases, social media posts, emails, webinars, and online education campaigns.
We interviewed IRS officials and conducted semistructured interviews with 10 industry groups and related organizations to determine IRS's communication efforts related to security standard enforcement and identify potential challenges that IRS faces in its outreach.

To assess IRS's monitoring processes for ensuring third-party providers' compliance with information security requirements, we reviewed the agency's monitoring procedures for the Authorized e-file Provider program per Rev. Proc. 2007-40; IRS Publication 3112, IRS e-file Application and Participation; and IRS Publication 1345, Handbook for Authorized IRS e-file Providers of Individual Income Tax Returns. We reviewed the IRM section related to Monitoring the IRS e-file Program, monitoring checklists, and related job aids to determine the extent to which monitoring practices address security requirements in IRS Publication 1345. We assessed IRS's monitoring efforts against our Fraud Risk Framework's principles to combat fraud in a strategic, risk-based manner. We also interviewed the IRS officials responsible for overseeing the monitoring program.

To assess IRS's requirements for third-party provider reporting of security incidents and how IRS uses that information, we reviewed IRS guidance about security incident reporting requirements. We analyzed IRS data on the number and type of security incidents tracked in the RICS Incident Management Database from 2017 and 2018, the only data available following its creation in December 2016. We interviewed RICS officials about the quality of data in this database and determined that the data were sufficiently reliable to describe a minimum count of security incidents. Specifically, we asked about the responsibilities of officials collecting and using the data, the procedures in place to capture all reported data, and controls for ensuring the accuracy of the data and resolving any errors, among other things. We reviewed IRS guidance and program user agreements to determine security incident reporting requirements for third-party providers. We reviewed IRS process documentation and interviewed IRS officials from EPSS, RICS, CI, Return Preparer Office, Cybersecurity, and Stakeholder Liaison to determine the collection, routing, and storage processes for security incident information. We assessed IRS's processes and documentation practices against leading practices outlined in NIST Special Publication 800-53 and Internal Control Standards. We interviewed IRS officials to identify ways that IRS uses this security incident information. We conducted semistructured interviews with the 10 industry groups and related organizations listed above to determine their knowledge about existing security incident reporting requirements.

We conducted this performance audit from November 2017 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Security and Privacy Standards for Online Providers

The Internal Revenue Service (IRS) mandated that online providers adhere to six privacy, security, and business standards as part of the Authorized e-file Provider program, as listed in table 6.
These standards have not been updated since they were developed in 2010.

Appendix III: Comments from the Internal Revenue Service

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Jeff Arkin (Assistant Director), Robyn Trotter (Analyst-in-Charge), Christina Bixby, Alyssia Borsella, Mark Canter, Jehan Chase, Larry Crosland, Ann Czapiewski, James Andrew Howard, Michele Fejfar, and Robert Gebhart made key contributions to this report.
Why GAO Did This Study

Third-party providers, such as paid tax return preparers and tax preparation software providers, greatly impact IRS's administration of the tax system. If these third parties do not properly secure taxpayers' personal and financial information, taxpayers will be vulnerable to identity theft refund fraud and their sensitive personal information will be at risk of unauthorized disclosure. IRS estimates that it paid out at least $110 million in identity theft tax refund fraud during 2017, and at least $1.6 billion in identity theft tax refund fraud during 2016. GAO was asked to review IRS's efforts to track, monitor, and deter theft of taxpayer information from third parties. Among other things, this report assesses what is known about the taxpayer information security requirements for the systems used by third-party providers, IRS's processes for monitoring compliance with these requirements, and IRS's requirements for third-party security incident reporting. GAO analyzed IRS's information security requirements, standards, and guidance for third-party providers and compared them to relevant laws, regulations, and leading practices, such as NIST guidance and Standards for Internal Control in the Federal Government. GAO reviewed IRS's monitoring procedures and its requirements and processes for third-party reporting of security incidents, and compared them to Internal Control Standards and GAO's A Framework for Managing Fraud Risk in Federal Programs. GAO also interviewed IRS and tax industry group officials.

What GAO Found

Federal law and guidance require that the Internal Revenue Service (IRS) protect the confidentiality, integrity, and availability of the sensitive financial and taxpayer information that resides on its systems. However, taxpayer information held by third-party providers—such as paid tax return preparers and tax preparation software providers—generally falls outside of these requirements, according to IRS officials. In 2018, about 90 percent of individual taxpayers had their tax returns electronically filed by paid preparers or used tax preparation software to prepare and file their own returns. IRS seeks to help safeguard electronic tax return filing for various types of third-party providers through requirements under its Authorized e-file Provider program. However, IRS's efforts do not provide assurance that taxpayers' information is being adequately protected.

Paid Preparers. IRS has not developed minimum information security requirements for the systems used by paid preparers or Authorized e-file Providers. According to IRS's Office of Chief Counsel, IRS does not have the explicit authority to regulate security for these systems. Instead, the Internal Revenue Code gives IRS broad authority to administer and supervise the internal revenue laws. The Department of the Treasury has previously requested additional authority to regulate the competency of all paid preparers; GAO has also suggested that Congress consider granting IRS this authority. Congress has not yet provided such authority. Neither the Department of the Treasury request nor the GAO suggestion included granting IRS authority to regulate the security of paid preparers' systems. Having such authority would enable IRS to establish minimum requirements. Further, having explicit authority to establish security standards for Authorized e-file Providers' systems may help IRS better ensure the protection of taxpayers' information.
Tax Software Providers. As part of a public-private partnership between IRS and the tax preparation industry, 15 tax software providers voluntarily adhere to a set of about 140 information security controls developed using guidance from the National Institute of Standards and Technology (NIST). However, these controls are not required, and these providers represent only about one-third of all tax software providers. Additionally, IRS established six security, privacy, and business standards for providers of software that allows individuals to prepare their own tax returns (as opposed to software that paid preparers use). However, IRS has not substantially updated these standards since 2010, and they are, at least in part, outdated. For example, IRS cites an outdated encryption standard that NIST recommends not using due to its many known weaknesses. A key factor contributing to missed opportunities to address third-party cybersecurity is IRS's lack of centralized leadership. Consequently, IRS is less able to ensure that third-party providers adequately protect taxpayers' information, which may result in identity theft refund fraud.

IRS monitors compliance with its electronic tax return filing program requirements for those paid preparers who electronically file returns; however, IRS's monitoring has a limited focus on cybersecurity issues. For example, the monitoring techniques largely focus on physical security (e.g., locked filing cabinets) rather than verifying that preparers have an information security policy consistent with NIST-recommended controls. Without effective monitoring of cybersecurity controls, IRS has limited assurance that those paid preparers' systems have adequate controls in place to protect clients' data.

IRS recently began collecting information on high-risk security incidents, such as hackers infiltrating third-party provider systems. Reported incidents increased from 2017 to 2018, the only years for which IRS has data. However, IRS does not have a full picture of the scope of incidents because of inconsistent reporting requirements, including no reporting requirements for paid preparers.

What GAO Recommends

GAO suggests that Congress consider providing IRS with explicit authority to establish security requirements for paid preparers' and Authorized e-file Providers' systems. GAO is also making eight recommendations, including that the Commissioner of Internal Revenue:

Develop a governance structure or other form of centralized leadership to coordinate all aspects of IRS's efforts to protect taxpayer information while at third-party providers.
Require all tax software providers to adhere to prescribed information security controls.
Regularly review and update security standards for tax software providers.
Update IRS's monitoring programs to include basic cybersecurity issues.
Standardize incident reporting requirements for all types of third-party providers.

IRS agreed with three recommendations, including the above recommendations to regularly review and update security standards for tax software providers, and standardize incident reporting requirements. IRS disagreed with five recommendations—including the other three listed above—generally citing the lack of clear and explicit authority it would need to establish security requirements for the information systems of paid preparers and Authorized e-file Providers. GAO believes that IRS can implement these recommendations without additional statutory authority.
Background

The military services preposition stocks ashore and afloat so that DOD is able to respond to multiple scenarios during the initial phases of an operation until the supply chain has been established. The military services maintain their own configurations and types of equipment and stocks to support their respective prepositioned stock programs:

The Army stores sets of combat brigade equipment, supporting supplies, and other stocks at land sites in several countries and aboard ships.
The Marine Corps stores equipment and supplies for its forces aboard ships stationed around the world and at land sites in Norway (see fig. 1).
The Navy's prepositioned stock program provides construction support, equipment for off-loading and transferring cargo from ships to shore, and expeditionary medical facilities to support the Marine Corps.
The Air Force's prepositioned stock programs include assets such as direct mission support equipment for fighter and strategic aircraft as well as base operating support equipment to provide force, infrastructure, and aircraft support during wartime and contingency operations.

Prepositioned stocks are employed by the geographic combatant commanders, who have the authority to, among other things, organize and employ forces assigned to them as they deem necessary to accomplish assigned missions. DOD apportions the services' prepositioned stocks among the geographic combatant commands according to joint guidance, and the afloat prepositioned stocks may be apportioned to more than one geographic combatant command. Requirements for prepositioned stocks are developed based on an approved operation plan. The services determine how best to meet the needs of the geographic combatant commanders, which may include the use of prepositioned stocks. Geographic combatant commanders periodically review their plans, assess the risk to those plans, and report the results to the Chairman of the Joint Chiefs of Staff. The approval of the Secretary of Defense is generally required to use the prepositioned stocks.

DOD's Prepositioned Stock Implementation Plan Does Not Fully Address Four of the Seven Required Elements

DOD's implementation plan for managing prepositioned stocks includes information that addresses three of the seven required elements enumerated in section 321 of the NDAA for Fiscal Year 2014. However, the plan, which is 5 pages in length, lacks the detail needed to fully address the remaining four required elements (see table 1). The Assistant Secretary of Defense for Logistics and Materiel Readiness approved the implementation plan on August 29, 2017, but an official from the Office of the Secretary of Defense told us that DOD did not formally issue the plan. As such, it does not bear a DOD seal, signature, or issuance number, and most prepositioning service officials we spoke with were not aware of the plan's existence. As shown in the table, DOD fully addressed three elements in section 321 of the NDAA for Fiscal Year 2014 by describing how the department will achieve its vision, desired end state, and goals; assigning roles and responsibilities; and including a schedule for the implementation of the plan. However, we assessed the remaining elements as partially addressed or not addressed because DOD did not provide the required information in its implementation plan. Specifically:

Element two (comprehensive list of DOD's prepositioned materiel and equipment programs, partially addressed).
DOD’s implementation plan contains a list of the department’s prepositioned stock programs but that list omits one Army and eight Air Force prepositioned stock programs. In table 2, we compare the list of prepositioned stock programs that service officials provided to us with the list in DOD’s implementation plan. An official from the Office of the Secretary of Defense told us in April 2017 as part of a previous review that the department would not address this required element in the implementation plan because the department lists its prepositioned stock programs in its annual report to Congress. The implementation plan notes that DOD submits a comprehensive list of materiel to Congress each year per 10 U.S.C. §2229a. However, the annual report to Congress does not include a comprehensive list of the department’s prepositioned materiel and equipment programs. Rather, the annual report describes most of the department’s prepositioning programs but it omits one Army and six Air Force programs not listed in the implementation plan. Apart from the statutory requirement, Standards for Internal Control in the Federal Government state that management should communicate quality information externally so that external parties can help the entity achieve its goals and address risks. Without a comprehensive list of prepositioned materiel and equipment programs, DOD decision makers do not have all of the information they need to conduct effective oversight to assist the department in achieving its vision and goals. Element three (detailed description of how the plan will be implemented, partially addressed). The plan identifies policy, governance, and assessment initiatives through which the department aims to achieve its goals. However, the plan does not provide a detailed description of how the department will implement these three initiatives. Specifically, the plan states that DOD will identify policy gaps and revise or develop policy at all levels to better oversee prepositioned stocks; assigns the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Chairman of the Joint Chiefs of Staff, and the services to review and revise the current prepositioning policies as appropriate; and tasks the geographic combatant commanders to ensure that theater campaign plans provide clear guidance for service prepositioned stock planning. However, the plan does not provide details on when geographic combatant commanders should finalize clear guidance for service prepositioned stock planning or describe what the guidance should include. The plan also states that DOD will use a governance body composed of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Chairman of the Joint Chiefs of Staff; the geographic combatant commanders; and the services to provide joint oversight of the prepositioned stock programs. However, the plan is unclear as to whether the Global Prepositioned Materiel Capabilities Working Group is the governance body. For example, the plan states that DOD’s joint oversight framework will include the Global Prepositioned Materiel Capabilities Working Group but also assigns the group to present capability shortfalls and gaps to a governance body and implement governance body decisions. Further, the plan states that DOD will use current systems of record and established metrics to evaluate performance and measure prepositioned stock status and capability. 
However, these are existing mechanisms to monitor the services' programs and do not provide details on how the department will assess implementation of the plan itself. In 2017, a Joint Staff official told us that the implementation plan would be broad and high-level but would be more detailed than DOD's strategic policy. However, the plan's descriptions of the implementation initiatives lack sufficient detail on what the department will do to implement the plan. Apart from the statutory requirement, Standards for Internal Control in the Federal Government establish that objectives should be defined in specific and measurable terms that clearly define what is to be achieved. Without sufficient detail, DOD risks being unable to fully support the emphasis and high priority that the 2018 National Defense Strategy gives to prepositioned stocks.

Element six (description of the resources required to implement the plan, not addressed). DOD's implementation plan does not describe the resources required to implement the plan. Rather, the plan states that prepositioning programs are resourced and managed by the services in support of combatant command operational and training requirements. In describing the joint oversight framework, the plan states that DOD will leverage the processes that already exist to resource prepositioning stock requirements, including a focused effort on prepositioning as part of the annual planning, programming, budgeting, and execution process and the Joint Capabilities Integration and Development System. Officials from the Office of the Under Secretary of Defense for Policy told us when they were developing the implementation plan that they understood this element as requiring information about the resources, such as funding, personnel, and technology, that would be needed to implement the plan. However, the plan does not include a description of the funding, personnel, or technology resources required to implement the plan. DOD officials reported that the services received $1.2 billion for prepositioned stocks in fiscal year 2018 and that the annual report to Congress also contains further information on the funding. However, this information does not describe the resources needed to implement DOD's plan for prepositioned stocks as required by the NDAA for Fiscal Year 2014. Apart from the statutory requirement, Standards for Internal Control in the Federal Government establish that organizations should gather relevant operational and financial information for effective monitoring. Without a description of the resources required for implementation, decision makers do not have enough information to understand whether the department has sufficient resources to implement the plan.

Element seven (description of how the plan will be reviewed and assessed to monitor progress, partially addressed). DOD's implementation plan describes how the department will monitor the services' prepositioned stock capabilities and readiness but does not describe how the department will review and assess the plan itself. The plan states that the department will use standard metrics contained in the readiness reporting systems of record to monitor prepositioning capability and readiness of the services' programs. The plan assigns the services and combatant commands to assess prepositioned stock programs and posture annually and notes that all of the services are to begin reporting through the Defense Readiness Reporting System in the first quarter of fiscal year 2018.
However, similar to element three, the plan does not fully address the mandated element in that it does not describe how the department will review or assess the plan as a tool toward achieving the stated vision and desired end state. The plan directs the Global Prepositioned Materiel Capabilities Working Group—which is responsible for providing oversight of prepositioned stock programs and resolving joint issues concerning prepositioned stocks—to assess actions to ensure desired results are achieved but does not describe how it is to do this. Apart from the statutory requirement, Standards for Internal Control in the Federal Government state that management should monitor its internal controls to determine their effectiveness and make modifications as necessary. Without reviewing and assessing the implementation plan, DOD will be unable to determine whether the current plan is helping the department progress toward its identified vision and desired end state for its prepositioned stock programs.

DOD did not fully address the required elements in the implementation plan because, according to officials from the Office of the Secretary of Defense for Policy and the Joint Staff, implementation of the plan for managing prepositioned stock programs is the role of the services. According to these officials, DOD developed the implementation plan without details to allow the services to determine how to implement their respective prepositioned stock programs. Further, an official from the Under Secretary of Defense for Policy noted that DOD's annual report to Congress on prepositioned stock programs contains some of the required information. However, as discussed earlier, we found that the annual report to Congress does not include all of the information to satisfy the required elements, such as a comprehensive list of the department's prepositioned stock programs, and most service officials we spoke with were unaware of the plan. Moreover, section 321 of the NDAA for Fiscal Year 2014 required DOD to develop an implementation plan that contained all seven elements. Absent an implementation plan that fully addresses all of the elements required in the NDAA for Fiscal Year 2014 and aligns with internal control standards, DOD continues to provide incomplete information to Congress and stakeholders within the department on its prepositioned stock programs.

DOD Has Made Little Progress in Implementing a Joint Oversight Approach for Managing the Military Services' Prepositioned Stock Programs

DOD's Progress in Establishing a Joint Oversight Approach Has Been Slow

In 2011, Congress began mandating that DOD take steps to develop a joint strategy. We first reported in 2005, and again in 2011, that DOD lacked a joint oversight framework for the services' programs. However, as shown in figure 2, DOD has made limited progress in addressing congressional requirements and our reporting recommendations related to joint oversight of prepositioned stock programs.

DOD's Guidance on Joint Oversight Lacks Detail and Other Related Efforts Have Limitations

DOD's recent approach to joint oversight has been to update guidance and implement other related efforts.
For example, over the past 2 years, the Office of the Secretary of Defense and the Joint Staff have updated existing documents and issued new policy documents, each of which contains broad statements about the need for joint oversight of the services' prepositioned stock programs:

In December 2016, the Chairman of the Joint Chiefs of Staff updated the Logistics Planning Guidance for Prepositioned War Reserve Materiel. The document states that all service prepositioned stock programs require joint alignment with national priorities and global combatant command requirements across the full range of military operations. The instruction specifically directs the Joint Staff to develop a framework for joint oversight processes for synchronizing the services' prepositioning strategies to minimize duplicative efforts and to maximize efficiencies and return on investment for prepositioned stocks. However, this document does not detail how the Joint Staff is to develop this framework and does not describe the elements that are to be included as a part of an effective approach for joint oversight.

In March 2017, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics issued its Pre-Positioned War Reserve Materiel Strategic Policy. One of the purposes of the document is to establish joint oversight of the military services' pre-positioning efforts to maximize efficiencies across DOD. The directive assigns the Chairman of the Joint Chiefs of Staff the responsibility to develop a framework for synchronizing the services' prepositioning strategies to minimize duplicative efforts and to maximize efficiency and return on investment across DOD. However, similar to the instruction above, this document does not detail how the Joint Staff is to develop this framework or describe the elements that are to be included as a part of an effective approach for joint oversight.

In August 2017, the Assistant Secretary of Defense for Logistics and Materiel Readiness finalized DOD's implementation plan for managing prepositioned stock programs, which we discuss earlier in this report. The plan calls for improved DOD guidance that builds a framework and establishes joint oversight to synchronize service prepositioned stock programs with DOD's strategic guidance and priorities. The plan also calls for balancing service prepositioned stock programs to maximize effectiveness and efficiency while minimizing potential duplication across the department. However, in addition to the shortcomings of the plan that we discuss earlier in this report, the plan also does not provide a detailed discussion of what is needed to implement a department-wide framework for joint oversight. Further, although the plan states that clear policy is the foundation for joint oversight, the plan itself was not issued as formal guidance, and, as noted earlier, most prepositioning service officials we spoke with were not aware of the plan's existence.

DOD officials stated that they are continuing to update existing guidance as needed and that the services are responsible for implementing and managing their own prepositioned stock programs. DOD also provides Congress annual reports on the status of the services' prepositioned stock programs. However, in June 2015, we reported that the annual report provided inconsistent information among the services' programs using a nonstandardized definition of "prepositioned stocks" and that the annual report is not an effective tool for joint oversight.
We recommended that DOD develop a standardized definition of "prepositioning" for its annual report that is consistent with the definition used in the department's joint service guidance and apply this definition consistently to identify prepositioning materiel and equipment across DOD. DOD concurred with our recommendations. However, as of October 2018, DOD continued to use varying definitions of prepositioned stocks. A broad definition exists at the strategic level, but service-level definitions vary depending on what each service's prepositioned stock needs are. For example, the Army's definition of prepositioned stocks is based on the equipment and stocks required to meet the unique mission requirements of brigade combat team configurations. Within this definition, the Army includes equipment sets used for training units, but the other services do not. DOD officials stated that although there is a broad definition of prepositioned stocks, the services are responsible for managing their individual programs, including what equipment and stocks are a part of their respective programs based on their mission and needs.

Further, in 2008, DOD directed the establishment of the Global Prepositioned Materiel Capabilities Working Group and assigned it responsibility for addressing joint issues concerning war reserve materiel requirements and positioning. According to DOD's prepositioned stock implementation plan, the working group is DOD's focused joint oversight framework effort to execute the following for prepositioned stock programs:

analyze service and combatant commander input in the annual report,
identify potential opportunities to enhance efficiency and reduce operational risk,
present capability shortfalls/gaps to a governance body,
implement governance body decisions in coordination with the services and combatant commands, and
assess actions to ensure desired results are achieved.

According to DOD guidance, the Assistant Secretary of Defense for Sustainment and the Chairman of the Joint Chiefs of Staff appoint co-chairs for the working group, which will include members from the military services, the Defense Logistics Agency, and the combatant commands and will meet annually or more often, as needed. However, since 2011, our work has shown that DOD has been unable to ensure that the working group's activities include the full range of the tasks the group was established to perform because the working group lacks clear oversight and reporting relationships to authoritative bodies within DOD. We recommended that DOD assess the continued relevance of the Global Prepositioned Materiel Capabilities Working Group's assigned tasks, and DOD concurred. In September 2012, we reported that, according to DOD officials, the main responsibility of the working group had been to consolidate the services' individual submissions on their prepositioned stock programs into DOD's annual report for Congress, and that the working group had met only sporadically and had not yet addressed many of the duties specified in its charter. This continues to be the case. We found that, according to DOD officials, quarterly working group meetings were frequently postponed, attendance was not fully representative of all stakeholders, and the discussions during a September 2018 meeting we observed were primarily focused on gathering information from the services in preparation for the upcoming annual report to Congress and receiving service updates on the current status of their respective prepositioned stock programs.
DOD has not fully implemented joint oversight of the services' prepositioned stock programs because the department's guidance lacks detail and the department has not fully implemented requirements within other intended joint oversight efforts, such as the working group. Instead, DOD's approach has been for the services to manage their own respective programs with limited oversight at the department level. Standards for Internal Control in the Federal Government state that objectives should be defined in specific and measurable terms that clearly define what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. These standards also state that management should evaluate performance and hold individuals accountable for their internal control responsibilities. In addition, the NDAA for Fiscal Year 2014 mandates a framework for joint departmental oversight that reviews and synchronizes the military services' prepositioned stock strategies to minimize potentially duplicative efforts and maximize efficiencies in prepositioned stocks across DOD. Further, our prior work in the area of fragmentation, overlap, and duplication in the federal government has found that Congress and executive branch agencies have opportunities to contribute toward fiscal sustainability and act as stewards of federal resources. These include taking actions to reduce, eliminate, or better manage duplication, overlap, or fragmentation among federal agencies and programs; achieve cost savings; or enhance revenues. "Fragmentation" refers to those circumstances in which more than one organization within an agency is involved in the same broad area of national need and opportunities exist to improve service delivery.

Without strengthening joint oversight across the department, DOD continues to have a fragmented approach to its management of prepositioning programs, which has led to inefficiencies. For example, according to Joint Staff officials, there is no uniform process by which the services are reporting the readiness of prepositioned stock assets. Joint Staff officials also said that having a joint oversight approach would help them have a more complete picture of the readiness of prepositioned stocks across the services and help the services in developing more consistent reporting methods. Service officials we interviewed have also noted that there may be duplication among DOD's prepositioned stock programs resulting from limited joint oversight. For example, Navy officials stated that because each service utilizes medical assets as a part of its prepositioned stock programs, there is potentially duplicative medical equipment across the services, which may result in inefficiencies. Finally, our ongoing classified work is finding a lack of joint oversight related to DOD's management of prepositioned stocks in Europe. Although DOD's current approach relies on the services managing their own prepositioned stock programs and Title 10 requires the services to train, man, and equip their forces, without fully implementing joint oversight—including providing more detailed information on how to implement such an approach in its guidance and reviewing its other efforts, such as the working group—DOD will continue to experience fragmented management of its prepositioned stock programs. Further, given the lack of progress DOD has made in the past several years, providing information to Congress on its efforts in this area could help hold the department to greater accountability.
Conclusions

Prepositioned stocks play a pivotal role during the initial phases of an operation. We have reported for over a decade on the importance of DOD having a department-wide strategic policy and joint oversight of the services' prepositioned stock programs, and Congress has required that DOD take action in this area. DOD issued guidance, including an implementation plan, for managing prepositioned stock programs. However, the plan does not address all of the required elements enumerated in section 321 of the National Defense Authorization Act for Fiscal Year 2014, and DOD's various guidance documents include only broad direction for joint oversight. Without revising the implementation plan to have more complete information—including a full list of programs, a detailed description of how DOD will implement key initiatives, a description of the resources required, and an approach for monitoring and assessing the plan itself—the services will continue to operate their prepositioned stock programs with limited direction from DOD. Further, without fully implementing joint oversight, including providing more details in guidance and reviewing related efforts, and providing accountability to Congress on how the department will implement such oversight, DOD's current fragmented management approach will continue to exist, which creates the potential for duplication and inefficiencies among the services' prepositioned stock programs.

Recommendations for Executive Action

We are making the following six recommendations to DOD:

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in coordination with the Chairman of the Joint Chiefs of Staff, issue a more detailed implementation plan or include implementation plan details in identified formal department-wide guidance to include an updated list to provide quality information, including all of DOD's prepositioned materiel and equipment programs. (Recommendation 1)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in coordination with the Chairman of the Joint Chiefs of Staff, issue a more detailed implementation plan or include implementation plan details in identified formal department-wide guidance to include a detailed description of how DOD will implement the three key initiatives in the plan—policy, governance, and assessment—including clearly identifying what is to be achieved in these areas. (Recommendation 2)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in coordination with the Chairman of the Joint Chiefs of Staff, issue a more detailed implementation plan or include implementation plan details in identified formal department-wide guidance to include a description of the resources (i.e., relevant operational and financial information) required to implement the plan, including dollar and personnel amounts. (Recommendation 3)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in coordination with the Chairman of the Joint Chiefs of Staff, issue a more detailed implementation plan or include implementation plan details in identified formal department-wide guidance to include a description of how the department will review and assess the implementation plan for effectiveness. (Recommendation 4)
The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in coordination with the Chairman of the Joint Chiefs of Staff, take steps to fully implement joint oversight of DOD's prepositioned stock programs, including providing detailed information on how to implement such an oversight approach in department guidance and reviewing other joint oversight efforts, in order to synchronize the military services' prepositioned stock strategies to avoid fragmentation. (Recommendation 5)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment, in coordination with the Chairman of the Joint Chiefs of Staff, update Congress on the department's progress in joint oversight management in the prepositioned stock annual report or in a separate report. (Recommendation 6)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. In its comments, reproduced in appendix II, DOD concurred with each of the six recommendations and described planned actions it will take to implement them. We are providing copies of this report to the appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense for Sustainment; and the Chairman of the Joint Chiefs of Staff. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Offices That We Contacted

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, individuals who made key contributions to this report include Alissa H. Czyz, Assistant Director; Vincent M. Buquicchio; Pamela Davidson; Mae Jones; Cody Knudsen; and Yong Song.
Why GAO Did This Study

The military services preposition stocks worth billions of dollars at strategic locations around the world to provide U.S. forces with critical assets before supply chains have been established. In the 2018 National Defense Strategy, DOD emphasized that prepositioned stocks provide key logistical support for the department's missions. For many years, GAO has identified the potential for duplication among the military services' prepositioned stock programs due to a fragmented management approach and limited joint oversight within DOD. In the NDAA for Fiscal Year 2014, Congress required DOD to develop an implementation plan to manage prepositioned stock programs. DOD finalized its plan in August 2017. The act included a provision for GAO to review the plan and report on related issues. GAO assessed the extent to which (1) DOD's implementation plan addresses mandated reporting elements and (2) DOD has made progress in implementing a joint oversight approach for managing the services' prepositioned stock programs. GAO compared the implementation plan and DOD's joint oversight approach with congressional requirements and federal standards for internal control and interviewed DOD officials.

What GAO Found

The Department of Defense's (DOD) implementation plan for managing the military services' prepositioned stock programs does not fully address four of the seven elements required by the National Defense Authorization Act (NDAA) for Fiscal Year 2014. For example, DOD's plan did not include all information required by the NDAA, such as a complete list of the services' programs, information on how DOD would pursue key initiatives, or the resources required to implement the plan. DOD officials told GAO that they developed a plan without detail to allow the services to determine for themselves how to implement their programs. However, absent an implementation plan that fully addresses NDAA requirements, DOD continues to provide incomplete information to Congress on the department's prepositioned stock programs. Although GAO first reported on the issue in 2005 and Congress required DOD to take action in 2011, DOD has not fully implemented a joint oversight approach for managing prepositioned stock programs (see figure). DOD's recent approach for implementing joint oversight has been to update guidance documents and develop other efforts, such as a working group, but the services continue to manage their programs with little joint oversight. Without taking steps to fully implement joint oversight, including providing detailed information on how to achieve this in guidance and reviewing other efforts, DOD's management will continue to be fragmented and it risks duplication and inefficiencies among the services' programs. Moreover, updating Congress on DOD's progress would help assure decision makers that DOD intends to follow their direction in establishing joint oversight of prepositioned stock programs.

What GAO Recommends

GAO is making six recommendations, including that DOD provide information required by the NDAA, fully implement joint oversight of prepositioned stock programs, and update Congress on progress made. DOD concurred with all of the recommendations.
Background

History of DOE PILT Orders

DOE has issued PILT orders and policies to articulate DOE's procedures for carrying out the PILT provision of the Atomic Energy Act. DOE has changed its PILT procedures over time, which is reflected in multiple PILT orders and policies. These changes modified eligibility requirements for PILT, as well as how PILT payments were to be calculated.

In 1958, a predecessor agency to DOE issued the first order on PILT. Under the order, payments were to be based on the property value when the land was acquired and the tax rate of the year for which the payment was made; however, it allowed for exceptions to this rule. The 1958 order also allowed DOE to pay sites retroactively for years prior to their initial PILT application.

In 1987, DOE issued a new PILT order with changes to address budget constraints. The new order introduced more stringent requirements for new PILT applicants; prior PILT recipients were not subject to the new restrictions. The 1987 order included an eligibility requirement called a "gross benefits test." Under this requirement, payments were only allowed if the tax loss that was incurred exceeded the total value of all benefits derived from DOE's activities in the community. The 1987 order also included a provision that required payments to be reduced by the amount of tax benefits a community received from DOE's activities, and it eliminated retroactive payments to communities for the years prior to their application for PILT.

In 1993, DOE revised its policy in response to concerns about inequities arising from the application of the 1987 order. Specifically, the 1993 policy eliminated the gross benefits test and modified the provision that required payments to be reduced to account for tax benefits from DOE activities. In addition, it allowed payments to all communities to be based on the current tax rates and the value of the property in the condition in which it was acquired.

In 2003, DOE issued its most recent PILT order. This order updated responsibilities outlined in the 1993 policy and shifted some details to a separate policy document. It also eliminated a provision of the 1993 policy regarding special burdens payments.

PILT Process and Organizations

In order for a community to be eligible for PILT payments, it must submit to DOE an initial PILT application. DOE uses the one-time initial application to establish the eligibility of land at a certain community. Officials from the relevant DOE site and program offices, along with officials from DOE's Office of the Chief Financial Officer (CFO), Office of Management, and General Counsel at DOE headquarters, evaluate the application based on several criteria, such as: (1) the property must have been subject to taxation by local or state authorities immediately prior to being acquired by the federal government, (2) payments must not be retroactive, (3) payments should not be in excess of the taxes that would have been collected if the property had remained on the local tax rolls in the condition in which it was acquired, and (4) property values will be based on the highest and best use of the property based on the classification of the property when it was acquired. The CFO makes the final determination of whether to approve or reject the application.
Once an application is approved, DOE and the community enter into an intergovernmental assistance agreement, which emphasizes that payments are subject to the availability of funds and to legislative or administrative reductions and states that PILT is not an entitlement to the community. After establishing eligibility through the application process, each community submits to DOE an annual PILT invoice reflecting its requested PILT amount. These annual PILT invoices specify how much a community estimates its PILT payments should be, based on the community's calculations for a specific tax year. DOE site offices—offices at various DOE sites across the United States that report to DOE program offices—review each PILT invoice and determine whether enough funding is available to pay the amount requested in the PILT invoice. If a community's PILT invoice reflects a reclassification of the property to a new tax classification or category, a change in the amount of eligible land, or another significant change in the method of calculating the requested PILT payment by the community, the community must submit a new PILT application.

PILT processes involve multiple organizations, including several parts of DOE as well as local governments (see fig. 1). DOE headquarters—including the CFO, Office of Management, General Counsel, and program offices—is responsible for reviewing and approving initial or revised PILT applications. The CFO and program offices are responsible for ensuring that funding needed for PILT payments is included in budget requests. As of fiscal year 2019, the program offices involved with PILT include:

the Office of Environmental Management, which has the mission to clean up sites contaminated by nuclear weapons development and nuclear energy research;

the National Nuclear Security Administration, which is responsible for maintaining and enhancing the safety, reliability, and performance of the U.S. nuclear weapons stockpile;

the Office of Science, which manages national laboratories and supports research of physics, materials science, and chemistry;

the Office of Nuclear Energy, which focuses on research, development, and demonstration of nuclear reactors; and

the Office of Legacy Management, which is responsible for providing long-term surveillance and maintenance of DOE sites that have closed.

Under the current PILT order, DOE site offices are responsible for providing recommendations for any initial and revised PILT applications and for administering payments. These DOE site offices operate in their PILT recipient communities. DOE site offices are overseen by DOE program offices. For example, cleanup activities related to nuclear weapons production at the Hanford and Savannah River sites are overseen by the Office of Environmental Management, while the Argonne and Brookhaven National Laboratories are overseen by the Office of Science. The site of the now closed Fernald Plant is overseen by the Office of Legacy Management. At some sites, multiple communities at the site receive PILT payments. For example, three communities at the Oak Ridge site receive PILT payments: the City of Oak Ridge, Anderson County, and Roane County.

Property Taxes

Property taxes in the United States are levied by a number of different taxing authorities, including state and local governments, but mostly by local governments. Local governments, such as counties, can levy and collect taxes on behalf of smaller jurisdictions within their boundaries.
Broadly speaking, property taxes are based on the assessed value of the property times the tax rate.

Assessed value. The assessed value of the property is generally a function of the market value and the assessment ratio. The market value depends on the characteristics of the property and can vary across locations as a result of local conditions, including the supply and demand for the type of property. The assessment ratio is a percentage modifier applied in certain circumstances to alter the market value of the property. Some states and counties apply a lower assessment ratio to certain classifications of property, such as agricultural property.

Tax rate. The tax rate is a figure—typically in the form of a percentage—that is applied to the assessed value of the property to determine the total property tax amount. Tax rates vary across locations, depending on local and state tax laws and policies. In addition, for a given property tax bill, local governments may apply a wide variety of tax rates, with different rates applied for different government-supported functions, such as education, emergency services, and roads. Because rates can differ by property type, the classification of the property can also influence the tax rates applied.

PILT Payments Vary Considerably across Sites and Have Generally Increased, Particularly at Two Sites

PILT payments vary considerably across DOE sites, with the communities at the two sites with the most eligible land receiving the majority of payments. Total PILT payments made to communities at the 12 DOE sites that receive PILT payments have increased from approximately $9.5 million in 1994 to approximately $23 million in 2017 in fiscal year 2017 dollars. Payments to communities at the Hanford and Savannah River sites account for the majority of that growth.

Communities at Most of DOE's 74 Sites Do Not Receive PILT Payments

According to DOE, communities at the majority of DOE sites do not receive PILT payments because they are ineligible for PILT or have not applied to receive payments. Specifically, of the 74 DOE sites, communities at 44 sites are ineligible for PILT. Of the 30 sites where communities are eligible or potentially eligible, 18 have communities that have not applied for PILT or currently do not receive PILT, while communities at 12 sites currently or recently received PILT as of 2017, according to DOE documents. Of the over 2 million acres covered by DOE sites, approximately 70 percent—approximately 1.5 million acres—is ineligible for PILT, according to documents provided by DOE. According to DOE, communities at most of the 44 ineligible sites are not eligible under the provisions of the Atomic Energy Act because they are on property that either: was not on local tax rolls prior to acquisition, is private land, is land controlled by another federal agency, or is university-owned. Some examples of property that is ineligible include: the Waste Isolation Pilot Plant, New Mexico, which is situated on federal land and thus not subject to prior state or local taxation; Hazelwood Interim Storage Site, Missouri, which is on land DOE leases from a private owner; Sandia Lab, Kauai, Hawaii, which is on land controlled by another federal agency; and the Radiobiological Laboratory of Utah, Utah, which is on university-owned land. In addition, in some cases, sites include a mix of eligible and ineligible acreage.
Of the approximately 680,000 acres of property at the 30 sites that are eligible or potentially eligible for PILT, about 25 percent is located at the 18 sites where the communities did not receive PILT payments, according to fiscal year 2017 data provided by DOE. Examples of those sites with eligible property that have not received payments include the Weldon Spring Quarry in St. Charles County, Missouri, and the Atlas Complex in Clark County, Nevada. DOE headquarters officials that we spoke with stated that they are unsure why some communities with eligible property have not applied for PILT. Of the property that is eligible for PILT, approximately 75 percent is located at the 12 sites where the community has applied for and receives PILT payments. These sites began receiving payments at least as early as the 1950s and as late as 2012. Some sites are located in communities that previously, but no longer, receive PILT payments. For example, the community at the Mound Site, which is under the Office of Legacy Management, received its last payment in 2006. Figure 2 shows PILT eligibility and receipt by site and by acreage.

In fiscal year 2017, communities at 12 DOE sites received or had pending PILT payments. These sites are located in 10 states. The sites vary in size and the amount of land at the site that is eligible under DOE's PILT order. The two largest sites in terms of eligible acreage—Hanford and Savannah River—are the only sites that have more than 100,000 PILT-eligible acres, at nearly 180,000 and 200,000 respectively. Although the Idaho site includes about 570,000 acres, according to DOE officials, only 5 percent of those are eligible for PILT because they were previously on local tax rolls when DOE acquired the land, while the rest of the land was not on the tax rolls. Five sites—Brookhaven National Laboratory, Argonne National Laboratory, the Fernald Plant, Los Alamos National Laboratory, and Bettis Atomic Power Laboratory—have total PILT-eligible acreage of less than 2,000 acres, with the smallest, Bettis Atomic Power Laboratory, having around 200 PILT-eligible acres. Figure 3, below, shows the name, location, and PILT-associated acreage of DOE sites where local communities received PILT payments in 2017 or had pending PILT payments.

PILT Payments Varied Considerably, with Communities at Two Sites Receiving the Majority of Total Payments

Payments to communities at the 11 DOE sites that received PILT payments in fiscal year 2017 varied considerably, from less than $65,000 to more than $9 million, totaling over $23 million. Communities at the Hanford and Savannah River sites, representing over 75 percent of all PILT-eligible acreage, received approximately 70 percent of total PILT payments—approximately $9.7 million and $6.5 million, respectively. Of the communities at the remaining 9 sites, communities at 2 received more than $1 million, and communities at 2 received less than $100,000. Figure 4 shows payment amounts for the communities at the 11 sites that received payments in fiscal year 2017. See appendix III for detailed information on PILT payments from 1994 to 2017.

Growth in PILT Payments since 1994 Results from Increases in Payments to Communities at Two Sites and the Addition of New PILT Recipients

Growth in PILT payments since 1994 is primarily a result of increases in payments to communities at two sites—Hanford and Savannah River—in addition to new PILT recipient communities at DOE sites.
Since 1994, total annual PILT payments have grown from $8,582,446 to $23,170,049 in fiscal year 2017 constant dollars, as figure 5 shows. Since 1994, increases in payments to the communities at the Hanford and Savannah River sites are responsible for nearly 60 percent of the remaining total growth in PILT payments. PILT payments have increased from a total of over $19 million in 2012 to over $23 million by 2017 in real terms. Nearly all of that growth in total payments during that time is a result of higher payments to communities at the Hanford site, which community and DOE site officials attributed to increases in local land value resulting from the growth in agriculture in the region. PILT payments to the three communities at the Hanford site increased by 43 percent, or nearly $3 million, in that time frame. Communities at the Hanford site were not the only ones to experience a large payment growth rate. PILT payments to communities at two other sites, Pantex and Idaho National Laboratory, increased by approximately 90 percent and 55 percent respectively over the same time period; however, this growth amounted to only approximately $100,000 and $85,000 respectively for those communities and therefore did not account for much of the overall growth in PILT payments.

The majority of communities that currently receive PILT payments began receiving them in or after 1994. DOE's 1993 policy eliminated the gross benefits test and modified a provision that required payments to be reduced by the amount of tax benefits a community received from DOE's activities. These changes allowed additional sites to enter into PILT agreements with DOE and allowed other sites to obtain higher payment amounts. Since 1994, communities at seven additional sites were approved for and have begun receiving PILT payments. The addition of these new PILT recipient communities after the 1993 policy change, primarily Brookhaven National Laboratory, is responsible for approximately 15 percent of the growth of total annual payments.

PILT Payments Generally Vary Based on Local Differences, but DOE Is Not Providing Adequate Assurance That Payments Meet PILT Goals

Variations in PILT payments across sites are largely due to differences among the sites, including the different histories and market conditions at each site. However, the PILT order's lack of requirements about PILT documentation, review of PILT invoices, and payment determinations has limited DOE's ability to provide adequate assurance that payments fully reflect the terms of their original agreements and consistently meet PILT goals.

DOE's PILT Order Allows for Variations in PILT Payments

The goal of PILT, as stated in the Atomic Energy Act and reflected in DOE's order implementing the act, is to render financial assistance to communities, while generally not making payments in excess of the taxes that would have been payable for the property in the condition in which it was acquired. DOE officials stated that an additional PILT goal is to compensate communities for the revenues they would have received under those conditions. Although the order does not require payments to reflect the revenues communities would have received, it states that, on a case-by-case basis, PILT payments will be based on the same assessment values and tax rates that the communities apply to comparable properties with the same use and/or tax classification. Since these values and rates differ between sites, payments may also differ under the order.
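To make concrete how these inputs combine, the sketch below works through the standard property tax calculation that, as described later in this report, communities generally use when preparing their PILT invoices. It is a minimal illustration with entirely hypothetical figures, not a reconstruction of any actual community's invoice.

# Minimal sketch of the standard property tax calculation that underlies
# most PILT invoices. All figures are hypothetical and do not reflect
# any actual community's invoice.

def pilt_request(acres, market_value_per_acre, assessment_ratio, tax_rate):
    """assessed value = acres x per-acre market value x assessment ratio;
    requested payment = assessed value x tax rate."""
    assessed_value = acres * market_value_per_acre * assessment_ratio
    return assessed_value * tax_rate

# Hypothetical example: 10,000 acres of agricultural land valued at
# $1,000 per acre, assessed at 25 percent of market value and taxed
# at a combined rate of 2 percent.
print(f"${pilt_request(10_000, 1_000, 0.25, 0.02):,.0f}")  # $50,000

As the next section describes, variation in payments across sites comes almost entirely from variation in these four inputs, not from the formula itself.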
PILT Payments Generally Vary Based on Local Differences That Influence Property Taxes

Consistent with DOE's PILT order, PILT payments to communities vary given the characteristics of the property, market conditions, and tax policies applied at each site, in order to reflect the revenue the communities would have received had the property remained on their tax rolls. DOE generally bases PILT payments on the recipient communities' estimates of the property taxes they would have received. The communities calculate their estimated payments and then communicate their requested payment amounts in annual invoices to DOE. DOE does not prescribe the use of a particular formula by communities seeking payments. However, DOE officials noted that communities usually base the calculations they use to develop their annual PILT invoices on property taxes and that they generally calculate these using a relatively standard formula. Key information in this calculation includes the amount of land, its estimated value, assessment ratio, and the property tax rate (see figure 6).

Differences in PILT payments to different sites are generally not a function of variations in the payment formula, but rather of variations among the inputs into the formula, although DOE has sometimes altered payments in other ways. Based on our analysis of PILT payments in fiscal year 2017, we found that values of property, assessment ratios, and property tax rates vary across DOE sites and communities. The assessed value of the property is partially determined by characteristics, or history, of the property and market conditions. State and local tax policies may determine both the assessment ratio and the property tax rate.

Characteristics of the property. The amount of PILT eligible property and its classification are factors that partially determine payment amounts. DOE provides the highest payments to communities at sites with the greatest amount of eligible acreage—the Hanford, Savannah River, and Oak Ridge sites. Similarly, lower acreage at some sites usually results in lower payments. For example, Los Alamos National Laboratory and the Fernald Plant are among the smallest sites, and payments to these communities are among the smallest. In addition, the land use classification of the property, such as whether it was used for agricultural or commercial purposes when it was acquired, influences its value. Some classifications of land tend to have higher market values than others; for example, commercial land generally has a higher value than agricultural land. The land at the Bettis Atomic Power Laboratory site, located in western Pennsylvania, is classified as commercial property and was valued in 2017 for PILT purposes at an average of $64,476 per acre. As a result, although Bettis Atomic Power Laboratory has among the smallest acreage of any site—at approximately 200 acres—its payments are the fifth highest. In contrast, the land at the Pantex site, located in the Texas Panhandle, is classified as agricultural and homestead property and was valued in 2017 for PILT purposes at an average of $976 per acre.

Market conditions. The market value of property varies across PILT sites as a result of local market conditions. Greater demand for land contributes to higher per-acre values than when there is less demand for land. This contributes to variations among land values, even within a given classification, for the communities' annual PILT invoices to DOE.
For example, irrigable agricultural land at Benton County—one of the communities that hosts the Hanford site—was valued at about $6,500 per acre in 2017, which DOE and county officials attributed primarily to high demand for agricultural property in Washington State's Columbia Valley River Basin. In contrast, Carson County—which hosts the Pantex Plant and is in a region with lower farm real estate values and is not near a major city—valued its land at $976 per acre in 2017, as previously noted.

State and local tax policies. Some states and counties reduce assessment ratios for certain types of property, such as agricultural property; that is, the assessed value of the property is reduced to a fraction of its market value. Some communities have reflected these assessment ratios in their calculations for their annual PILT invoices to DOE. Because assessment ratios can vary widely across locations—from 6 percent to 100 percent among communities that received PILT payments in fiscal year 2017—they can create large variations in PILT payments. For example, the communities at the Oak Ridge site assess agricultural property at 25 percent of the full market value, which they reflect in their annual PILT invoices to DOE. On the other hand, the Town of Brookhaven, which hosts Brookhaven National Laboratory, applied a 90 percent assessment ratio to its PILT-eligible property, which is categorized as residential. In addition, tax rates vary across communities. For example, in fiscal year 2017, the City of Oak Ridge applied a 2.5 percent tax rate to determine its payments, whereas Carson County applied a 0.6 percent tax rate.

DOE's PILT order requires DOE to deduct from PILT payments an amount equal to any payments by the federal government that will be used by the community for the same, identifiable, discrete purpose. In practice, when communities calculate their annual PILT requests, they subtract this amount from their total payment requests. According to DOE and some community officials, communities have made these deductions to offset payments they received through the Department of Education's Impact Aid program.

DOE's PILT Order Does Not Fully Incorporate Needed Internal Controls

DOE's PILT order calls for communities to document key determinants of PILT payments in PILT applications, but it does not include requirements or procedures for DOE or communities to document key determinants of PILT payments after the initial PILT application. In addition, although the order lists evaluation criteria on which PILT payments should be based, it does not establish a process or requirements for DOE offices to review PILT invoices to ensure payments are consistent with those criteria. The order also does not require regular, independent—such as headquarters-level—involvement in such a review process. Lastly, the PILT order lacks specificity on how payments should be determined in certain scenarios. The PILT order's lack of sufficient internal controls may have contributed to some cases in which payments may not reflect PILT goals.

DOE's PILT Order Provides for Key Determinants in Applications, but Does Not Require DOE to Document Them in Agreements

DOE's PILT order lists application and evaluation criteria that it says will serve as the basis of PILT payments.
Those criteria include factors, which we refer to as "key determinants," such as: description of the property; tax rates and assessment values for comparable property; use and zoning classification of the property; and payments from the federal government that will be used for the same identifiable, discrete purpose. These key determinants are fundamental to determining how much revenue a community would have received if the property had remained on its tax rolls and to ensuring that the communities' PILT payments are not higher than that amount. The order calls for these key determinants to be documented in PILT applications. However, DOE's PILT order does not require communities or DOE to document such key determinants of PILT payments at any later stage. Specifically, the order does not require DOE or communities to include this information in PILT intergovernmental agreements, which are agreements between DOE and each community and serve as a basis for obligating funding under PILT. The order also does not require communities to include such information in their annual PILT invoices that they submit to request PILT payments. Under federal standards for internal control, management should design control activities, such as documentation, to achieve objectives and respond to risks (GAO, Standards for Internal Control in the Federal Government, GAO-14-704G (Washington, D.C.: September 2014)). Without documentation of the key determinants of PILT payments for each community, DOE does not have adequate assurance that its payments are consistent with the agreed-upon bases of PILT payments, and DOE is more likely to make payments that do not meet PILT goals.

DOE's PILT Order Establishes DOE Site Office Administration of Payments but Lacks Requirement for Independent Review of PILT Invoices

DOE's PILT order states that "DOE plans to evaluate applications for PILT, and to calculate" PILT payments using specific guidelines based on key determinants, such as the description of the property, tax rates and assessment values for comparable property, use and zoning classification of the property, and deductions equivalent to certain federal payments; however, it does not call for a review process to determine whether calculations used for PILT invoices follow those guidelines. DOE's PILT order calls for site, program office, and headquarters review of original and revised PILT applications. However, most original applications were developed decades ago, and revised PILT applications are only required if the community would like to reclassify property, change the amount of property, or make other significant changes. DOE's PILT order does not require independent, headquarters-level review at any later stage. The PILT order states that site offices will manage the administration of PILT payments. However, it does not specifically call for DOE organizations to review communities' annual PILT invoices to determine whether PILT invoices follow payment calculation guidelines and do not exceed the amount communities would have received had the property remained on the tax rolls. DOE headquarters officials said that headquarters officials do not review annual PILT invoices. Some DOE CFO officials and officials at some sites stated that DOE sites treat the annual payments as bills to be paid, without applying much scrutiny. To the extent that PILT invoices are reviewed, they are reviewed at the site level by officials who may live in the same communities that receive PILT payments.
DOE CFO officials stated that site offices are more knowledgeable of local tax authorities and local conditions than DOE headquarters and that they have expertise—in the form of local realty, legal, budget, and supervisory staff—that DOE headquarters staff rely on for the execution of PILT payments. Nevertheless, there may be an appearance of bias if the only review of PILT invoices is conducted at the site level by individuals who may benefit indirectly from payments to their communities. Because DOE's PILT order lacks a requirement for review and validation of annual PILT invoices, DOE is not well positioned to determine whether communities' payment requests in PILT invoices are consistent with DOE goals. Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks, such as by comparing actual performance to planned or expected performance and analyzing significant differences. By requiring site office and headquarters review of key payment determinants in PILT invoices, DOE may realize benefits, including the ability to (1) evaluate whether PILT invoices are consistent with agreed-upon bases of PILT payments and PILT goals, and (2) ensure greater independence in the review process to avoid the appearance of bias on the part of site officials, who may live in the communities receiving PILT payments and may indirectly benefit from the payments. Without requirements for DOE site offices to review key PILT payment determinants in communities' invoices for accuracy and consistency with the agreed-upon bases of PILT payments and PILT goals and for headquarters-level review and validation of annual PILT invoices, DOE is more likely to have payments that do not meet PILT goals.

DOE's PILT Order Lacks Specificity about Some Aspects of Payment Determinations

DOE's PILT order lacks specificity about how it will determine PILT payment amounts in some scenarios. The PILT order includes information about some key determinants of PILT payments, such as tax rates, assessment values, and property classification, but the order does not provide guidance on other factors that may affect PILT payments, such as tax relief programs. In addition, the order states that the property value will exclude the value of improvements made after the federal government acquired the real property, but it does not state whether property values should include the value of resources such as timber. Lastly, the order states that payments will be reduced by an amount equal to any payments to the state or local jurisdiction for the same identifiable, discrete purpose. However, the order does not define the phrase "same identifiable, discrete purpose." Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks, such as by documenting internal control in management directives, administrative policies, or operating manuals. While DOE has documented some key determinants of PILT payments in its order, it does not clearly document how DOE should address tax relief programs in payment determinations. Without additional guidance in the PILT order on how communities should calculate payment requests for their PILT invoices, DOE is more likely to make payments that do not meet PILT goals, as is described in the following section.

DOE Has Limited Assurance That Payments Meet Goals

DOE does not have adequate assurance that payments are meeting PILT goals.
This limited assurance that payments meet PILT goals may be in part a result of deficiencies in DOE's internal controls for PILT. Based on our reviews of PILT documentation and interviews with DOE officials, we identified cases in which payments did not appear to meet the stated PILT goal of compensating communities for the revenue they would have received if the property had remained on the tax rolls. Specifically, we identified five examples of payments potentially not meeting goals as a result of issues with: property classification, determination of land value, application of tax relief programs, payment deductions, and payment adjustments.

Property classification. We identified a case in which payments appear to be higher than the amount communities would have received had the property remained on the tax rolls in the condition in which it was acquired. In the case of Benton County, the property classification that forms the basis of its requested PILT payments does not appear to be based on the classification of the property when it was acquired. Benton County's original PILT agreement from 1996 shows that, when acquired, Hanford property in the county was classified as 11 percent farmland and 88 percent rangeland. However, the agreement also states that, considering uses of the land at the time of the agreement, 72 percent of the land would be treated for the purpose of PILT as farmland in the category of "irrigable land" and only 27 percent as rangeland. In 2017, irrigable land in Benton County was valued at $6,495 per acre, whereas rangeland was valued at $410 per acre—higher percentages of irrigable land compared to rangeland therefore result in higher payments. Using these land classifications is inconsistent with the PILT goal that payments will not exceed the taxes that would have been payable for the property in the condition in which it was acquired. DOE headquarters officials we spoke with were not aware of this discrepancy in Benton County's property classification. In addition, DOE did not have documentation to explain DOE's decision, but an Office of the General Counsel official noted that DOE agreed to these terms as part of a settlement agreement at a time when a number of issues, beyond just PILT issues, were in dispute between Benton County and DOE. Because of this inconsistency in land classifications, it appears that Benton County's payments may not have reflected the revenues the county would have received had the property remained on the tax rolls in the condition in which it was acquired. Had DOE maintained more thorough documentation and had there been independent review of PILT invoices, these higher payments might have been avoided.

Determination of land value. We identified one case in which payments were not clearly linked to the revenue communities would have received if the property had remained on the tax rolls. Specifically, DOE negotiated with Savannah River Site counties to apply a dollar amount per acre that is not directly tied to assessed property values. DOE and the counties originally negotiated values in 1988 of $1,000 per acre for Aiken and Barnwell counties and $426 for Allendale County. Those amounts remained flat until 2007, when DOE agreed to adjust them with a "time value of money" factor to $1,641 and $712 respectively.
According to county officials, the counties and DOE agreed to use a negotiated rate rather than a rate based on current assessment values partly because of the difficulty of conducting appraisals, given the large amount of land, the lack of comparable properties, and the high expense of an appraisal. Because of this reliance on a negotiated, rather than assessed, value, it is unclear whether these payments reflect the revenues the counties would have received had the property remained on the tax rolls in the condition in which it was acquired. Had DOE required independent review of key determinants of PILT payments, this deviation from using assessed values might have been avoided.

Application of tax relief programs. We identified a third case in which payments may have been higher than the revenue communities would have received if the property had remained on the tax rolls. With regard to the Hanford Site, the Open Space Taxation Act of Washington State is a tax relief program that community officials said allows assessment ratios of about 40 percent to be applied for land that is being used for agriculture or as rangeland. In the past, none of the three counties that receive PILT at the Hanford site applied special assessment ratios under this tax relief program in calculating PILT payments. Hanford site officials informed us that they were aware of this tax law and requested that the three counties at the Hanford site apply it. The DOE officials explained that the counties refused because DOE was not using those lands for agriculture or rangeland. The officials stated that the counties at Hanford decided that DOE did not meet the purpose and the terms of the program. However, if the land had remained on the tax rolls in the condition in which it was acquired, it could also be assumed that it might have been farmed or used as rangeland, in which case the counties may have applied the special assessment ratios. Although DOE's order does not state whether PILT payments should take into account such tax relief programs, failure to take such programs into account may have resulted in DOE paying the counties at Hanford more than they would have received had the property remained on the tax rolls in the condition in which it was acquired, contrary to the order. If DOE's PILT order had included more specificity about how tax relief programs should be addressed, DOE might have had greater assurance that these payments were not higher than the revenue the communities would have received had the property remained on the tax rolls in the condition in which it was acquired.

Payment deductions. We identified a case in which it was unclear whether payments aligned with PILT goals. DOE has provided non-PILT funding to Los Alamos public schools and the Los Alamos fire department. According to DOE officials, DOE has annually provided $8 million to the county's schools; DOE provided over $20 million for the county's fiscal year 2020 firefighting services. DOE also provides PILT funding to Los Alamos County, which was $244,183 in fiscal year 2017. About a decade ago, DOE considered whether it should stop making PILT payments to Los Alamos County because of its other support for the community and the provision in the PILT order requiring deductions from PILT for other payments by the federal government that will be used for the same identifiable, discrete purpose. However, DOE has decided to continue paying Los Alamos County PILT.
The county’s position is that the schools are a separate entity from the county government and that its payments should not be reduced to account for amounts received directly by the schools, but in 2017 the county nonetheless reduced its PILT request by the amount it would have provided to Los Alamos schools. It is unclear how the PILT order should be applied in situations like this where payments, including PILT payments, are made to multiple entities. Making continued payments in such a situation, however, may exacerbate perceptions of inequities across sites. If DOE’s PILT order had included more specificity about the reduction of payments to account for other federal payments for the same identifiable, discrete purpose, DOE might have had greater assurance that these payments meet PILT goals. Payment adjustments. We identified a case in which the PILT order’s lack of specificity led to uncertainty for PILT payment recipients when DOE’s payments did not align with the communities’ calculations of what the communities determined they would have received if the property had remained on the tax rolls. When the PILT invoices from the three counties at the Hanford Site increased by about 73 percent in real terms from a total of about $6 million in 2010 to about $10.7 million in 2017, DOE began providing payments that were lower than what the counties requested in their PILT invoices. Specifically, in 2017, DOE provided 91 percent of what the counties requested, and in 2018 DOE provided 65 percent of what they requested, which DOE officials said was because payment requests exceeded the amounts set aside for PILT purposes. DOE did not cite problems in the counties’ PILT invoices or document problems with the counties’ PILT invoices. Payment adjustments are allowable under the PILT order—both the Atomic Energy Act and DOE’s PILT order give DOE discretion as to payment amounts. However, because the order also lists key determinants for PILT payments that are based on the taxes communities would have received had they remained on the tax rolls and because DOE has typically provided what communities have requested, communities we spoke with said they began to rely on PILT in their budget formulations. The communities had developed their budgets based on the assumption that payments would align with the amounts they determined they would have received had their property remained on the tax rolls, but it is now difficult for them to plan ahead with the new uncertainty. In response to this uncertainty in the payment amount, in 2019, one of the counties at Hanford—Benton County—provided DOE with a PILT invoice that was about $5 million lower than the previous year. According to the county officials we spoke with, the goal of providing a lower PILT payment invoice was to increase the likelihood that they would receive the full amount. DOE’s order does not include any information about under what conditions DOE will adjust payments— such as if payments calculations are not consistent with PILT payment determinants—to guide DOE’s oversight. The order also does not require DOE to document or communicate such information ahead of time. Had DOE’s PILT order included more specificity on these topics, communities might have had more clarity regarding whether their payment calculations were consistent with PILT goals and whether they were likely to receive the amounts they requested. 
Conclusions

PILT payments help replace tax revenue that communities are no longer receiving because of DOE's acquisition of property in their communities. Our past work reported that DOE allowed different standards for PILT invoices at different sites, depending on when the community applied for PILT payments, raising concerns about inequitable treatment of communities. In 1993, DOE updated its PILT order to address one of these concerns by eliminating the gross benefits test that had been applied to new communities. However, some concerns remained. DOE intentionally allows payments to communities to vary across locations because property characteristics, market conditions, and tax policies differ; this variance enables payments to reflect the taxes the communities would have received if the property had remained on local tax rolls. However, DOE's PILT order lacks: (1) requirements for documenting key determinants of PILT payments in intergovernmental agreements and invoices, (2) requirements for independent review of PILT invoices for consistency with agreed-upon bases of payments, and (3) specificity about payment determinations in certain scenarios. This has resulted in a relatively hands-off approach to management and oversight of communities' annual PILT invoices as well as some uncertainty about how to determine PILT payments. This is inconsistent with federal internal-control standards and has limited DOE's ability to provide adequate assurance that DOE is meeting PILT goals. Until DOE strengthens its internal-control activities, communities may continue to perceive that there are inequities in PILT, and DOE will not be able to provide adequate assurance that it is meeting PILT goals.

Recommendations for Executive Action

We are making the following three recommendations to DOE:

The Secretary of Energy should direct DOE's Office of the Chief Financial Officer to revise DOE's PILT order to require DOE to maintain documentation of key determinants of PILT payments for each community to help ensure that payments are consistent with the agreed-upon bases of PILT payments and PILT goals. (Recommendation 1)

The Secretary of Energy should direct DOE's Office of the Chief Financial Officer to revise DOE's PILT order to require DOE site offices to review key determinants of PILT payments in communities' PILT invoices for accuracy and consistency with the agreed-upon bases of PILT payments and PILT goals and for DOE headquarters to document its review and validation of site office determinations. (Recommendation 2)

The Secretary of Energy should direct DOE's Office of the Chief Financial Officer to revise DOE's PILT order to provide additional guidance on how communities should calculate their payment requests for their PILT invoices. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this product to DOE for review and comment. In its comments, reproduced in appendix IV, DOE neither agreed nor disagreed with our recommendations but did describe actions that it intends to take in response to our recommendations. DOE stated that it will undertake a comprehensive assessment of the PILT program, its objectives, and the manner in which DOE accomplishes PILT's objectives. DOE also stated that it will convene a working group to identify high-level options for PILT and recommend appropriate changes, if necessary, to DOE leadership.
Although further analysis of PILT could be worthwhile, we believe our review sufficiently demonstrated that DOE's PILT order lacks sufficient internal controls. As a result, we continue to believe that implementing our recommendations for revising the PILT order could provide better assurance that payments meet PILT goals. DOE also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have questions about this report, please contact David C. Trimble at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

The objectives of our review were to assess: (1) how, if at all, PILT payments vary across sites and how they have varied over time, and (2) reasons for variations in payments and the extent to which the Department of Energy (DOE) is providing assurance that payments meet PILT goals.

To assess how, if at all, PILT payments vary across sites and how they have changed over time, we obtained and analyzed documentation from DOE regarding the total number of DOE sites, their eligibility for PILT, and reasons for lack of eligibility, when applicable. We analyzed DOE documentation of eligible acreage at sites that are affiliated with communities that receive PILT payments and compared this with acreage of DOE property that is not eligible for PILT. [Portion of a list of PILT-recipient communities by site, including Bingham, Butte, Clark, and Jefferson counties at the Idaho National Laboratory site and, under the Office of Legacy Management, Hamilton County at the Fernald Plant site.]

We took several steps to assess the reliability of PILT payment data. We collected data in two phases. The first used PILT datasets that DOE had collected prior to our review, covering the years 1989–2009 and 2012–2017. We used those data to develop a preliminary understanding of how PILT payments varied across sites and over time. We then asked DOE to collect a second, complete data set for the purpose of our review; that data set covered the years 1994–2017. Using these data, we identified possible outliers and missing data and interviewed relevant agency officials at the headquarters, field office, and site office level to determine the extent to which the data were reliable. In addition, we interviewed relevant agency officials at the headquarters, field office, and site offices regarding their internal data reliability and data control measures. We also sent written questions regarding annual PILT invoices, PILT payments, federal offsets, and other related topics, which all 12 site offices answered. We also requested DOE payment information that would allow spot checking of the data that DOE provided. We requested that each of the 12 sites provide documentation of their payments for one in every 5 years between 1994 and 2017. We compared this documentation with data DOE submitted for those years to spot check the data for accuracy. We reviewed past GAO reports on PILT and past GAO and DOE reports on DOE financial management systems. We determined the data to be sufficiently reliable for our purposes.
For both objectives, we conducted interviews with or obtained written responses from the following DOE offices, which included representatives of all of the sites that received recent PILT payments:

DOE headquarters: Office of the Chief Financial Officer and General Counsel.

DOE program offices that manage sites hosted by PILT-recipient communities: National Nuclear Security Administration, Office of Environmental Management, Office of Legacy Management, Office of Nuclear Energy, and Office of Science.

DOE site offices hosted by PILT-recipient communities: Argonne National Laboratory, Bettis Atomic Power Laboratory, Brookhaven National Laboratory, Fernald Plant, Knolls Atomic Power Laboratory, Hanford site, Idaho National Laboratory, Los Alamos National Laboratory, Oak Ridge site, Pantex Plant, Portsmouth site, and Savannah River site.

To assess reasons for variations in payments, we identified how DOE communities calculate their requested PILT payment amounts and how DOE officials determine how much DOE will pay. We reviewed DOE's PILT order, DOE Order 143.1, to determine how DOE specifies payments are to be calculated. We also interviewed DOE site office officials about how they expect communities to determine their requested payment amounts. We compared DOE expectations regarding annual payment request calculations with PILT invoices that communities submit to request payments. Because communities appeared to generally calculate payments to align with the expected property tax revenue they would have received had the DOE-acquired property remained on the tax rolls in the condition in which it was acquired, we compared this information with information on how local and state governments determine property taxes. When we needed further clarification about how communities had determined their requested payment amounts, we sent follow-up questions to DOE site officials regarding the PILT invoices they had reviewed. Once we identified how communities calculate PILT invoices, we analyzed communities' fiscal year 2017 payment request documentation to determine how factors—such as characteristics of the property, market conditions, and state and local tax policies—influence payment amounts. We interviewed DOE site officials, as well as some community officials at the communities that received some of the largest payments, about instances when payments varied from what communities requested. We analyzed PILT invoices, agreements, and payment data to identify how communities and sites had determined and documented key determinants and decisions, such as property classification, deductions because of other federal payments, land values, and assessment rates. We analyzed DOE's PILT order to identify PILT goals and requirements related to: PILT payment determinations, DOE review of communities' PILT invoices, and PILT documentation. We compared this with federal standards for internal control. We interviewed officials from selected communities that received some of the largest payments to determine how they used PILT payments, how they assess land value, and challenges they have faced with PILT. These communities included all communities at the two sites with the largest aggregate PILT payments in fiscal year 2017: Benton, Franklin, and Grant counties at the Hanford site and Aiken, Allendale, and Barnwell counties at the Savannah River site.
Regarding these same topics, we also interviewed staff at community organizations that represent communities that host DOE sites, including the Energy Communities Alliance and the National Association of Counties. Findings from the communities at these two sites and the two community organizations cannot be generalized to those we did not interview as part of our review.

We conducted this performance audit from October 2018 to October 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Department of Energy's (DOE) Sites That Provide Payments in Lieu of Taxes (PILT) to Communities

Argonne National Laboratory

The Argonne National Laboratory covers 1,363 acres in DuPage County outside of Chicago, Illinois. Established in 1946 to conduct "cooperative research in nucleonics" as part of the Atomic Energy Commission's development of nuclear reactors, Argonne National Laboratory now has over 3,200 employees in addition to nearly 800 scientists who visit the site yearly. Additionally, Argonne has over 7,900 facility users who participate in research at five major user facilities located on site.

Bettis Atomic Power Laboratory

The Bettis Atomic Power Laboratory, covering approximately 200 acres in West Mifflin outside of Pittsburgh, Pennsylvania, is a part of the Naval Nuclear Propulsion Program in the Department of Energy. The Laboratory began operations in 1948 in order to support the engineering, design, and construction of the prototypes of the first nuclear powered submarine, and by 1955 the USS Nautilus was successfully launched. Since then, the Laboratory has led development of other nuclear powered craft, including the first nuclear powered surface ship and aircraft carrier, the USS Long Beach and USS Enterprise, respectively. Today, the Laboratory focuses on design and engineering support for nuclear-powered submarines and aircraft carriers, in addition to development of the nuclear power elements of next generation aircraft carriers.

Brookhaven National Laboratory

The Brookhaven National Laboratory was established in 1947 by the Atomic Energy Commission. Formerly Camp Upton, a U.S. Army installation site, Brookhaven is located on a 5,263-acre site on Long Island in Upton, New York, approximately 60 miles east of New York City. Historically, Brookhaven was involved in the construction of accelerators and research reactors such as the Cosmotron, the High Flux Beam Reactor, and the Brookhaven Graphite Research Reactor. These research facilities led the way in high-energy physics experiments and subsequent discoveries but also resulted in the creation of hazardous wastes. As a result, Brookhaven was listed as a Superfund site in 1989, and a subsequent agreement with state and federal regulators led to the building and operation of groundwater remediation facilities and the decontamination and decommissioning of the High Flux Beam Reactor and the Brookhaven Graphite Research Reactor, including offsite waste disposal.

Fernald Plant

The Fernald Plant covers 839 acres in southwestern Ohio near Cincinnati, Ohio.
The Fernald Plant's production mission took place from 1951 through 1989, when it housed the Feed Materials Production Center, which processed uranium as the first step in the nuclear weapons production cycle. In 2006, the remediation and restoration of the site was completed; at the time, it was one of the largest environmental cleanup operations ever undertaken in the United States. Currently, the remaining remediation activities, carried out under the Office of Legacy Management, are monitoring of the site and groundwater extraction and treatment. The site includes restored native plants and grasses and the largest manmade wetlands in Ohio.

Hanford Site

DOE is responsible for one of the world's largest environmental cleanup projects: the treatment and disposal of millions of gallons of radioactive and hazardous waste at its 586-square-mile Hanford Site in southeastern Washington State. Hanford facilities produced more than 20 million pieces of uranium metal fuel for nine nuclear reactors along the Columbia River. Five plants in the center of the Hanford Site processed 110,000 tons of fuel from the reactors, discharging an estimated 450 billion gallons of liquids to soil disposal sites and 53 million gallons of radioactive waste to 177 large underground tanks. Plutonium production ended in the late 1980s. Hanford cleanup began in 1989 and now involves (1) groundwater monitoring and treatment, (2) deactivation and decommissioning of contaminated facilities, and (3) the construction of the waste treatment and immobilization plant, which is intended, when complete, to treat the waste in the underground tanks.

Idaho Site

DOE's Idaho Site is an 890-square-mile federal reserve, only some of which is eligible for PILT, situated in the Arco Desert over the Snake River Plain Aquifer in central Idaho. The site is home to both the Idaho National Laboratory (INL) and the Idaho Cleanup Project. Work at the INL focuses on research and development of nuclear energy technologies, critical infrastructure protection research, and support of national defense and homeland security. The environmental cleanup mission includes remediation of contaminated legacy wastes generated from World War II-era conventional weapons testing, government-owned research and defense reactors, spent nuclear fuel reprocessing, laboratory research, and defense missions at other DOE sites.

Knolls Atomic Power Laboratory

The Knolls Atomic Power Laboratory, located on 173 acres in Niskayuna, New York, near Schenectady, was established in May 1946. The original mission of the Knolls laboratory was to provide technical support for the chemical separation of plutonium and uranium from irradiated fuel. In the 1950s, Knolls changed focus to Navy submarine propulsion development. Knolls developed a series of nuclear reactor and propulsion plant designs for the U.S. Navy. Knolls is the lead design laboratory for the newest Virginia-class fast attack submarines and is leading the design effort on the next-generation ballistic missile submarine.

Los Alamos National Laboratory

The laboratory, founded in 1943 during World War II, served as a secret facility for research and development of the first nuclear weapon. The site was chosen because the area provided controlled access, steep canyons for testing high explosives, and existing infrastructure. The Manhattan Project's research and development efforts, which were previously spread throughout the nation, became centralized at Los Alamos and left a legacy of contamination.
Today, the Los Alamos National Laboratory Cleanup Project is responsible for the treatment, storage, and disposition of a variety of radioactive and hazardous waste streams; removal and disposition of buried waste; protection of the regional aquifer; and removal or deactivation of unneeded facilities.

Oak Ridge Site

DOE's Oak Ridge Reservation is located on approximately 33,500 acres in East Tennessee. The reservation was established in the early 1940s by the Manhattan Engineer District of the United States Army Corps of Engineers and played a role in the production of enriched uranium during the Manhattan Project and the Cold War. DOE is now working to address excess and contaminated facilities, remove soil and groundwater contamination, and enable modernization that allows the National Nuclear Security Administration to continue its national security and nuclear nonproliferation responsibilities and the Oak Ridge National Laboratory to continue its mission of advancing technology and science.

Pantex Plant

The Pantex Plant covers 2,000 acres and is located northeast of Amarillo, Texas. The plant is one of six production facilities in the National Nuclear Security Administration's Nuclear Security Enterprise and, since 1975, has operated as the nation's primary facility for the assembly, dismantlement, and maintenance of nuclear weapons. The last new nuclear weapon was completed in 1991, and since then, the Pantex Plant has dismantled, retired, or stored thousands of nuclear weapons.

Portsmouth Site

The Portsmouth Gaseous Diffusion Plant is located in Pike County, in south-central Ohio, approximately 20 miles north of the city of Portsmouth. This facility was initially constructed to produce enriched uranium to support the nation's nuclear weapons program and, later, commercial nuclear reactors. Decades of uranium enrichment and support activities required the use of a number of typical and special industrial chemicals and materials. Plant operations generated hazardous, radioactive, mixed (both hazardous and radioactive), and nonchemical (sanitary) wastes. Past operations also resulted in soil, groundwater, and surface water contamination at several sites located within plant boundaries.

Savannah River Site

The Savannah River Site complex covers 198,344 acres, or 310 square miles, encompassing parts of Aiken, Barnwell, and Allendale counties in South Carolina, bordering the Savannah River. The site is a key DOE industrial complex responsible for environmental stewardship, environmental cleanup, waste management, and disposition of nuclear materials. During the early 1950s, the site began to produce materials used in nuclear weapons, primarily tritium and plutonium-239. Five reactors were built to produce nuclear materials and resulted in unusable by-products, such as radioactive waste. About 35 million gallons of radioactive liquid waste are stored in 43 underground tanks. The Defense Waste Processing Facility is processing the high-activity waste, encapsulating radioactive elements in borosilicate glass, a stable storage form. Since the facility began operations in March 1996, it has produced more than 4,000 canisters (more than 16 million pounds) of radioactive glass.
Appendix III: Payments in Lieu of Taxes (PILT) by the Department of Energy (DOE) since 1994

Appendix IV: Comments from the Department of Energy

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Amanda Kolling, Assistant Director; Antoinette Capaccio; Ellen Fried; Laura Holliday; Skip McClinton; and Sara Sullivan made key contributions to this report. Also contributing to this report were Jeff Arkin, Cindy Gilbert, Michael Kendix, Richard Johnson, and Oliver Richard.
Why GAO Did This Study

The Atomic Energy Act, as amended, authorizes DOE to make PILT payments to communities that host DOE sites that meet specific criteria. PILT is discretionary financial assistance that provides payments to communities based on the property taxes they would have received had the property remained on their tax rolls. House Report 115-230, accompanying a bill for the Energy and Water Development and Related Agencies Appropriations Act of 2018, included a provision for GAO to review DOE PILT. This report assesses (1) how PILT payments vary, if at all, by site and over time, and (2) reasons for variations in payments and the extent to which DOE is providing assurance that payments meet PILT goals. GAO analyzed data on DOE payments to communities that DOE reported as having received PILT payments between 2008 and 2017. GAO compared 2017 data across sites and identified changes in payments to those communities between 1994 and 2017. GAO reviewed PILT's authorizing statute, DOE's PILT order, and PILT documentation. GAO interviewed officials from DOE, communities, and community organizations.

What GAO Found

The Department of Energy's (DOE) payments in lieu of taxes (PILT)—payments made to some local communities that host DOE sites—vary considerably across the sites and have generally increased over time. Communities at 11 DOE sites received PILT payments in fiscal year 2017 (the most recent fiscal year for which complete data were available), totaling approximately $23 million (see figure). Payments to communities at the Hanford and Savannah River sites accounted for approximately 70 percent of that total, while payments to six sites combined accounted for less than 5 percent. Total PILT payments have more than doubled since 1994, primarily because of growth in payments to communities at the Hanford and Savannah River sites and because communities at other sites began receiving payments after 1994. DOE intentionally allows payments to vary across sites so that they may reflect the revenues communities would have received had the property remained on the tax rolls in the condition in which it was acquired, which DOE officials stated is a goal of PILT. However, the lack of requirements in DOE's PILT order has limited the department's ability to provide adequate assurance that payments consistently meet this and other PILT goals. The PILT order does not require documentation of the key determinants that went into the calculation of payments, or an independent review process to determine whether payment calculations are consistent with PILT goals. The PILT order also lacks specificity about payment determinations in certain scenarios. Without updates to the PILT order to strengthen DOE's internal controls, DOE will continue to lack adequate assurance that payments meet PILT goals.

What GAO Recommends

GAO is making three recommendations that DOE update its PILT order to improve the collection and documentation of key determinants of PILT payments, implement a review process, and clarify how communities should calculate payment requests. DOE neither agreed nor disagreed and plans instead to further study PILT. GAO believes its report supports implementation of these recommendations.
Background

Fourth Generation TRICARE Contracts (T-2017)

For the T-2017 contracts, DHA consolidated its TRICARE regions from three regions (North, South, and West) to two regions (East and West). Humana Government Business is the managed care support contractor for the East Region, and Health Net Federal Services is the managed care support contractor for the West Region. Health care delivery under the T-2017 contracts began on January 1, 2018. DHA expects the costs of the two contracts to total approximately $58 billion over a 5-year performance period, which is scheduled to end on December 31, 2022. The primary responsibilities of the managed care support contractors include the following:

- developing civilian provider networks, which include hospitals and other providers;
- processing referrals and authorizations for beneficiaries to receive care;
- processing health care claims;
- providing comprehensive, readily accessible customer services for beneficiaries and providers; and
- establishing and maintaining a medical management program that includes requirements in the TRICARE Operations Manual.

In addition, DHA officials told us that they have begun their planning activities for the fifth generation of TRICARE contracts, referred to as the T-5 contracts. If DHA exercises all option years for the T-2017 contracts, health care delivery under the T-5 contracts is expected to begin in 2023.

Overview of the Acquisition Process for the T-2017 Contracts

DHA's acquisition process for the T-2017 contracts consisted of four steps: (1) planning the acquisition, (2) issuing the Request for Proposals (RFP) and soliciting responses, (3) awarding the contracts, and (4) post-award activities (see figure 1).

1. Acquisition planning. DHA defined the contract requirements—the work to be performed by the contractor—and developed an acquisition plan to meet those requirements. The T-2017 program manager and contracting officer developed key acquisition documents—including the T-2017 Acquisition Strategy and the Acquisition Plan—and conducted market research. The T-2017 Acquisition Strategy provides a high-level description of the milestones in the acquisition process and how those milestones will be achieved. The T-2017 Acquisition Plan outlines the specific actions necessary to execute the approach outlined in the approved acquisition strategy.

2. Request for proposals. DHA issued an RFP that documented the requirements for T-2017—including the contract type, significant contract dates, pricing arrangements, and the criteria to be used to assess offerors' proposals.

3. Award. DHA established a source selection team to evaluate the proposals received in response to the RFP. The source selection authority selects the winning proposals using a best value tradeoff process after considering reports written by other members of the source selection team.

4. Post-award activities. DHA provides a 12-month transition period between its outgoing and incoming contractors to ensure that its incoming contractors are prepared for their new responsibilities. The transition period for the T-2017 contracts began on January 1, 2017, and ended on December 31, 2017. The incoming contractors assumed full responsibility for health care delivery on January 1, 2018.

NDAA 2017 Section 705 Requirements

The NDAA 2017 required a number of changes to the TRICARE program through its contracts.
Specifically, section 705(a) of the NDAA 2017 required DOD to develop and implement value-based incentive programs in its contracts to help improve the quality of health care services provided to eligible TRICARE beneficiaries by rewarding civilian providers with additional payments for improved performance based on certain metrics. In addition, section 705(c) of the NDAA 2017 directed the department to develop and implement a strategy—by January 1, 2018—for its TRICARE contracts that includes 13 specific elements, such as telehealth services and beneficiary referrals, among others (see table 1). The act required DOD to modify its TRICARE contracts to ensure consistency with the required strategy providing for the 13 elements.

DHA Made Selective Changes between the T-3 and T-2017 Contracts; Some Changes Are Consistent with Provisions and Themes in Prior NDAA Legislation

DHA made selective changes between the T-3 and T-2017 contracts and acquisition strategy. According to DHA officials, the contracts are generally the same, and changes were made to clarify or streamline TRICARE requirements and administrative processes. The T-2017 Acquisition Strategy states that the T-2017 performance work statement, which identifies the TRICARE requirements to be implemented by the contractors, is essentially unchanged from the T-3 contracts. DHA officials explained that their leadership prioritized the continuation of beneficiary services during the T-2017 planning process over making significant changes to contract requirements that could potentially be disruptive. We found that some of the changes that were made to the T-2017 contracts are consistent with specific provisions and themes we identified in prior NDAA legislation.

Some Changes to the T-2017 Contracts Are Consistent with Specific NDAA 2017 Provisions

Although the NDAA 2017 was enacted after the T-2017 contracts had been awarded, some of the contract changes for T-2017 may be consistent with specific provisions outlined in section 705(c)(1), such as provisions related to improving access to care, health outcomes, health care quality, and beneficiaries' experience, as well as lowering health care costs. However, DHA officials stated that because health care delivery under the T-2017 contracts began in 2018, it is too early to measure any benefits from these changes. These contract changes include (1) the consolidation of contract regions, (2) the combining of administrative costs, and (3) the introduction of new contract incentives.

1. Consolidation of contract regions. While DHA awarded the T-3 contracts for three regions (West, South, and North), it consolidated two of the regions (North and South) for the T-2017 contracts (see figure 2). By eliminating the additional regional contract, DHA anticipates savings of approximately $25 million a year in overhead and management costs. In addition, beneficiaries are less likely to have a disruption in care when moving. For example, beneficiaries who moved between the former North and South regions would now stay enrolled with the same contractor in the larger East region.

2. Combined administrative costs. For T-2017, DHA combined all administrative costs in one contract line item in order to lower the total cost of care. For example, under the T-3 contracts, DHA reimbursed the contractors for processing individual claims with a higher rate for paper claims and a lower rate for electronic claims.
Without a difference in costs for T-2017, contractors are incentivized to lower their costs and prioritize electronic claims, which DHA officials say are more efficient.

3. Contract incentives. DHA incorporated incentives into the T-2017 contracts to encourage contractors to negotiate reimbursement rate discounts with network providers in order to reduce health care costs. The T-2017 contracts state that the contractor must meet a required discount rate on care provided by network providers. If this discount rate is not met, DOD will offset the discount deficit amount from the next payment due to the contractor. DHA expects that this negative incentive will reduce health care costs and result in government savings.

Several Contract Changes Are Consistent with Acquisition Themes in Prior NDAA Legislation

We also found examples of changes to the contract or acquisition process for T-2017 that are consistent with selected acquisition themes we identified in prior NDAA legislation. These acquisition themes are (1) leveraging commercial best practices, (2) promoting competition, and (3) focusing on value. We previously reported that the identified acquisition themes can reduce costs and increase value for the government.

1. Leveraging commercial best practices: T-2017 required contractors to increase utilization of commercial best practices, including the use of automation technology to process referrals and authorizations, episodes of care, and procedure diagnosis coding. As we have previously reported, federal agencies can leverage commercial best practices to lower costs and maximize the value of the services they buy. According to DHA officials, adapting automation technology already in use in the health care industry should improve the quality of services and beneficiary satisfaction, and result in cost savings to the government. In addition, officials from one of the current TRICARE contractors stated that the T-2017 RFP was structured to incentivize contractors to innovate and bring best practices from their industry experience in both the commercial sector and other government programs, such as Medicare. For example, the T-2017 contract included a new requirement for contractors to use industry best practices when collecting health care data, in order to identify and reduce gaps in care and enhance quality of care for beneficiaries.

2. Promoting competition: We found that DHA made an effort to promote competition for the T-2017 RFP. Competitive contracts can result in cost savings for the federal government and promote accountability for results. In the acquisition planning phase, DHA identified an increased number of interested contractors through market research, from eight for T-3 to 22 for T-2017. In addition, DHA officials stated that they took steps during the acquisition planning process to ensure that the incumbent contractors did not have a significant advantage over prospective contractors. For example, the T-2017 contracting officer was assigned early in the planning process and did not participate in management of the T-3 contracts or in interactions with the incumbent contractors. DHA officials stated that they expected greater contractor interest in the East region because the larger beneficiary population of that region would result in a more valuable contract. However, DHA received a total of seven proposals each for T-3 and T-2017, including one new company that participated in T-2017 but had not previously submitted a proposal for T-3.
3. Focusing on value: We found that DHA's approach for T-2017 emphasizes value and quality, not just lower costs. Specifically, DHA focused on the value of improving health care by considering the total cost of care over time, rather than the cost of individual health care services. This is reflected in the T-2017 Acquisition Strategy, which prioritizes quality and delivery of health care above lowest cost. For example, the T-2017 contracts required additional preventive screenings and expanded the diseases covered under a chronic care program to achieve improved quality of care despite the cost of screenings. We have previously reported that these preventive health services are determined to be cost-effective when they improve the benefit (e.g., health outcomes) in a less costly way than a given alternative care option. Some preventive services may also result in cost savings, where the cost of implementing the service is less than the expected future costs to treat a disease or condition.

DHA Has Implemented Two Value-Based Incentive Pilots; Other Pilots Are Planned

As of October 2019, DHA had implemented two of the three value-based pilots described in its January 2018 report to Congress, which outlines the department's plans for addressing the NDAA 2017's requirement for developing value-based incentive programs. Specifically, in this report, DHA described its intent to implement three value-based pilots in response to section 705(a) of the NDAA 2017—(1) the Performance-Based Maternity Payments Pilot, (2) the Medication Adherence Pilot, and (3) the High-Value Primary Care Provider Pilot—through modifications to its TRICARE contracts over the next 6 to 18 months.

1. Performance-Based Maternity Payments Pilot. DHA modified its T-2017 contracts to begin implementing the Performance-Based Maternity Payments Pilot in April 2018. This pilot was designed to provide both non-financial and financial incentives to hospitals that achieve and maintain excellence in maternity care quality. The first phase of this pilot focused on non-financial incentives by promoting greater transparency about the quality of maternity care delivered by hospitals in the TRICARE network. Specifically, DHA implemented a "steerage model" that identifies higher-performing hospitals in the managed care support contractors' provider directories using specific visual prompts in order to encourage beneficiaries to seek care from those institutions. The second phase of the pilot began in October 2018 and incorporated performance-based payments, or financial incentives, for network hospitals that achieve a certain level of performance on specified maternity care quality measures (a simplified sketch of this type of payment rule appears after this list). The anticipated end date for the pilot is March 2021.

2. Medication Adherence Pilot. DHA modified its TRICARE pharmacy contracts to begin implementing the Medication Adherence Pilot in February 2018. This pilot is designed to incentivize beneficiaries' adherence to medication regimens by reducing or eliminating copayments for two medications (one for diabetes and another for cardiovascular-related illnesses).

3. High-Value Primary Care Provider Pilot. As of October 2019, DHA officials told us they were still assessing the feasibility of implementing the High-Value Primary Care Provider Pilot, which would provide financial incentives (such as additional payments or reduced network discounts) to primary care providers who exceed certain quality thresholds, as well as financial incentives (such as reduced cost-shares and copayments) for beneficiaries who use these providers.
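To illustrate how a performance-based payment of the kind used in the maternity pilot's second phase could work, the sketch below pays a bonus only to hospitals that reach a quality threshold. This is an assumption-laden illustration: the report does not disclose the pilot's actual quality measures, threshold, or payment amounts, so every parameter here is hypothetical.

```python
# Illustrative sketch of a performance-based (pay-for-performance) payment
# rule like the second phase of the maternity pilot: network hospitals that
# reach a quality threshold earn an incentive payment. The measures, the
# threshold, and the payment amounts are hypothetical assumptions; the
# report does not specify the pilot's actual parameters.

QUALITY_THRESHOLD = 0.85   # hypothetical composite quality score required
INCENTIVE_RATE = 0.02      # hypothetical bonus as a share of paid claims

def incentive_payment(composite_quality_score, paid_claims):
    """Return the bonus owed to a hospital for the performance period."""
    if composite_quality_score >= QUALITY_THRESHOLD:
        return paid_claims * INCENTIVE_RATE
    return 0.0

# A hospital scoring 0.90 on $1.2 million of maternity claims earns $24,000;
# a hospital scoring 0.80 earns nothing.
print(incentive_payment(0.90, 1_200_000))  # -> 24000.0
print(incentive_payment(0.80, 1_200_000))  # -> 0.0
```

The same threshold-plus-bonus structure could, in principle, express the network discount incentive described earlier, with the offset for a discount deficit acting as the negative counterpart of the bonus.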
DHA officials said other value-based efforts are being planned to address section 705(a) of the NDAA 2017, such as value-based pilots and demonstrations that aim to incentivize providers to provide quality care—including hospital, home health, and episode-based bundled payment pilots, among others. DHA has reported that these projects will offer DHA the opportunity to test value-based payment models and incorporate innovative ideas and solutions into its TRICARE contracts.

DHA Has Partially Implemented Six of 13 Required Elements, but It Is Unclear When Implementation of All Elements Will Be Complete

As of January 2020, we found that DOD had partially implemented six of the 13 elements required by sections 705(c)(5) and (c)(6) of the NDAA 2017 in its T-2017 contracts. DHA leadership explained that they had decided that the department would separately address each of the 13 elements through modifications to the TRICARE contracts rather than developing a single strategy that would address all of the elements. According to DHA officials, some of the 13 elements would be implemented through modifications to the T-2017 contracts, while other elements would be addressed in the T-5 contracts, as certain elements would require more time to develop.

Section 705(c)(5): This section includes nine elements that focus on various aspects of health care delivery. We found that DHA had partially implemented six of the nine elements—including provider networks, medical management, telehealth services, beneficiary enrollment, value-based methodologies, and prevention and wellness incentives (see table 2). Although DHA officials generally described their approach for addressing the three other elements, they were not able to provide documentation, such as implementation plans, with specific time frames or actions needed to fully implement each of them. Specifically, when asked about time frames for complete implementation, DHA officials told us that many of the elements should be addressed through the T-5 contracts. DHA officials also told us the department's approach to addressing these elements—such as provider networks—will be informed by ongoing and future value-based pilots and demonstrations; however, data from these pilots and demonstrations are not expected to be available until they have concluded.

Section 705(c)(6): This section included four required elements that focus on the delivery of health care in rural, remote, and isolated areas. DHA has not implemented any of these requirements. DHA officials told us they are considering requirements for T-5 that will address the four elements, but they did not provide documentation with specific time frames and actions needed to fully implement each of them (see table 3). Without plans that include specific time frames and actions needed, it is unclear exactly how and when DHA will fully implement all 13 elements into its TRICARE contracts. As we have previously reported, sound planning calls for results-oriented organizations to develop plans that (1) provide tools to ensure accountability, such as time frames, and (2) identify specific activities to obtain desired results, among other things. Developing and implementing plans with time frames and actions needed can help to ensure that DHA fully implements all 13 required elements, which is particularly important since it is in the process of developing its T-5 contracts.
Conclusions

The NDAA 2017 required DHA to make numerous changes to its TRICARE program—some of which impact its T-2017 managed care support contracts. In particular, the act required DHA to modify these contracts to ensure consistency with 13 specific elements related to improving health care delivery, such as provider network flexibility, increased use of telehealth services, and prevention and wellness incentives, among others. While DHA has taken steps to begin implementing some of these elements in its current T-2017 contracts, it has not developed implementation plans with time frames and specific actions needed to guide its efforts, which could help ensure that DHA successfully implements all of the required elements. Until these elements are fully implemented, the department may not achieve the TRICARE program improvements Congress intended related to access to care, health outcomes, quality of care, beneficiaries' experience, and cost efficiency.

Recommendation for Executive Action

We are making the following recommendation to DHA: The Director of DHA should develop and implement plans with time frames and specific actions needed for all 13 required elements to be reflected in the TRICARE contracts. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOD for comment. In its written comments, reproduced in appendix I, DOD generally agreed with our findings and concurred with our recommendation. The department reiterated its plans to address each of the elements required by sections 705(c)(5) and (c)(6) of the NDAA 2017 as part of its T-5 contracts. DOD also provided technical comments, which we incorporated as appropriate. In addition, DOD provided updated information on the status of its efforts to address certain elements required by section 705(c)(5). As a result of this information, we updated the status of the following two elements from "not implemented" to "partially implemented" in our overall assessment for the following reasons: (1) Provider Networks: The department provided evidence that the Accountable Care Organization demonstration was implemented on January 1, 2020, and that beneficiaries were enrolled in the program. (2) Medical Management: The department provided evidence that it awarded a contract for the TRICARE Select Patient Navigator Pilot on December 27, 2019, and that the contractor began work on January 1, 2020. The department also provided updates on the status of two additional elements—Financial Incentives and Medical and Lifestyle Incentives. However, while we updated the department's plans for these elements in the report, we determined that their status should remain "not implemented" in our overall assessment for the following reasons: (1) Financial Incentives: The department provided evidence that it plans to provide financial incentives to Kaiser Permanente providers on an annual basis under the Accountable Care Organization demonstration. These incentives are expected to begin in 2021. (2) Medical and Lifestyle Incentives: According to department officials, these incentives for beneficiaries may be provided by Kaiser Permanente on an annual basis under the Accountable Care Organization demonstration, at no cost to the government. These officials told us they were unsure whether and how such incentives may be more broadly applied to the TRICARE program. We are sending copies of this report to the Department of Defense, appropriate congressional committees, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Sharon Silas, Director, Health Care, at (202) 512-7114 or silass@gao.gov, or William T. Woods, Director, Contracting and National Security Acquisitions, at (202) 512-4841 or woodsw@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix II.

Appendix I: Comments from the Department of Defense

Appendix II: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts named above, Bonnie Anderson, Assistant Director; La Sherri Bush, Analyst-in-Charge; LaKendra Beard, Jacquelyn Hamilton, Jessica Karnis, Miranda Riemer, and Lauren Wright made contributions to this report. Also contributing were Sam Amrhein and Vikki Porter.
Why GAO Did This Study

In fiscal year 2018, DOD provided health care services to more than 9 million eligible beneficiaries through TRICARE, its regionally structured health care program. In each of its two regions (East and West), DOD uses contractors to manage health care delivery through civilian providers. The NDAA 2017 required a number of changes to the TRICARE program through its contracts. Specifically, it required DOD to implement a strategy with 13 specific elements—related to provider networks, telehealth services, and referrals, among other areas—for its contracts. The NDAA 2017 and the accompanying Senate Report 114-255 included provisions for GAO to examine DOD's managed care support contract acquisition process and requirements. This report (1) describes changes DOD made to its TRICARE contracts and acquisition process between its T-3 and T-2017 contracts and (2) examines the extent to which DOD implemented the 13 elements as required by the NDAA 2017, among other things. GAO reviewed and analyzed relevant federal statutes and T-3 and T-2017 planning and contracting documents, and interviewed DOD officials and TRICARE contractors.

What GAO Found

The Department of Defense (DOD) made selective changes to its TRICARE managed care support contracts and acquisition process from the third generation of contracts (T-3) to the fourth generation (T-2017) of contracts. According to DOD officials, the contracts are generally the same, and changes were made to clarify or streamline TRICARE requirements and administrative processes. Officials told GAO they prioritized the continuation of beneficiary services rather than implementing significant contract changes that could potentially be disruptive. Some of the T-2017 changes include a reduction from three to two contract regions and a different method for paying the contractors. GAO found that DOD has partially implemented six of the 13 elements required by the National Defense Authorization Act for Fiscal Year 2017 (NDAA 2017) in its T-2017 contracts. DOD leadership explained that they decided to implement each of the 13 elements separately rather than by developing a single strategy that addressed all of the elements. DOD officials explained that some of the 13 elements will be implemented through modifications to the T-2017 contracts, while others will be addressed in the fifth generation of managed care support contracts (T-5), which are expected to be awarded in 2021. While DOD has taken steps to begin implementing some of the required elements, GAO found that DOD lacks plans with specific time frames and actions needed to fully implement all of the elements. As a result, it is unclear exactly how and when all 13 elements will be implemented.

What GAO Recommends

GAO recommends that DOD develop and implement plans with time frames and specific actions needed for all 13 required elements in the TRICARE contracts. DOD concurred with GAO's recommendation and noted its plans to address each of the required elements in the T-5 contracts.
Timeliness of SBIR and STTR Proposal Review and Notification

Overall, component agencies reviewed proposals and notified awardees within the required time for 12,890 of the 15,453 SBIR and STTR awards that we reviewed (84 percent) for fiscal years 2016 through 2018. The Small Business Act and the SBIR/STTR policy directive require most agencies to notify applicants of the agency's decision within 90 calendar days and require NIH and NSF to do so within 1 year. Agencies notified awardees after the required time period for 2,533 of 15,453 awards (16 percent). Three of the 28 component agencies met the notification requirement for every award in the data we reviewed, and nine additional component agencies did so for at least 90 percent of their awards. The remaining 16 component agencies met the notification requirement for less than 90 percent of their awards. Table 2 lists the mean and median notification times and the percentage of awardees notified within the required time period for each component agency. Some notifications occurred within days after the required time period, while others occurred months later. For example, all of the notifications by the Department of Education from fiscal years 2016 through 2018 that took longer than 90 days occurred in 91 days. Department officials attributed the one-day difference to interpreting the 90-day requirement as a 3-month requirement. Similarly, all of the notifications for Army STTR awards that occurred after the 90-day requirement occurred within 92 days. Of the 2,533 awards with notifications after the required time, notifications occurred on average about 1.5 months late. During the 3 fiscal years that we reviewed, some component agencies had substantial changes from year to year in the percentage of awardees that they notified within the required time period, while other component agencies consistently notified about the same percentage of awardees. For example, the Department of Energy's Office of Science and the Army SBIR program each had a single fiscal year during which they notified less than 50 percent of awardees within the required time period, substantially less than during the other fiscal years we examined. Table 3 describes the percent of awardees notified within the required time by each component agency for each of the 3 fiscal years we examined. Agency officials described several factors that affect the time spent reviewing proposals and notifying awardees, including (1) the availability of reviewers, (2) the number of proposals to review, and (3) other agency-specific factors.

Availability of reviewers. Officials from some component agencies we interviewed said the availability of agency staff or external reviewers affected the time they spent reviewing proposals. For example, USDA officials told us that the agency cannot notify awardees within 90 days because they need additional time to identify and recruit experts for their external peer review system. USDA officials compared their review process to that of NSF and NIH, the two agencies that are directed to complete proposal review and notification within 1 year. Similarly, Navy officials said that the availability of reviewers was the biggest variable in completing their proposal review and notification process. These reviewers are Navy employees who contribute part of their time to reviewing SBIR and STTR proposals while continuing to perform their regular duties.
According to Navy officials, although they give reviewers deadlines based on the number of proposals they have to review, conflicts with their regular duties or higher-priority tasks may cause reviewers to miss their deadlines. In contrast, Department of Education officials said that they identify and train reviewers before the agency receives proposals so that the reviews may begin as soon as possible. Other agencies, however, may not know what areas of expertise reviewers will need until the agency has examined the proposals it received.

Number of proposals. Officials from some component agencies we interviewed said the number of proposals they receive affected the time spent reviewing proposals and notifying awardees. For example, officials from the Department of Transportation said that the number of proposals they receive can range between two and 40, which makes it difficult to predict the workload of agency evaluators who perform the proposal reviews. Similarly, National Institute of Standards and Technology officials said that the number of proposals they receive fluctuates from year to year. Because agencies must review all proposals that meet the minimum requirements, an increase in the number of proposals directly increases the workload of proposal reviewers.

Other agency-specific factors. Some component agency officials identified factors specific to their agency or review process that affected the time needed. For example: Two component agencies within the Department of Health and Human Services—the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA)—notified none of their awardees within the required time in fiscal years 2016 through 2018. CDC and FDA participate in the solicitation and review process led by NIH. However, while NIH has 1 year to notify awardees, these agencies are required to notify awardees within 90 calendar days. CDC officials said that participating in the longer NIH program is more efficient than creating their own review process and allows them to leverage additional programs at NIH that support small business awardees. Environmental Protection Agency officials told us that their review process includes three consecutive reviews, which leads the agency to regularly request waivers to exceed the 90-day notification requirement. These reviews include an administrative review for responsiveness to the solicitation, an external peer review process, and an internal review by the SBIR program office. Some agency officials also identified continuing resolutions, sequestration, or government shutdowns as factors that could slow proposal review. Proposal review and notification activities could be affected because the availability or amount of funds for agency activities is uncertain in these instances. For example, a Defense Microelectronics Activity official told us that their agency generally completes its proposal review process within 90 days but does not notify awardees until it has determined funding availability for awards later in the fiscal year. National Institute of Standards and Technology officials described a delay in notifying one awardee, a replacement awardee selected after the initial awardee was determined ineligible during a pre-award assessment. The agency made the replacement selection immediately, but the replacement awardee was notified approximately 20 days after the 90-day requirement.
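To make the timeliness measures used in this section concrete, the sketch below shows one way the notification statistics could be computed from award records: count the days from solicitation close to awardee notification, then compare them with each agency's requirement (90 calendar days for most agencies; 1 year for NIH and NSF). This is an illustration only; the record structure and dates are hypothetical and do not reflect GAO's actual data files or analysis code.

```python
# Illustrative sketch of the timeliness summary used in this section:
# for each award, count days from solicitation close to awardee
# notification and compare against the agency's requirement (90 days
# for most agencies; 1 year for NIH and NSF). Field names and records
# are hypothetical, not GAO's actual data schema.
from datetime import date
from statistics import mean, median

awards = [
    {"agency": "Education", "closed": date(2016, 2, 1), "notified": date(2016, 5, 2),  "required_days": 90},
    {"agency": "Navy",      "closed": date(2016, 2, 1), "notified": date(2016, 4, 15), "required_days": 90},
    {"agency": "NIH",       "closed": date(2016, 2, 1), "notified": date(2016, 11, 1), "required_days": 365},
]

days = [(a["notified"] - a["closed"]).days for a in awards]
on_time = [d <= a["required_days"] for d, a in zip(days, awards)]

print(f"mean: {mean(days):.0f} days, median: {median(days):.0f} days")
print(f"notified within required time: {100 * sum(on_time) / len(on_time):.0f}%")
```

Note that the hypothetical Education record above spans 91 days, one day past the 90-day requirement, mirroring the actual Department of Education notifications described in this section.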
Timeliness of SBIR and STTR Award Issuance

Overall, component agencies issued 11,710 of the 15,453 awards we reviewed (76 percent) within the recommended time period for fiscal years 2016 through 2018. The SBIR/STTR policy directive recommends that most agencies issue an award within 180 days and recommends that NIH and NSF do so within 15 months. Agencies issued 3,743 of the 15,453 awards (24 percent) after the recommended time period. Three of the 28 component agencies issued every award in the data we reviewed within the recommended time, and five additional component agencies did so for at least 90 percent of their awards. The remaining 20 component agencies issued less than 90 percent of their awards within the recommended time period. For the 3,743 awards that agencies issued after the recommended time period, the average award was issued about two and a half months after the recommended time. Table 4 lists the mean and median award issuance times and the percent of awards issued within the recommended time for each component agency. During the 3 fiscal years that we reviewed, some component agencies had substantial changes from year to year in the percentage of awards they issued within the recommended time period, while other component agencies consistently issued about the same percentage of awards within the recommended time period. For example, the Department of Energy's Advanced Research Projects Agency-Energy issued no awards within the recommended time in each of the three years we examined. Table 5 describes the percent of awards issued within the recommended time period by each component agency for each of the 3 fiscal years we examined. Agency officials described several factors that increased the time spent issuing awards, including (1) additional time needed to issue certain types of contracts, (2) the availability of grants and contracting officers, (3) delays coordinating among agency officials, (4) the responsiveness of awardees, and (5) the availability of funding for the awards.

Cost reimbursement contracts. Officials from some component agencies we interviewed said that the contract type was a factor that affected the time needed to issue SBIR and STTR awards. Specifically, officials said cost reimbursement contracts took longer to issue because of the need to review the awardee's accounting system in accordance with federal acquisition regulations. For example, officials from the Defense Advanced Research Projects Agency (DARPA) said cost reimbursement contracts routinely take more time to award than fixed-price contracts because of this accounting system review. According to DARPA officials, this review can add 45 days or more to the award process. In February 2019, we found that the Department of Defense does not have a mechanism to monitor and ensure that contractor business system reviews and audits are conducted in a timely manner, and we recommended that the department develop such a mechanism. Our analysis of the SBIR and STTR award data confirmed that component agencies spent more time issuing awards identified as cost reimbursement contracts than issuing fixed-price contracts. We found that SBIR and STTR awards identified as cost reimbursement contracts in the fiscal year 2016 through 2018 data took significantly longer to issue than those identified as fixed-price, as shown in figure 1. Fixed-price contracts took on average 152 days, and cost reimbursement contracts took 231 days (79 days longer).
Cost reimbursement contracts also took on average 40 days longer than contracts that were not specified as fixed price or cost reimbursement.

Availability of grants or contracting officers. The availability or experience of agency staff to negotiate the contract or grant can be a factor, according to some component agency officials. First, some officials said limited availability of grants or contracting officers was a factor in the time to issue awards and may result in delays. For example, officials from both Army program offices said that the workload for contracting officers is high, and SBIR and STTR awards are part of a larger contracting backlog. Similarly, officials from the National Institute of Standards and Technology and the National Oceanic and Atmospheric Administration also said that the availability of grants and contracting officers is a pervasive issue for federal agencies that can affect award timeliness. Second, officials from some component agencies said that the contracting officer's level of experience with small business awards affects the time needed to issue SBIR and STTR awards.

Coordination among agency officials. Air Force officials said that the need for coordination among agency officials, such as between the contracting officer and proposal evaluators, can create delays. Because the proposal review and award process can require coordination among multiple officials who are not always immediately available, delays may occur as one official waits for input or information from another. Beginning in fiscal year 2018, the Air Force made changes to its proposal review and award process for a subset of awards that included scheduling dedicated time for reviewers, contracting officers, and other agency officials to jointly evaluate proposals and process awards. This change guaranteed the availability of agency officials and reduced the time needed for coordination among them. Overall, it allowed the agency to issue awards within a few days or weeks. According to agency officials, the Air Force made about 150 awards in 2018 through this process, and they expect that about one-third of Air Force awards in fiscal year 2019 and half of awards in fiscal year 2020 will use this expedited process.

Responsiveness of awardees. Some component agency officials said that the responsiveness of the small business was a factor in delays. For example, officials from USDA said that the majority of SBIR grantees at USDA are first-time grantees who have never worked with the federal government, and this can extend the time it takes to issue the award. In order to receive an SBIR or STTR award, the small business must, among other things, submit a certification that it meets size, ownership, and other requirements. Delays in providing these certifications or other information required by the awarding agency can therefore delay award issuance. In our July 2018 report that reviewed DOD's weapon-systems-related contracts awarded from fiscal year 2014 through fiscal year 2016, contracting officials stated that quicker contractor responses to requests for additional information could help reduce the time between when a solicitation is issued and when a contract is awarded.

Availability of funding. Some component agency officials said that delays in determining the amount of funding available for small business awards, due to continuing resolutions or delays in intradepartmental fund transfers, may delay the issuance of awards.
For example, NASA officials said that they estimate the agency's R&D budget at the start of the fiscal year to calculate the amount required for SBIR and STTR awards. According to these officials, if NASA is operating under a continuing resolution at the start of the fiscal year, the estimate may be smaller than the final appropriated amounts. In this case, NASA would go back to its proposals to make additional awards from the pool of proposals that were rejected under the original estimate, and this would lead to longer issuance times for some awards.

Agency Comments and Our Evaluation

We provided a draft of this report to SBA and the 11 agencies that participated in the SBIR and STTR programs in fiscal years 2016 through 2018 for their review and comment. SBA, the Department of Defense, and the Department of Education provided written comments that are reproduced in appendixes II, III, and IV. In addition, the Department of Energy, NIH within the Department of Health and Human Services, the Department of Transportation, and the National Institute of Standards and Technology within the Department of Commerce provided technical comments, which we incorporated as appropriate. The remaining agencies told us they had no comment. In its formal comments, the Department of Education stated that it has taken steps to ensure that future awardees will be notified within the required period. In their comments, SBA and the Department of Defense suggested that phase I and II awards be evaluated separately in future reports. In this report, we combined phase I and II awards because we did not find a statistically significant difference in notification time between phase I and II awards in the fiscal year 2016 through 2018 data that we examined. However, some analyses showed that phase II awards took longer to issue. We may further examine differences between phase I and phase II awards in subsequent reports. SBA also described the importance of minimizing delays between phase I and phase II awards. We did not evaluate the time between phase I and subsequent phase II awards in this report, but we agree that the time between awards may be of interest in future reports because, as noted by SBA, it may affect small businesses' ability to retain key personnel. SBA also sought explanations for various dates and figures used in our analysis, and we updated the report to include the definitions used when collecting award data and to describe our figures in more detail. The Department of Defense also stated that the SBIR and STTR policy directive does not explicitly include phase II awards in its 90- and 180-day timeliness requirements. However, we confirmed with SBA—the agency that issues the directive—that the 90-day requirement for notification of selection and the 180-day recommendation for award issuance apply to both phase I and phase II awards. The Department of Defense further stated that subsequent phase II awards could occur several years after the end of the initial phase II award and should not be included in the analysis of phase II awards. In this report, we took steps to eliminate these outliers from the data. We are sending copies of this report to the appropriate congressional committees, the Acting Administrator of SBA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6888 or neumannj@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Timeliness of Agencies' Small Business Awards

This appendix describes the awards made by agencies participating in the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, based on the data provided to GAO for fiscal years 2016 through 2018. These data include figures showing the (1) proposal review and notification time, (2) award issuance time, and (3) distribution of awards by fiscal year and phase. The fiscal year and phase figure describes the number of phase I and phase II awards issued in fiscal years 2016 through 2018 and is based on the first year of the award activities. For example, if an agency obligated funding to a phase II award in fiscal years 2017 and 2018, the award is counted among the fiscal year 2017 phase II awards.

[The per-agency figures showing fiscal year 2016-2018 awards and small business award timeliness are omitted here. Program participation for each component agency was as follows: NIST, NOAA, Education, CDC, FDA, DHS S&T, DNDO, DOT, EPA, and USDA participated in SBIR only; Air Force, Navy, MDA, DARPA, DHA, SOCOM, DLA, DTRA, CBD, NGA, DMEA, the Office of Science, ARPA-E, NIH, NASA, and NSF participated in both SBIR and STTR.]

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Rob Marek (Assistant Director), Tind Shepper Ryen (Analyst-in-Charge), Nora Adkins, David Aja, Jenny Chanley, Robert Letzler, Anika McMillon, Amanda Postiglione, and Ben Shouse made key contributions to this report.
Why GAO Did This Study

Since the SBIR and STTR programs began in 1982 and 1992, respectively, federal agencies have awarded at least 162,000 contracts and grants totaling around $46 billion to help small businesses develop and commercialize new technologies. Eleven agencies participate in the SBIR program, and five of them also participate in the STTR program. Each agency issues a solicitation requesting proposals at least once a year. Agencies then review proposal submissions and issue awards using grants or contracts. The SBIR and STTR policy directive recommends that most agencies issue awards no more than 180 calendar days from solicitation close. The John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for GAO to report on the timeliness of agencies' SBIR and STTR proposal review and award issuance. This report examines the time agencies spend issuing SBIR and STTR awards and the factors that affect the time spent, among other things. Within the 11 agencies, GAO reviewed 28 component agencies that participate in these programs. GAO analyzed agency-provided award data from fiscal years 2016 to 2018 for 15,453 awards and interviewed officials from the Small Business Administration and 26 of the component agencies.

What GAO Found

In fiscal years 2016 through 2018, agencies issued 11,710 of the 15,453 Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) awards we reviewed (76 percent) within the recommended time period. However, component agencies varied in the percentage of awards that they issued within the recommended time (see figure). Agency officials described a number of factors that can affect award issuance timelines, including the following:

- Some agencies use cost reimbursement contracts, which require additional agency review under federal acquisition regulations.
- Some contracting officers have limited expertise in issuing SBIR and STTR awards, and their overall workloads can be heavy.
- Small businesses may be slow to respond to agency requests for information, such as requests for information needed to meet government contracting requirements.
Background The federal Health Center Program was established in the mid-1960s in an effort to help low-income individuals gain access to health care services. The Health Center Program, authorized in Section 330 of the Public Health Service Act, is administered by HRSA's Bureau of Primary Health Care and makes grants—known as Section 330 grants—to four types of health centers that primarily serve low-income populations: 1. Community health centers. These health centers serve the general population with limited access to health care. They are required to provide primary health services to all residents who reside in the center's service area. More than three-quarters of health centers are community health centers. 2. Health centers for the homeless. These health centers provide primary care services to individuals who lack permanent housing or live in temporary facilities or transitional housing. These centers are required to provide substance abuse services and supportive services targeted to the homeless population. 3. Health centers for residents of public housing. These health centers provide primary health care services to residents of public housing and individuals living in areas immediately accessible to public housing. 4. Migrant health centers. These health centers provide primary care to migratory agricultural workers (individuals whose principal employment is in agriculture and who establish temporary residences for work purposes) and seasonal agricultural workers (individuals whose principal employment is in agriculture on a seasonal basis but who do not migrate for the work). HRSA's Section 330 grants are funded by a combination of discretionary appropriations and, since 2011, mandatory appropriations provided from the Community Health Center Fund (CHCF). From fiscal years 2010 through 2017, total funding appropriated for Section 330 grants—which includes funding from discretionary appropriations and the CHCF—increased from about $2.1 billion to $4.9 billion (see fig. 1). According to HRSA data, approximately 70 percent of appropriations for Section 330 awards in fiscal year 2017—or about $3.5 billion—were funded by the CHCF. HRSA officials also told us that the total amount of CHCF appropriations may differ from the total amount of awards funded because, for example, appropriations may be (1) used for administrative costs, (2) reduced because of sequestration, or (3) carried over between fiscal years. Health centers are required to provide comprehensive primary health services, including preventive, diagnostic, treatment, and emergency health services. (See table 1.) All services that health centers provide must be available to patients at the center regardless of patient payment source or ability to pay and must be available (either directly or under a referral arrangement) to patients at all health center service sites. Services are provided by clinical staff—including physicians, nurses, dentists, and mental health and substance abuse professionals—or through contracts or cooperative arrangements with other providers. In addition to the services they provide, health centers are also required to document the unmet health needs of the residents in their service area and to periodically review their service areas to determine whether the services provided are available and accessible to area residents promptly and as appropriate.
Health centers also must have a sliding fee scale based on a patient's ability to pay and must be governed by a community board of which at least 51 percent of the members are patients of the health center. HRSA determines whether health center grantees meet these and other health center program requirements when making award determinations. While Health Centers' Revenue Doubled from 2010 through 2017, the Share of Revenue from Grants Decreased Our analysis shows that total revenue received by health centers nationwide more than doubled from calendar years 2010 through 2017—from about $12.7 billion to $26.3 billion (see fig. 2). Over the same time period, both the number of health centers and the number of patients served also increased. The number of health centers increased from 1,124 centers in 2010—operating 6,949 sites—to 1,373 centers in 2017—operating 11,056 sites. In addition, the total number of patients served at health centers over the same time period increased by 7.7 million patients, from 19.5 million to 27.2 million. See appendix I for additional information. While the total revenue received by health centers more than doubled from 2010 through 2017, the share of revenue received from grants—including Section 330 grants and other federal and non-federal grants—decreased, from 38.0 percent of total revenue in 2010 to about 30.2 percent in 2017. During the same time period, the share of revenue health centers received from Medicaid, Medicare, and private health insurance increased (see fig. 3). (See app. II for more information on health centers' revenue from 2010 through 2017.) While the share of health centers' total revenue coming from all grants decreased from 2010 to 2017, the share of revenue from one type of grant—Section 330 grants—increased. Specifically, the share of revenue health centers received from Section 330 grants—a portion of which is funded by the CHCF—increased from 15.7 percent of health centers' total revenue in 2010 to 18.0 percent in 2017 (see figure 4). Our analysis also shows that the share of revenue health centers receive from Section 330 grants varies by state. As figure 5 below shows, in 2017, health centers in 2 states received more than 40 percent of their total revenue from Section 330 grants, while health centers in 18 states received less than 20 percent of total revenue from these grants. HRSA Awarded CHCF Grants Primarily to Support Ongoing Operations and Services at Health Centers Our analysis of HRSA data shows that for the 7-year period from fiscal years 2011 through 2017, HRSA provided health centers with about $15.8 billion in Section 330 grants funded by the CHCF. Most of this funding—$12.6 billion, or nearly 80 percent of all grants awarded through the CHCF during this period—was awarded for the purpose of service area funding, which supports ongoing operations and services across the nearly 1,400 health centers nationwide (see fig. 6). The remaining $3.2 billion in CHCF grants were awarded to increase the amount of services provided at existing health centers; to increase the number of health centers and sites; and for other special initiatives, such as initiatives to support health information technology. Service area funding. From fiscal years 2011 through 2017, HRSA used the CHCF to provide health centers with approximately $12.6 billion in grants for service area funding, which supports ongoing operations and service delivery.
HRSA officials told us that these CHCF grants are used to fill the gap between what it costs to operate a health center and the amount of revenue a health center receives. As such, the awards are a primary means through which health centers provide health care services that may be uncompensated, including services for uninsured patients, services not typically reimbursed by other payers (such as adult dental care), and other services such as transportation and nutritional education. These awards can cover uncompensated care costs for patients with incomes low enough to qualify for sliding fee assistance, which reduces or waives the cost of services for patients based on their ability to pay. In addition, these awards can cover patients who have private insurance but face substantial deductibles and cost-sharing. Officials we interviewed from the Congressional Research Service, George Washington University's Milken Institute, and the National Association of Community Health Centers similarly noted that CHCF grants support services not typically covered by public health insurance, such as adult dental care services not generally covered by Medicare or Medicaid. Increasing services at existing health centers. From fiscal years 2011 through 2017, HRSA used the CHCF to provide health centers with about $1.2 billion in grants to help increase the amount of services offered at existing health centers that chose to apply for an award. This amount included funding to increase the availability of specific health care services, such as dental care, as well as funding to support health centers' efforts to extend service hours or increase the number of available providers. Specifically, these grants were awarded for the following: Behavioral and mental health, substance abuse. Three grants totaling about $400.8 million were awarded to expand access to behavioral health, mental health, and substance abuse services. These awards focused primarily on integrating primary care and behavioral health care services and expanding substance use services at existing health centers, such as medication-assisted treatment for opioid-use disorder. Oral health. A grant for about $155.9 million was awarded to increase access to oral health services and improve oral health outcomes by funding new onsite providers and supporting the purchase and installation of dental equipment. Expanding Services. Two grants—one in fiscal year 2014 for $295.6 million and another in fiscal year 2015 for about $349.6 million—were made to increase access to comprehensive primary health care in various ways, at the discretion of individual health centers. At existing sites, health centers may have chosen to expand service hours, increase the number of health care providers, or expand services such as oral health, behavioral health, pharmacy, and vision services. Increasing the number of health centers and sites. From fiscal years 2011 through 2017, HRSA awarded about $1.1 billion—or about 7 percent of total CHCF funds—to organizations to help establish new health centers or new sites at existing health centers. Specifically, HRSA awarded grants for the following purposes: New Access Point (NAP) Awards. Most of the funding to increase access to health centers—about $648.5 million of the $1.1 billion—was provided through what are called NAP awards.
According to HRSA officials, there are two primary ways these funds can be used—either to allow a new organization to become a health center (about 30 percent of grant applicants) or to allow an existing health center to add one or more service sites (about 70 percent of grant applicants). HRSA officials told us that they funded 1,059 NAP awards to new and existing health centers from fiscal years 2011 through 2017 for a combined total of 1,609 proposed new health centers or sites. These awards included 295 awards to new organizations and 764 awards to existing health centers adding one or more service sites. (See table 2 below for more information on the increase in health centers resulting from NAP awards.) Among the 1,609 total proposed new health centers or sites, 686 were in rural areas, including 191 new health centers and 495 additional sites at existing centers. Construction Grants. HRSA awarded construction grants totaling about $411.3 million through the Health Infrastructure Investment Program to help existing health centers alter, renovate, expand, or construct a facility. According to HRSA officials, construction grants may increase the number of health center sites or may result in the consolidation of sites while still expanding access to care. Health Center Planning Grants. HRSA awarded a Health Center Planning grant in fiscal year 2011 for about $10.3 million to support planning and development of comprehensive primary care health centers. Collectively, a total of 5,536 new health center sites were added in the United States from fiscal years 2011 through 2017. Of these new sites, 3,838 were in urban locations and 1,698 were in rural locations. While many of these new health center sites were from NAP awards, as previously described, other grants either funded by the CHCF or by discretionary appropriations may have contributed to the establishment of new health center sites. For example, HRSA officials told us that health center sites may be added through a change of scope to their service area competition award or through other types of grants funded by the CHCF, such as grants to increase adult dental services. However, according to HRSA officials, the data do not allow for directly associating the number of new sites with those grants, as the grants may be used for multiple purposes. Figure 7 below shows the locations of health center sites added during this time period that are active as of February 2019. Other special initiatives. From fiscal years 2011 through 2017, HRSA awarded about $898.9 million of CHCF funds for grants to health centers to support other special initiatives and to address identified priorities or emerging health care needs. Specifically, HRSA awarded grants to those health centers that chose to apply for the following purposes: Health information technology. Three grants totaling about $243.4 million were awarded to advance the adoption and implementation of health information technology. For example, the purpose of one grant—the Health Center Controlled Networks—was to advance the adoption, implementation, and optimization of health information technology. Another grant provided supplemental funding to improve the electronic reporting capabilities of health centers in Beacon Communities. HIV. Two grants totaling about $23.8 million were awarded with the goal of increasing access to HIV care and services. One specifically targeted prevention and treatment services in those communities most affected by HIV. Outreach and enrollment.
$222.0 million in grant funding was awarded to support health centers in raising awareness of affordable insurance options and providing eligibility and enrollment assistance to uninsured patients of health centers and residents in their approved service areas. Patient-Centered Medical Home. About $84.6 million in grant funding was awarded to support HRSA efforts to expand the number of patient-centered medical homes, with a particular focus on improving quality of care, access to services, and reimbursement opportunities. Quality improvement. Approximately $305.1 million in grant funding was awarded to support health centers that displayed high quality performance so that they could continue to strengthen quality improvement efforts. Specifically, the funds were to help health centers further improve the quality, efficiency, and effectiveness of health care delivered to the communities served. Training and technical assistance. Two grants totaling about $14.3 million were awarded to support training and technical assistance for health centers' programmatic, clinical, and financial operations. One grant focused on the delivery of training and technical assistance by national organizations, and the other grant was based on statewide and regional needs. Zika. A grant for about $5.7 million was awarded to health centers that chose to apply to expand their existing activities to strengthen the response to the Zika virus in Puerto Rico, the U.S. Virgin Islands, and American Samoa. These activities included outreach, patient education, screening, voluntary family planning services, and/or treatment services. See appendix III for a complete list of all grants awarded through the CHCF by category. Agency Comments We provided a draft of this report to HHS. HHS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further action until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at farbj@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Information on Health Centers and Patients Served This appendix provides information on health centers and patients served. Specifically, figure 8 illustrates the number and location of health centers in 2017; figure 9 illustrates the growth in health centers and sites since 2010; figure 10 illustrates the growth in patients served at health centers since 2010; and table 3 provides information on how the payer mix for patients served at health centers has changed since 2010. Appendix II: Sources and Amounts of Revenue for Health Centers, Calendar Years 2010 through 2017 HRSA's Uniform Data System defines other public insurance as state and/or local government programs, such as Washington's Basic Health Plan or Massachusetts' Commonwealth plan, that provide a broad set of benefits for eligible individuals. Other federal grants in HRSA's Uniform Data System include Medicare and Medicaid Electronic Health Record Incentive grants.
HRSA’s Uniform Data System defines non-federal grants and contracts as revenue from contracts that are not tied to the delivery of services and revenue received from state and local indigent care programs. HRSA’s Uniform Data System defines other revenue as non-patient related revenue not reported elsewhere. Examples include revenue from fund-raising, rent from tenants, medical record fees, and vending machines. Appendix III: Community Health Center Fund Awards for Health Centers, Fiscal Years 2011 through 2017 Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kristi Peterson, Assistant Director; Amy Leone, Analyst-in-Charge; Margot Bolon, Krister Friday, Jeff Tamburello, and Eric Wedum made key contributions to this report. Also contributing were Vikki Porter, Rotimi Adebonojo, Giselle Hicks, and Jennifer Whitworth.
Why GAO Did This Study In 2017, nearly 1,400 health centers provided care to more than 27 million people, regardless of their ability to pay. Health centers were established to increase the availability of primary and preventive health services for low-income people living in medically underserved areas. Health centers rely on revenue from a variety of public and private sources, including revenue from CHCF grants. HRSA began awarding grants funded by the CHCF in fiscal year 2011. GAO was asked to review the sources and amounts of health center revenue. This report describes (1) trends in health centers' revenue and (2) the purposes for which CHCF grants have been awarded. GAO analyzed HRSA data collected from health centers and compiled in its Uniform Data System to identify the sources and amounts of revenue health centers received from 2010 through 2017, the most recent data at the time of GAO's analysis. GAO also reviewed HRSA grant documentation for grants funded by the CHCF for fiscal years 2011-2017—the most recent data at the time of GAO's analysis—including information on the award amount and purpose of the grant, and reviewed published studies that described the purposes for which CHCF grants have been made. Additionally, GAO interviewed HRSA officials, authors of the published studies, and an association representing health centers. GAO provided a draft of this report to HHS. HHS provided technical comments, which GAO incorporated as appropriate. What GAO Found Health centers' revenue more than doubled from calendar years 2010 through 2017, from $12.7 billion to $26.3 billion. Health centers' revenue comes from a variety of sources, including reimbursements from Medicaid, Medicare, private insurance, and federal and state grants. While total health center revenue increased from 2010 through 2017, the share of revenue from each source changed in different ways. In particular, revenue from federal and state grants decreased from 38.0 percent of total revenue in 2010 to about 30.2 percent of total revenue in 2017 while reimbursements from Medicaid, Medicare, and private insurance increased. Over the same time period, the number of health centers increased from 1,124 centers in 2010 to 1,373 centers in 2017. In addition, the number of patients served over the same time period increased by 7.7 million patients, from 19.5 million to 27.2 million. GAO's analysis of Health Resources and Services Administration (HRSA) data shows that from fiscal years 2011 through 2017, health centers received approximately $15.8 billion in federal grants funded by the Community Health Center Fund (CHCF), which was established by the Patient Protection and Affordable Care Act in 2010. Of this total amount, 79.7 percent—or $12.6 billion—was awarded for the purpose of maintaining operations at existing health centers (see figure). According to HRSA officials, these CHCF grants are used to fill the gap between what it costs to operate a health center and the amount of revenue a health center receives. As such, officials explained, the awards are a primary means through which health centers provide health care services that may be uncompensated, including services for uninsured patients or services not typically reimbursed by other payers, such as adult dental care. 
The remaining $3.2 billion in CHCF grants were made to increase the amount of services provided at existing health centers; to increase the number of health centers and sites; and to fund other special initiatives, such as implementing health information technology.
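A note on the revenue trend described above: because total revenue more than doubled over the period, grant revenue grew in dollar terms even as its share of total revenue fell. The following back-of-the-envelope arithmetic is implied by the reported shares and totals; these products are not figures reported by HRSA:

```latex
\[
\underbrace{0.380 \times \$12.7\text{B} \approx \$4.8\text{B}}_{\text{grant revenue, 2010}}
\qquad
\underbrace{0.302 \times \$26.3\text{B} \approx \$7.9\text{B}}_{\text{grant revenue, 2017}}
\]
```

So grant revenue rose roughly 65 percent in dollar terms over the period even as its share of total revenue declined by about 8 percentage points.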
Background Secret Service’s Responsibilities and Personnel The Secret Service’s primary responsibility is to physically protect the President, Vice President, their immediate families, and visiting foreign dignitaries as well as the White House complex. The Office of Protective Operations is the principal office responsible for providing protection. Within the Office of Protective Operations, agents may be assigned to a number of divisions, such as PPD, VPD, or one of the other divisions that is responsible for protecting former presidents and visiting heads of state or heads of government. The Uniformed Division, which is also part of the Office of Protective Operations, is charged with protecting facilities and venues for Secret Service protectees. Uniformed Division officers control access to the White House complex—which includes the White House itself, the Eisenhower Executive Office Building, and the Department of the Treasury building—and the Vice President’s residence. The Secret Service’s secondary responsibility is to conduct criminal investigations in areas such as financial crimes, identity theft, counterfeiting of U.S. currency, computer fraud, and computer-based attacks on banking, financial, and telecommunications infrastructure. Over time, its investigative mission has grown to encompass a wide range of financial and cybercrimes. In addition to investigating financial and electronic crimes, special agents conduct protective intelligence— investigating threats against protected persons, including the President, and protected facilities, such as protectee residences. These activities are conducted through the Office of Investigations, which oversees the agency’s field office structure. Agents assigned to the field offices also support protective operations by, for example, providing physical protection with the assistance of federal, state, and local law enforcement entities when a protectee travels. Because agents are trained to conduct both criminal investigations and to provide protection, agents assigned to investigations in field offices can contribute to protective operations when needed. Security Incidents Involving the Secret Service Since 2012 The Secret Service has experienced a number of protection-related security incidents on the White House complex since April 2012. Incidents have included attempts to gain access to the White House complex by foot, car, and air. These incidents, among others, highlight some of the many challenges the Secret Service confronts while providing protection. See figure 1 for a description of selected security incidents. Secret Service Has Made Progress Implementing the Protective Mission Panel’s Recommendations, but Recommended Training Targets Have Not Been Achieved Secret Service Fully Implemented Some, But Not All, of the Panel’s Recommendations The PMP’s 2014 report made 19 recommendations regarding (1) training and personnel; (2) technology, perimeter security, and operations; and (3) leadership. We found that the Secret Service fully implemented 11 recommendations and is in the process of implementing the remaining eight recommendations. Table 1 summarizes the progress that the agency has made implementing each recommendation. Appendix I provides further details on the actions the Secret Service has taken to address each recommendation. 
Secret Service’s Priorities Have Been Communicated, but Recommended Training Targets in Support of Its Priorities Have Not Been Achieved Secret Service Has Taken Steps to Communicate Agency Priorities and Increased Time Spent on Protection Agencywide Following the PMP’s recommendation that the Secret Service should “clearly communicate agency priorities, give effect to those priorities through its actions, and align its operations with its priorities,” the Secret Service took steps toward communicating internally and externally the precedence of protection. For example, the agency hired a Senior Executive Director of Communications in 2016 and formed the Office of Communications and Media Relations in 2018 to manage the agency’s public affairs efforts and to oversee internal agency communication. Additionally, each year the Director of the Secret Service issues a strategic priority memorandum that identifies priority areas for the upcoming fiscal year budget. Further, from fiscal year 2014 through fiscal year 2018, special agents across the entire agency worked more hours on protection assignments and fewer hours on investigations. Specifically, in fiscal year 2014, agents spent 4.3 million hours (54 percent) on protection and 2.8 million hours (36 percent) on investigations, whereas in fiscal year 2018, agents spent 4.9 million hours (59 percent) on protection and 2.2 million hours (26 percent) on investigations. The number and percentage of hours spent on protection peaked in fiscal year 2016, but was higher in fiscal year 2018 than in fiscal year 2014. Figure 2 shows the distribution of agent work hours. The Secret Service has identified protection as its priority, and the Secret Service has identified training as an essential component of protection. In its December 2014 report, the PMP found that the security incident at the White House on September 19, 2014, arose from a “catastrophic failure of training”. The PMP therefore recommended, and Secret Service agreed at the time, that special agents assigned to PPD and VPD train for 25 percent of their work time. This was to be accomplished by allowing agents time to train during the designated training shift, known as the “fourth shift”. However, while training for special agents agencywide increased to 10 percent by 2018—more than triple the amount from fiscal year 2014— training for those assigned to PPD and VPD did not increase accordingly. Specifically, special agents assigned to PPD and VPD reported attending training for about 5.9 percent and 2.9 percent of their regular work hours in fiscal year 2018, respectively, compared with 3.3 percent and 1.9 percent in fiscal year 2014. (See figure 3.) According to our analysis of Secret Service self-reported data, in fiscal year 2018, special agents assigned to PPD and VPD missed achieving the 25- percent training target by 76 and 88 percent each. Figure 3 shows the share of regular work hours that agents assigned to PPD and VPD spent in training in fiscal years 2014 through 2018 compared to the annual target. The Secret Service established the 25 percent training target for agents assigned to PPD and VPD, and senior officials reaffirmed the target in March 2019. However, according to a senior Office of Protective Operations official, the fast operational tempo (i.e., heavy workload) hampered agents’ ability to participate in training. 
This official told us that the amount of protection that the Secret Service provides dictates how often agents are given protection assignments during the training shift. Senior Secret Service officials further added that the number of protectees and the amount of travel for the current protectees are higher for the current administration than for prior administrations, which reduces the time agents have available for training. According to Secret Service officials, the Secret Service's ability to meet the PPD and VPD training targets is dependent on increased staffing levels. The Secret Service outlined its plans to increase staffing levels in the Secret Service FY 2018–FY 2025 Human Capital Strategic Plan, which was published in May 2017. The plan describes, among other things, the agency's human capital strategic goals, the process used to determine staffing needs, and annual hiring targets. By the end of fiscal year 2025, the agency plans to employ 9,595 individuals overall, including 4,807 special agents—an increase of 1,193 special agents from the end of fiscal year 2018. To meet the special agent target, the Secret Service assumes an average net growth of about 182 special agents per year from fiscal years 2019 through 2025. However, the Secret Service's human capital strategy does not address the immediate need to help PPD and VPD meet training targets. Even though the special agent staffing level increased from fiscal years 2014 to 2018 by 332 agents, training levels for agents assigned to PPD and VPD remained below the 25-percent target at 6 and 3 percent, respectively, in fiscal year 2018. Because of the agency's zero-fail responsibility to protect the President and Vice President, the PMP concluded that it is imperative that the Secret Service strive to address training deficits as soon as possible. In addition, according to leading management practices related to training and development efforts, adequate planning allows agencies to establish priorities and determine the best ways to leverage investments to improve performance. As part of the planning, agencies may need to compare various training strategies in terms of, among other things, the availability of resources and the risk of unfavorable consequences if training investments are not made. The agency has focused on increasing training as part of its eight-year human capital strategy, but the Secret Service has not developed a plan to ensure that it meets near-term protection-related training targets. One way that the agency could address PPD and VPD training needs in the short term is to shift agents from investigations to protection assignments. Because all agents are trained both to conduct criminal investigations and to provide protection, they can be moved between investigations and protection when dictated by operational circumstances. For example, in fiscal year 2016 agents worked about 548,000 fewer investigative hours in order to support protection than they did in fiscal year 2015. This shift was made to accommodate increased protection demand from candidates in the November 2016 presidential election. See figure 5. In fiscal year 2018, agents across the agency spent nearly 2.2 million hours on investigations. By comparison, agents assigned to PPD and VPD would have needed an additional 136,000 hours and 66,000 hours of training, respectively, in fiscal year 2018 to reach the training targets. Shifting agents from investigations to protection would reduce field offices' capacity to complete investigations.
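To make the arithmetic behind the figures above concrete, the sketch below computes both the relative shortfall against the 25-percent target (the 76 and 88 percent figures reported earlier) and the additional training hours needed. It assumes the training share is measured as training hours divided by regular work hours; the division-level hour totals are hypothetical values chosen only so the arithmetic reproduces the reported 136,000- and 66,000-hour gaps, since the report does not publish the underlying totals.

```python
TARGET = 0.25  # PMP-recommended training share for PPD and VPD agents

def shortfall_vs_target(actual_share, target=TARGET):
    """Relative shortfall against the target share, e.g. ~0.76 for PPD."""
    return (target - actual_share) / target

def gap_hours(regular_hours, actual_share, target=TARGET):
    """Additional training hours needed, assuming the share is measured
    as training hours divided by regular work hours."""
    return max(0.0, (target - actual_share) * regular_hours)

# Reported fiscal year 2018 training shares.
ppd_share, vpd_share = 0.059, 0.029

# Hypothetical regular-hour totals chosen to reproduce the reported gaps.
ppd_hours, vpd_hours = 712_000, 299_000  # assumed, not published

print(f"PPD shortfall: {shortfall_vs_target(ppd_share):.0%}")    # ~76%
print(f"VPD shortfall: {shortfall_vs_target(vpd_share):.0%}")    # ~88%
print(f"PPD gap: {gap_hours(ppd_hours, ppd_share):,.0f} hours")  # ~136,000
print(f"VPD gap: {gap_hours(vpd_hours, vpd_share):,.0f} hours")  # ~66,000
```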
However, the agency’s stated priority is protection, and training was identified by the PMP as a key component of protection. Increasing staffing levels, as planned, over the long term may adequately support the protective and the investigative priorities at the levels defined by the agency. However, the Secret Service is relying on hiring goals alone to achieve its training-related targets, and it may not be able to achieve its hiring goals because of, among other things, uncertainty about whether enough funding will be requested and appropriated to expand the agency at planned levels. For example, an increase of 89 special agents was requested in the fiscal year 2020 budget submitted to Congress, 88 special agents short of the 177 planned for in the Secret Service FY 2018–FY 2025 Human Capital Strategic Plan. Further, the Secret Service has not developed a plan specifically for meeting training targets for agents assigned to PPD and VPD given current and planned staffing levels. While reviewing a draft of this report and after further consideration of the resources required to achieve the PMP-recommended training targets, in May 2019 the Secret Service stated that it no longer agrees with the training target recommended by the PMP and plans to reevaluate it. Developing and implementing a plan for meeting established training targets given current and planned staffing levels will help ensure that protection-related training targets are met in the near term and that agents assigned to PPD and VPD are prepared to carry out Secret Service’s priority—protection. Secret Service Lacks a Policy with a Documented Process for Collecting Complete and Appropriate Training Data for the Uniformed Division The PMP recommended, and Secret Service agreed, that Uniformed Division officers—who provide protection at the White House—train for 10 percent of their work time. However, the Secret Service cannot fully assess progress towards achieving the 10-percent training target because it lacks complete and appropriate protection-related training data for Uniformed Division officers. Standards for Internal Control in the Federal Government state that management should design control activities to achieve objectives and respond to risks, and that management should implement control activities through policies. Appropriate types of control activities include, for example, the accurate and timely recording of transactions. Internal control standards also state that management should use quality information to achieve the entity’s objective. Quality information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. According to Secret Service officials, training data for Uniformed Division officers are collected through various means and systems. For example, officials stated that they use a database called ePerson to capture certain types of training, such as firearms and physical training requalification sessions. In addition, officials report that Secret Service separately uses DHS’s Performance and Learning Management System (PALMS), which DHS designed to consolidate existing learning management systems for each of DHS’s agencies. PALMS collects data on computer-based training and training provided at the James J. Rowley Training Center automatically, but requires manual entry for training provided at offsite locations. According to Secret Service officials, there are a significant amount of internal on-the-job training instances that do not get recorded. 
As a result, training data collected on Uniformed Division training hours are incomplete. Further, we reviewed the training data that the Uniformed Division provided to us and identified a number of data quality issues affecting the data's completeness and appropriateness. For example, certain training was identified by location only or lacked descriptions to clearly link the training to the skills Uniformed Division officers require while working at the White House. Additionally, Secret Service counted training unrelated to protection, such as training on electronic travel vouchers and retirement planning, toward achievement of the 10-percent protection-training target. This occurred because the Secret Service lacks a policy with a documented process identifying how to capture Uniformed Division training information and the type of training to be captured. According to Uniformed Division management, the Secret Service initiated a process in 2017 to enhance the collection and compilation of Uniformed Division training information. Specifically, each Uniformed Division branch training coordinator is to send a list of completed training to Uniformed Division management every 2 weeks. That training information, along with information captured in other systems such as PALMS, is then to be manually compiled by a Uniformed Division staff person at Secret Service's headquarters. However, the Secret Service has not consistently employed the new process since initiating it in 2017. For example, according to Secret Service officials, the individual responsible for compiling the data was absent from the position for 3 months, and they did not know whether the data for that period were compiled at that time. As personnel—such as the branch training coordinators or the individual responsible for compiling the data—change positions, it is important that the Secret Service have a policy with a documented process to ensure that data collection continues over time and through staff changes. Further, the process does not include information on how or whether to capture internal on-the-job training instances, or instruction on the type of training to be captured to demonstrate that the training is protection related. Developing and implementing a policy with a documented process to collect complete and appropriate Uniformed Division officer training data would better position Secret Service to assess Uniformed Division officer training and make informed decisions about whether training needs are being met and whether the 10-percent training target is being achieved. Conclusions Protecting the White House, the President, and the Vice President, among others, is a zero-fail responsibility. As such, the Secret Service must be prepared to face every evolving threat in a rapidly changing environment. This requires specific security skills and routine training on an ongoing basis. In December 2014, the PMP recommended that the Secret Service align its operations with its priorities, and chief among these is protection. It further recommended, and the Secret Service agreed to, achieving specified training targets. While training alone will not guarantee the safety of the Secret Service's protectees, developing and implementing a plan for meeting protection-related training targets would better prepare special agents to effectively respond to the security threats faced by the President and other protectees.
Further, the Secret Service lacks a documented process for collecting Uniformed Division training data that the agency can use to determine whether officers trained for 10 percent of their work hours, as recommended by the PMP. Implementing such a policy could help the Secret Service make informed decisions about Uniformed Division training. Recommendations for Executive Action We are making the following two recommendations to the Secret Service: The Director of the Secret Service should develop and implement a plan to ensure that special agents assigned to PPD and VPD reach annual training targets given current and planned staffing levels. (Recommendation 1) The Director of the Secret Service should develop and implement a policy that documents the process for collecting complete Uniformed Division officer training data and establishes the types of information that should be collected. (Recommendation 2) Agency Comments and Our Evaluation We provided a copy of this report to DHS for review and comment. DHS provided written comments, which are reproduced in appendix II. DHS also provided technical comments, which we incorporated as appropriate. In its comments, Secret Service, through DHS, concurred with the two recommendations. However, related to the first recommendation—develop and implement a plan to ensure that special agents assigned to PPD and VPD reach annual training targets given current and planned staffing levels—Secret Service also stated that, after further consideration, the agency no longer believes that the annual training target for the Presidential and Vice Presidential Protective Divisions should be set at 25 percent of their work time. We incorporated this change in our report. In its comments, the agency stated that the Secret Service Office of Training will work with the Office of Protective Operations to evaluate the training metric for PPD and VPD and develop a plan focusing on increasing capacity at training facilities, achieving staffing growth, and creating efficiencies in protective division scheduling. With respect to the second recommendation—to develop and implement a policy that documents the process for collecting complete Uniformed Division officer training data and establish the types of information that should be collected—Secret Service, through DHS, stated that it will develop rigorous and uniform standards for collecting and reporting training data related to the Uniformed Division branch. The agency also stated that it will continue to add training programs to the Performance and Learning Management System and capture informal and on-the-job training hours for the Uniformed Division. DHS stated that the Secret Service expects to review the Enterprise Personnel Schedule System within the next 2 months and anticipates these efforts will result in a more accurate and expansive method for reporting Uniformed Division training. We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or AndersonN@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Implementation of U.S.
Secret Service Protective Mission Panel Recommendations In October 2014, the then-Secretary of Homeland Security established the United States Secret Service Protective Mission Panel (PMP) and tasked the independent panel to review the security incident that occurred at the White House on September 19, 2014, as well as related security issues. The PMP made 19 public recommendations—as well as additional classified recommendations—to the U.S. Secret Service (Secret Service) in three areas: (1) training and personnel; (2) technology, perimeter security, and operations; and (3) leadership. In this appendix, we list the 19 recommendations and accompanying text as published in the Executive Summary to Report from the United States Secret Service Protective Mission Panel to the Secretary of Homeland Security, dated December 15, 2014. The Secret Service Recruitment and Retention Act of 2018 includes a provision that we report a detailed summary of the Secret Service's progress implementing the PMP's recommendations. Specifically, for each recommendation, we provide our assessment of the Secret Service's progress, describe some of the actions the Secret Service has taken to implement the recommendations, and identify the actions the agency said it plans to complete. PMP Recommendation 1 Provide a true "Fourth Shift" for training the Presidential and Vice Presidential Protective Divisions, so that they spend two weeks out of every eight in training, and ensure that Uniformed Division officers are in training for no less than 10 percent of their time. According to the PMP, "Only with constant training can all of the teams at the White House perform the coordinated actions needed to effectively respond." Status: Implementation in progress. Actions Taken by the Secret Service Summary: The Secret Service instituted a fourth and fifth shift for its Presidential Protective Division (PPD) and a fourth shift for its Vice Presidential Protective Division (VPD). The fourth shift was created to provide agents with time to participate in training and educational opportunities, conduct advances, and take leave. Implementation is still in progress because neither PPD nor VPD special agents consistently used this time to train, and both divisions missed the training targets established by this recommendation. In commenting on a draft of this report in May 2019, the Secret Service stated that it no longer agrees with the training target and plans to reevaluate it. The Secret Service does not have a documented process for collecting complete and appropriate Uniformed Division training data that the agency can use to determine whether officers trained for 10 percent of their work hours. The Secret Service adopted the PMP goal for agents assigned to PPD and VPD to train for 25 percent of their regular work hours. However, in fiscal year 2018, according to self-reported data, these agents attended training for about 5.9 percent and 2.9 percent of their regular work hours, respectively. Although the fourth shift was developed to provide special agents assigned to PPD and VPD time away from shift work during which they could attend training, agents have largely not attended training during the fourth shift, according to agency officials. Agents are instead assigned to additional advance assignments or use leave, as agents are not allowed to take leave during the three regular shifts except in an emergency.
Additional action(s) Secret Service plans to take: In its FY 2018–FY 2025 Human Capital Strategic Plan and FY 2018–FY 2025 Training Strategic Plan, the Secret Service stated that increasing staffing levels would allow the agency more flexibility with how it schedules shifts and advance assignments, thereby freeing up special agents' and Uniformed Division officers' time for training. The agency plans to have 4,807 agents and 1,797 Uniformed Division officers by the end of fiscal year 2025, up from 3,614 agents and 1,559 officers at the end of fiscal year 2018. The Secret Service also plans to reevaluate the training target for special agents assigned to PPD and VPD. PMP Recommendation 2 Implement integrated training focused on ensuring that all teams at the White House know their roles in responding to specific threats. According to the PMP, "Teams need to train with the full complement of forces with which they will operate in real life, and the training needs to be provided force-wide, not just to those on duty on the day that training is scheduled." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: To provide integrated training between different Secret Service units, the agency conducted 10 live training exercises and six discussion-based tabletop exercises with personnel from different units in fiscal year 2017. About every two weeks, agents assigned to PPD and VPD each conduct drills and training scenarios, some of which incorporate Uniformed Division officers. In addition, the Secret Service began offering Emergency Action and Building Defense training in October 2014 to Uniformed Division personnel. In fiscal year 2017, the Secret Service conducted 10 live training exercises and six discussion-based tabletop exercises with personnel from different units. Among the live exercises, the Secret Service conducted a readiness exercise at the White House in December 2017. About every two weeks, agents assigned to the PPD and VPD conduct drills and training scenarios, some of which incorporate Uniformed Division officers, according to a Secret Service official. These drills take place at the White House, the Naval Observatory, the Department of Treasury building, and the Rowley Training Center. Secret Service began offering Emergency Action and Building Defense training in October 2014 to Uniformed Division personnel. Topics addressed in the course include judgment, firearm control, constitutional law, and emergency medicine. The Emergency Action and Building Defense course is part of the training that new Uniformed Division recruits take. In recent years, the Secret Service conducted joint training exercises with local, state, federal, and foreign tactical units. According to Secret Service officials, the agency conducted 53 of these joint training exercises in fiscal years 2015 through 2018. PMP Recommendation 3 Train in conditions that replicate the physical environment in which they will operate. According to the PMP, "A security team should also be trained so that it is intimately familiar with the space in which it is operating." Status: Implementation in progress. Actions Taken by the Secret Service Summary: To train in conditions that replicate the White House, the Secret Service secured approval to build a White House Mockup Training Facility at the James J. Rowley Training Center in Beltsville, Maryland.
However, the Department of Homeland Security’s (DHS) fiscal year 2019 budget request to Congress did not include funding for the facility. In 2017, the National Capital Planning Commission approved the Secret Service’s revised master plan for the Rowley Training Center, which includes the White House Mockup Training Facility. The fiscal year 2019 Resource Allocation Plan request submitted by the Secret Service to DHS included $77.4 million for the construction project over 5 years. However, the fiscal year 2019 DHS budget request did not include funding for the facility. Some agent and officer training takes place in the operating environment. According to a Secret Service official, agents assigned to the PPD and VPD run drills and training scenarios about every two weeks. Some of the training takes place at the White House, the Naval Observatory, and the Department of the Treasury building, although most training takes place at the Rowley Training Center. These drills and scenarios sometimes also include Uniformed Division officers. In December 2017, the Secret Service conducted a readiness exercise involving multiple units at the White House. Additional action(s) Secret Service plans to take: The Secret Service will proceed with construction of the White House Mockup Training Facility when funding is available. It was not included in DHS’s fiscal year 2019 budget request. PMP Recommendation 4 Increase the Uniformed Division, as quickly as can be appropriately managed, by an initial 200 positions, and the Presidential Protective Division by 85 positions. Perform additional analyses and, likely, further increases as necessary. According to the PMP, “Both the Uniformed Division and the Presidential Protective Division are currently stretched beyond their limits.” Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: The Secret Service increased the number of Uniformed Division officers and the number of agents assigned to PPD by 214 and 151 persons, respectively, from the end of fiscal year 2014 to the end of fiscal year 2018. Further, special agent and Uniformed Division officer external attrition rate declined year-over-year from fiscal year 2016 to fiscal year 2018. The Secret Service also conducted additional analyses to determine optimal staffing levels to be reached by the end of fiscal year 2025. The Secret Service met or exceeded its hiring goals for special agents and Uniformed Division officers in fiscal year 2016 and fiscal year 2018. From the end of fiscal year 2014 to the end of fiscal year 2018, the Secret Service increased the number of agents assigned to PPD from 248 to 399—a net increase of 151 agents—and the number of Uniformed Division officers from 1,345 to 1,559—a net increase of 214 officers. The Secret Service conducted additional analyses and set hiring goals in the FY 2018–FY 2025 Human Capital Strategic Plan. Specifically, by the end of fiscal year 2020, the Secret Service aims to have 3,927 special agents, 1,657 Uniformed Division officers, and 2,366 Administrative, Professional, and Technical staff. By the end of fiscal year 2025, the Secret Service aims to have 4,807 special agents, 1,797 Uniformed Division officers, and 2,991 Administrative, Professional, and Technical staff. 
Additional action(s) Secret Service plans to take: The Secret Service is planning to continually validate the human capital strategic plan to ensure that staffing levels are responsive to changes in the agency's operational tempo. The Secret Service also plans to fill administrative jobs that are currently filled by Uniformed Division officers with Administrative, Professional, and Technical employees, so that the Uniformed Division personnel are more focused on protection. PMP Recommendation 5 Reform and professionalize recruiting, hiring, promotion, and rotation process that puts the most talented, capable individuals in place as efficiently as possible. According to the PMP, "The Secret Service must continue efforts to develop a professionalized recruiting and hiring process that finds talented individuals, evaluates candidates rigorously for the Presidential Protective Division, and hires them quickly." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: The Secret Service hired a Chief Human Capital Officer to run the new Office of Human Resources. In addition, the agency implemented several initiatives to strengthen recruiting, expedite hiring, and clarify the promotion process. The Secret Service reorganized the Office of Human Resources as a stand-alone directorate and hired a Chief Human Capital Officer, who is a professional administrator. The Secret Service developed a National Recruitment Strategy for FY 2016–FY 2020 and the Recruitment and Outreach Plan for FY 2018 to help ensure that the agency is able to meet its staffing requirements through effective and targeted recruitment strategies. Recruitment strategies include increasing the agency's social media presence, improving the training of its recruiters, and expanding cooperation with the Department of Defense to recruit service members departing the military. The Secret Service developed a number of hiring initiatives, which, according to agency officials, reduced time-to-hire for special agents and Uniformed Division officers from an average of 395 days in fiscal year 2016 to 285 days in fiscal year 2018. For example, the agency created Entry Level Assessment Centers that allow applicants to complete several application steps in one week, including the entrance examination, the Special Agent and Uniformed Division Pre-employment Review, and the security interview. The agency also established the Applicant Coordination Center to track applicant processing. In particular, the Applicant Coordination Center brings together a polygraph examiner, a nurse, a security clearance adjudicator, and a human resources specialist to usher candidates through the hiring steps. The agency also began using the web-based Applicant Lifecycle Information System to view applicant materials, process security investigations, send conditional job offers, and track candidates' progress in one place. The Secret Service published special agent Career Progression guidelines in September 2015 and published a revision to the special agent Merit Promotion Process for agents in May 2017. Additional action(s) Secret Service plans to take: According to agency officials, the Secret Service is currently revising the Uniformed Division Merit Promotion Process and is implementing the Administrative, Professional, and Technical Career Progression Plan. The agency also plans to update and consolidate internal policies for agent and Uniformed Division officer recruitment and hiring.
PMP Recommendation 6 Ensure that the Office of Technical Development and Mission Support proactively reviews and refreshes the Service's technological footprint. The Service should receive dedicated funds for technology, both within its own budget and within DHS Science and Technology's budget, to accomplish these tasks. According to the PMP, "Technology systems used on the complex must always remain on the cutting edge, and the Secret Service must invest in technology, including becoming a driver of research and development that may assist in its mission." Status: Implementation in progress. Actions Taken by the Secret Service Summary: To address current technical capabilities and future needs, Secret Service officials stated that, as of October 2018, the Office of Technical Development and Mission Support was drafting a strategic investment plan. According to Secret Service officials, the agency is continuing to explore new technology to enhance its technological capabilities to mitigate threats, including threats to airspace. According to an agency official, as of October 2018, the Office of Technical Development and Mission Support is drafting a five-year strategic investment plan. The plan is to address current technical capabilities as well as needs into the future. According to Secret Service officials, for more than 10 years, the Secret Service's Science and Technology Review Committee has met quarterly to discuss protection-related technology requirements. The committee is chaired by the Chief Technology Officer, overseen by the Enterprise Governance Council, and open to representatives from all Secret Service directorates. The Enterprise Governance Council is composed of Deputy Assistant Directors from several Secret Service offices and is responsible for overseeing the agency's investments in science and technology, information technology, and other capital assets. Also according to agency officials, the Secret Service works with the DHS Science and Technology Directorate, partner agencies, and external stakeholders on technological issues. In particular, the Science and Technology Directorate develops pilot programs based on the Secret Service's technical requirements. According to an agency official, the Secret Service conducts performance reviews of different technology systems each month with the aim of evaluating the performance of every deployed system at least once per year. In fiscal year 2017, Congress appropriated $2.5 million to the Secret Service for research and development. In addition, the same act appropriated $1.8 billion to the Secret Service for Operations and Support, of which the Secret Service allocated $98.2 million to the Operational Mission Support program, which helps to protect the President, Vice President, and others from emerging explosive, chemical, biological, radiological, and cyber threats. The funding for the Operational Mission Support program is divided between technology operations and support; procurement, construction, and improvements; and research and development. According to agency officials, the Secret Service does not receive dedicated technology funds through the DHS Science and Technology Directorate. Additional action(s) Secret Service plans to take: Complete and execute the Office of Technical Development and Mission Support's 5-year strategic investment plan.
The plan is intended to address research and development regarding, among other things, ways to mitigate emerging physical and technical threats and identify additional threats. PMP Recommendation 7 Replace the outer fence that surrounds the 18 acres of the White House complex to give Secret Service personnel more time to react to intrusions. According to the PMP, "The current seven-and-a-half-foot fence, not just along Pennsylvania Avenue but around the compound's entire perimeter, must be replaced as quickly as possible." Status: Implementation in progress. Actions Taken by the Secret Service Summary: As of March 2019, the Secret Service was planning to begin construction of the first phase of the new White House fence in May 2019. As a temporary measure, in 2015, the Secret Service and the National Park Service installed bike-rack barricades about 12 feet in front of the permanent fence. As a temporary measure, in 2015 the Secret Service and the National Park Service installed bike-rack barricades about 12 feet in front of the permanent White House fence. According to Secret Service officials, the bike-rack barricades give Secret Service personnel more time to respond to fence-jumpers. The Secret Service also installed several interim countermeasures on the existing fences, including additional spikes. As of December 2018, according to an agency official, the Secret Service was preparing to break ground and begin construction on the Phase I sections of the White House fence in May or June 2019. Phase I includes a fence surrounding the White House and its immediate grounds. The Commission of Fine Arts and the National Capital Planning Commission approved the Phase I project in January and February 2017, respectively. Additional action(s) Secret Service plans to take: Phase I fence construction is scheduled to begin in May or June 2019. Phase II planning and construction. In its fiscal year 2019 budget request, the Secret Service requested $3 million for preliminary design development of Phase II of the White House fence project. Phase II is to expand the new fence to the Treasury Building and the Eisenhower Executive Office Building. PMP Recommendation 8 Clearly communicate agency priorities, give effect to those priorities through its actions, and align its operations with its priorities. According to the PMP, "Secret Service's leadership must make those choices in a manner to ensure that its core protective mission remains first priority." Status: Implementation in progress. Actions Taken by the Secret Service Summary: The Secret Service has taken steps to communicate that protection is its priority. The agency reiterated its priorities in its 2018–2022 Strategic Plan and hired a Director of Communications in 2016 to manage the agency's public affairs efforts and to oversee internal agency communication. However, the Secret Service has not fully aligned its operations with its priorities. For example, in response to the PMP's 2014 report identifying that the security incident of September 2014 arose from a "catastrophic failure of training," the Secret Service agreed to have its Presidential and Vice Presidential Protective Divisions train for 25 percent of their work hours. Implementation of this recommendation is in progress because the agency's operations do not fully align with its stated priorities, as these divisions are not training at agreed-upon levels. To implement this recommendation, the Secret Service sought to improve internal and external communication efforts. 
The agency did so by hiring a senior executive Director of Communications in 2016 and forming the Office of Communications and Media Relations in 2017 to manage the agency's public affairs efforts and to oversee internal agency communication. In addition, in October 2015, the Secret Service developed an internal agency communication platform known as Spark! The Spark! platform allows all employees to share ideas and submit suggestions on how to improve the agency's performance and efficiency, thereby improving communication within the agency. The DHS Office of Policy reviewed the dual missions of the Secret Service and, in January 2017, issued a report that emphasized the importance of the protective and investigative missions. Secret Service officials cited this report as evidence that the agency evaluated its priorities and resource allocation decisions. Secret Service data show that agents increased the share of work hours spent on protection compared with investigations from fiscal year 2014 to fiscal year 2018. Overall, based on our analysis of Secret Service data, 59 percent of special agent hours in fiscal year 2018 were spent on protection and 26 percent on investigations. This is in contrast to 54 percent protection and 36 percent investigations in fiscal year 2014. However, agents worked an average of 2.2 million hours annually on investigations over that period, even though agents assigned to the PPD and VPD did not meet training targets during that time. Despite shifting resources toward protection, the Secret Service's operations and associated resource allocation do not fully align with its stated priority, protection, because training is an essential component of agents' protection assignments. PMP Recommendation 9 Promote specialized expertise in its budget, workforce, and technology functions. According to the PMP, "Filling important administrative functions with agents rather than professional administrators may not be optimal." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: To promote specialized expertise in certain business functions, the Secret Service hired a Chief Operating Officer to run the agency's business operations and elevated senior civilian (non-agent) executives, including the Chief Technology Officer and the Chief Financial Officer, to lead key staff offices. The agency also hired new professional administrators, instead of promoting special agents, to serve in other senior positions, such as Chief Human Capital Officer and Chief Strategy Officer. The Secret Service reorganized the agency to promote specialized expertise in certain functions. In 2015, the Secret Service established the position of Chief Operating Officer. This principal administrator, who is equivalent in rank to the Deputy Director, directs the agency's business and programmatic activities, with a focus on improving performance, hiring and retaining personnel, and aligning budgetary and strategic planning efforts. The Secret Service professionalized the leadership of several directorates by elevating or hiring civilian senior executives, instead of placing special agents in these specialized positions. For example, in 2015, the civilian who was serving as Chief Financial Officer was placed in charge of the newly created Office of the Chief Financial Officer. Similarly, the Chief Information Officer was placed in charge of the newly created Office of the Chief Information Officer. 
Also, in 2015, the civilian Chief Technology Officer was placed in charge of the Office of Technical Development and Mission Support, and the Secret Service hired a civilian from outside of the agency to become Chief Human Capital Officer—a position that was formerly held by a special agent. In 2016, the agency created two additional senior, civilian positions: Chief Strategy Officer and Director of Communications. Secret Service officials stated that the agency is currently developing a training course to instruct senior special agents in the agency budget process. PMP Recommendation 10 Present a zero-based or mission-based budget that will provide sufficient resources to accomplish its mission, beginning immediately by working within DHS to adopt a workforce staffing model. According to the PMP, "The Service must build a new budget from the ground up by defining its mission, determining what it will take to achieve it, and asking for that. The mission is important enough to justify that approach." Status: Implementation in progress. Actions Taken by the Secret Service Summary: The Secret Service has incorporated principles of mission-based budgeting in its budget formulation process. According to Secret Service officials, modeling staffing needs is a key part of mission-based budgeting, and personnel costs accounted for about 71 percent of the Secret Service's fiscal year 2018 budget. The Secret Service developed—and continues to refine—four staffing models that use internal and external data to establish the optimal staffing levels across the agency. Under mission-based budgeting, also known as zero-base budgeting, the agency is to rebuild the budget by clearly defining its mission and desired outcomes and determining what funding level is needed to obtain those outcomes. This process is in contrast to making incremental changes from the prior year's budget. The Secret Service has worked to incorporate a mission-based budgeting process into its overall budget formulation. The Director annually issues priority memos to guide the development of the Secret Service Resource Allocation Plan submissions to DHS. In 2016, the Office of the Chief Financial Officer introduced a mission-based budgeting approach for developing the FY 2018 – FY 2022 Resource Allocation Plan submission. Further, the Secret Service's Resource Allocation Plan prioritizes the agency's needs for inclusion in DHS's annual budget request to Congress. Part of the Secret Service's mission-based budgeting approach involved assessing human capital needs. The Secret Service developed four workforce staffing models that provided a basis to identify valid baseline staffing levels for the agency, a key component of the mission-based budgeting process. According to Secret Service officials, these staffing models are designed to ensure that the agency is staffed in such a way that its personnel are properly trained, overtime is minimized, and proper support personnel are in place so that it is fully prepared to meet mission demands. The Secret Service used the results of the staffing models to develop the Secret Service FY 2018–FY 2025 Human Capital Strategic Plan, published in May 2017, which detailed the agency's plan to increase the workforce to 9,595 total employees by the end of fiscal year 2025. Secret Service officials acknowledge that they still have to strengthen their budget processes. Specifically, they would like to make the budget process more analytical and data-driven. 
For example, agency officials want to make better use of budget data to support planning and budget requests, such as by combining financial data with programmatic information to better inform budget decisions. Agency officials want to hire one or more individuals who can better interpret and use those data. Additional action(s) Secret Service plans to take: Secret Service officials said they plan to continue to hone the staffing models. For example, the agency plans to include annual leave and increased training levels in the next iterations of the models. PMP Recommendation 11 Create more opportunities for officers and agents to provide input on their mission and train its mid- and lower-level managers to encourage, value and respond to such feedback. According to the PMP, "Leadership and, even more critically, mid- and lower-level managers, need to make clear that their mission requires that they get things right—and thus that the agency values information out of sync with the status quo or the leadership's views." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: To improve communication between the workforce and senior leaders, the Secret Service created a platform on its intranet known as Spark! that encourages employees to submit ideas to senior leaders on how to improve the agency's performance. In October 2015, the Secret Service deployed the Spark! platform, which allows all employees to share ideas and submit suggestions on how to improve the agency's performance and efficiency. The platform allows two-way communication between leadership and the workforce. The Secret Service reported that, in 2017, 96 percent of employees had contributed to a discussion on the site. The agency also reported that, as of June 2018, 51 workforce-generated ideas had been implemented or were being implemented. According to Secret Service officials, several ideas that originated from employees have prompted changes at the agency. These developments include the formation of a new category of employees, known as Technical Law Enforcement, in 2018; the introduction of a chaplaincy program in 2017; and the development of the Administrative, Technical, and Professional Career Track. Since 2015, the Secret Service has offered a training course on workplace communication called "Enhancing Workplace Communication." According to agency data, 72 employees took the course in fiscal years 2017 and 2018. From November 2014 to December 2015, the Secret Service contracted with Eagle Hill Consulting to conduct an independent assessment of quality-of-life issues at the Secret Service. The agency workforce was able to provide input through 47 focus groups and an agency-wide anonymous survey. In its final report dated August 22, 2016, Eagle Hill Consulting provided Secret Service management with 22 recommendations to improve quality of life for agency employees. Additional action(s) Secret Service plans to take: The Secret Service plans to introduce additional leadership courses for personnel at all levels. It also plans to remove potential barriers to communication between employees and supervisors by revising merit promotion processes for Uniformed Division officers and for special agents. PMP Recommendation 12 Lead the federal protective force community. 
According to the PMP, "Collaboration with protective forces like the Federal Protective Service, the Pentagon Force Protection Agency, the FBI Police, and the State Department's Bureau of Diplomatic Security and other agencies, especially on technology, could significantly increase opportunities for innovation." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: The Secret Service has engaged with other protective forces across the federal government, such as the Federal Protective Service, through various mechanisms, including a leadership symposium and technology sharing efforts. According to Secret Service officials, the agency has provided assessments and assistance to other government partners in this area. In 2015, the Secret Service chaired a leadership symposium with other federal agencies to discuss roles, responsibilities, and procedures in the event of a critical incident in the National Capital Region and specifically at the White House. Secret Service officials stated that the agency often consults with federal peers to benchmark capabilities and organizational structures to identify possibilities for improvement. In addition, officials stated that the Secret Service partners with other agencies, leading to technological developments. Developments to date include deployment of a fixed-site sub-sonic detection capability with the Metropolitan Police Department of the District of Columbia; a fixed-site super-sonic detection capability with the U.S. Army; and a vehicle motorcade detection capability with the Department of Defense. The Secret Service's Protective Intelligence and Assessment Division participates in the International Security Events Group when a Secret Service protectee will be traveling to a high-profile international event, such as the Olympic Games. The International Security Events Group is a working group for over 20 federal security and law enforcement agencies and is managed by the Department of State. According to Secret Service officials, at the request of the Office of Management and Budget, the Secret Service has shared expertise with other agencies, including the U.S. Marshals Service, to help standardize the protection of cabinet-level government officials. Also according to agency officials, the Secret Service leads the federal protective force community in many areas. For example, the agency's Hazardous Agent Mitigation Medical Emergency Response team has the capability to detect, mitigate, and respond to chemical, biological, radiological, and nuclear attacks on protectees, and the team consults with other agencies on these issues, according to agency officials. Additional action(s) Secret Service plans to take: None. PMP Recommendation 13 Receive periodic, outside assessments of the threats to and strategies for protecting the White House compound. According to the PMP, "The Secret Service should engage other federal agencies to evaluate the threats that the agency faces and its ways of doing business." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: The Secret Service regularly engages with outside partner(s) to have the threats to and strategies for protecting the White House complex assessed. The Secret Service has a memorandum of agreement in place with partner agencies to ensure that the outside assessments continue. 
Between 2015 and 2018, non-Secret Service partner(s) assessed the threats to and strategies for protecting the White House compound between two and four times per year. PMP Recommendation 14 Resume participation in international fora with comparable protective services of friendly nations. According to the PMP, "While most national protective forces do not compare to the Secret Service, those of certain nations are much more similar than they are different." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: The Secret Service, through the Office of Investigations and the Office of Strategic Intelligence and Information, maintains relationships with international partners to share information. In 2018, the Secret Service, among other things, provided training at International Law Enforcement Academies, which are administered by the Department of State, and provided threat assessments to European partners. The Office of Investigations' International Programs Division provides training to foreign law enforcement organizations through the Department of State's International Law Enforcement Academies. The Secret Service provides training at all five academy locations: Bangkok, Thailand; Budapest, Hungary; Gaborone, Botswana; San Salvador, El Salvador; and Roswell, New Mexico. According to agency officials, the Secret Service provided instruction on the agency's protection methods to over 900 personnel in 2018. The Office of Strategic Intelligence and Information maintains international partnerships to enable the sharing of information and best practices. For example, in 2018, agency officials presented threat assessments to European partners on at least two occasions. Secret Service officials noted that the agency hosts groups of foreign law enforcement personnel for dignitary protection seminars. The seminars are intended to, among other things, encourage future cooperation. Additional action(s) Secret Service plans to take: The Secret Service plans to establish a process for developing proposals to enhance intelligence and operational activities with foreign partners. The agency also plans to formalize a process for senior executive approval of these proposals. PMP Recommendation 15 Give leadership's priorities and reforms the organization's sustained attention and hold the agency accountable through to their completion. According to the PMP, "Following through on reforms and recommendations has been an issue for the Secret Service in the past." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: To ensure that the agency implemented the PMP's recommendations, the Office of Strategic Planning and Policy (OSP) was tasked with overseeing and tracking their implementation. This office also coordinated the development of key strategy documents to guide the agency's efforts. Secret Service executive leaders tasked OSP to oversee and track the implementation of the PMP's recommendations. OSP coordinated the development of three key strategy documents: the FY 2018 – FY 2022 Strategic Plan, the FY 2018 – FY 2025 Training Strategic Plan, and the FY 2018 – FY 2025 Human Capital Strategic Plan. Additionally, OSP refined existing performance measures and is developing additional mechanisms to enhance reporting on performance goals to senior leadership. 
This includes monthly and quarterly reports on key performance metrics and indicators. In January 2017, the agency revived its Enterprise Governance Council, a deliberative body made up of the deputies from each Secret Service directorate. The Enterprise Governance Council oversees agency-wide priorities, including managing the Resource Allocation Plan process, which prioritizes the Secret Service's needs for inclusion in the annual budget request. PMP Recommendation 16 Implement a disciplinary system in a consistent manner that demonstrates zero tolerance for failures that are incompatible with its zero-failure mission. According to the PMP, "It is clear that the rank-and-file—and even very senior current and former members of the Secret Service—do not have confidence that discipline is imposed in a fair and consistent manner." Status: Implemented; ongoing work may be required to ensure recommendation is sustained. Actions Taken by the Secret Service Summary: The Secret Service established the Office of Integrity in 2013 to centralize and standardize the disciplinary system across the Secret Service. According to agency officials, for each substantiated incident of employee misconduct, the Chief Integrity Officer and Deputy Chief Integrity Officer determine what formal disciplinary action, if any, is warranted. Further, the Discipline Review Board, composed of senior representatives from each directorate, oversees the discipline system and hears appeals from most personnel. (A separate process is in place for members of the Senior Executive Service.) Disciplinary outcomes are detailed in an annual report so as to increase transparency within the agency. The Office of Integrity was established in 2013 to centralize the disciplinary system across the agency. Previously, Special Agents in Charge of field offices had the responsibility of addressing employee misconduct and determining the penalty, according to an agency official. In 2015, the Office of Integrity began publishing an annual discipline report, which provides an overview of disciplinary actions taken by deciding officials and analyzes misconduct trends. Additional action(s) Secret Service plans to take: The Secret Service plans to conduct a formal review and periodic analysis of the Office of Integrity to ensure that it is fulfilling its intended purpose. PMP Recommendation 17 Hold forces accountable for performance by using front-line supervisors to constantly test readiness. According to the PMP, "To be ready for a job where quick reactions and reflexes are critical, supervisors need to drive home to their officers and agents that the front line is constantly being tested." Status: Implementation in progress. Actions Taken by the Secret Service Summary: The Secret Service introduced new policies and plans to study whether to introduce a "random check" program to test employees' readiness. In addition, according to Secret Service officials, special agents assigned to PPD and VPD run biweekly drills and training scenarios at the White House complex and at the Vice President's Residence, while Uniformed Division officers are briefed on emergency actions and responsibilities at shift changes. This recommendation is in progress because additional readiness testing is still needed. In December 2017, the Secret Service conducted a scenario readiness exercise involving multiple units at the White House. According to Secret Service officials, the agency runs drills and training scenarios about every 2 weeks. 
These drills have taken place at the White House, the Naval Observatory (the Vice President's Residence), and the Department of the Treasury building. Agents assigned to PPD and VPD are the primary training participants, but Uniformed Division officers are often involved as well. More recently, the training scenarios have included other groups if the practice incident is located off of, or near the edge of, the White House complex. For example, training scenarios may include U.S. Park Police or Secret Service personnel assigned to the Washington Field Office. The Uniformed Division conducts daily personnel shift briefs, which cover emergency actions and responsibilities. The Office of Protective Operations instituted training classes for personnel assigned to the Joint Operations Center. The Joint Operations Center is located away from the White House and is responsible for managing day-to-day Secret Service operations and coordinating emergency response. Additional action(s) Secret Service plans to take: The Secret Service plans to study whether to introduce a "random check" program to test employees on their responsibilities at operational posts. PMP Recommendation 18 The next director of the Secret Service should be a strong leader from outside the agency who has a protective, law enforcement, or military background and who can drive cultural change in the organization and move the Secret Service forward into a new era. According to the PMP, "The need to change, reinvigorate, and question long-held assumptions—from within the agency itself—is too critical right now for the next director to be an insider." Status: Implemented. Actions Taken by the Secret Service Summary: Randolph "Tex" Alles was appointed the Secret Service director in 2017 and served in that position until 2019. He had not worked at the Secret Service prior to taking on this role, but he served previously as Acting Deputy Commissioner of U.S. Customs and Border Protection and in the U.S. Marine Corps. This recommendation was implemented through presidential action, as the Secret Service does not select its own director. PMP Recommendation 19 Establish a leadership development system that identifies and trains the agency's future managers and leaders. According to the PMP, "To promote from within and move the agency forward, however, the Secret Service needs to do a better job of identifying future leaders and preparing them for the role." Status: Implementation in progress. Actions Taken by the Secret Service Summary: The Secret Service has taken steps to improve its leadership development system and has provided leadership training at different levels with a focus on first-line supervisors. The agency is also developing the "Framework for Leadership Development," which is to identify courses and training opportunities to promote leadership skills at all levels. The Secret Service adopted the DHS Leadership Development Program in 2015 to encourage leadership development at all levels. The agency has provided leadership training at different levels. For example, 409 employees have attended the Seminar for First Line Supervisors since 2015. Since 2016, 178 employees have attended the Antietam Leadership Experience, which is a course for supervisors, managers, and senior team leads that focuses on leadership development and capabilities. Since 2018, 41 employees have attended the Building Leaders Training Course, which is designed for non-supervisory team leads. 
The Office of Human Resources, in coordination with the Rowley Training Center, has begun to develop a "Framework for Leadership Development" program to craft effective courses and training requirements tailored to individuals throughout their careers. The Secret Service held a number of events in fiscal year 2018 to emphasize the importance of leadership within the agency as part of DHS's "Leadership Year," a department-wide effort to promote leadership skills. An intranet site on Leadership Year Resources was also created to centralize information on leadership development resources. In addition, the Director of the Secret Service recorded a video message on leadership that was posted on the agency's intranet. The agency also established a peer-to-peer award recognition program. The agency established the Leadership Development Council in March 2018, with representatives from each of the four occupation groups (special agents; Uniformed Division officers; Administrative, Professional, and Technical staff; and Technical Law Enforcement staff) and across all grade levels. Additional action(s) Secret Service plans to take: Finalize the Framework for Leadership Development and roll out the program to the agency. Complete development of the Strategic Leadership Course for Managers, which is designed to be a two-week course to promote leadership and strategic planning. Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Nathan Anderson, (202) 512-3841 or andersonn@gao.gov. Staff Acknowledgments In addition to the contact above, Joseph P. Cruz (Assistant Director), Kisha Clark (Analyst-in-Charge), Willie Commons III, Elizabeth Dretsch, Eric Hauswirth, and Eric Warren made key contributions to this report.
Why GAO Did This Study The Secret Service, a component of the Department of Homeland Security (DHS), is responsible for protecting the President, the Vice President, and their families, as well as the White House complex. In October 2014, following several security lapses, the Secretary of Homeland Security established the Panel, an independent panel of experts, to review White House security and other aspects of Secret Service operations. The Secret Service Recruitment and Retention Act of 2018 contains a provision for GAO to report on the progress made by the Secret Service in implementing the Panel's recommendations. This report addresses the extent to which the Secret Service has implemented the recommendations in the Panel's 2014 report. GAO reviewed Secret Service documents, analyzed agency training and labor-distribution data from fiscal years 2014 through 2018, and interviewed agency officials and Panel members. What GAO Found The U.S. Secret Service (Secret Service) has made progress implementing the 19 recommendations related to training and personnel; technology, perimeter security, and operations; and leadership made by the U.S. Secret Service Protective Mission Panel (Panel). The Secret Service fully implemented 11 of the recommendations. For example, the agency increased the number of agents and officers in the divisions that protect the President and White House and secured approval to build a new fence around the White House complex. The Secret Service is in the process of implementing the remaining eight recommendations. The Panel found that the security incident of September 19, 2014, when an intruder jumped the north fence and entered the White House, arose from a "catastrophic failure of training." The Panel recommended, and the Secret Service agreed, that the Presidential and Vice Presidential Protective Divisions train for 25 percent of their work time. However, the Secret Service has not met this target and lacks a plan for achieving it. In fiscal year 2018, special agents assigned to these divisions trained for about 6 percent and 3 percent, respectively, of their regular work hours (see figure). In commenting on a draft of this report in May 2019, the Secret Service stated that it no longer agrees with the training target and plans to reevaluate it. Developing and implementing a plan for ensuring that the established training target is met given current and planned staffing levels would better ensure that agents assigned to the Presidential and Vice Presidential Protective Divisions are prepared to carry out the Secret Service's protection priority. In addition, the Secret Service does not have a policy with a documented process for collecting complete and appropriate (i.e., protection-related) training hour data for Uniformed Division officers. Implementing such a policy will better position the Secret Service to assess the training data and make informed decisions about whether and how training needs are being met. What GAO Recommends GAO is making two recommendations to the Secret Service: (1) develop and implement a plan to ensure that special agents assigned to the Presidential and Vice Presidential Protective Divisions reach annual training targets, and (2) develop and implement a policy that documents the process for collecting complete and appropriate data on Uniformed Division officer training. DHS concurred with the two recommendations.
Background Federal Roles and Responsibilities Related to FLW Vary In 2015, the President signed Executive Order 13693, "Planning for Federal Sustainability in the Next Decade." This executive order called for federal agencies to, among other things, advance waste prevention and pollution prevention in federal facilities, operations, and vehicles by diverting at least 50 percent of nonhazardous solid waste, including food and compostable material, but not construction and demolition debris, in their internal operations annually, and pursue opportunities for net-zero waste or additional diversion opportunities. In May 2018, Executive Order 13693 was revoked and replaced by Executive Order 13834, "Efficient Federal Operations," which directed federal agencies to implement waste prevention and recycling measures but no longer included the specific direction to divert at least 50 percent of nonhazardous solid waste, including food and compostable material, annually or to pursue opportunities for net-zero waste or additional diversion opportunities. CEQ and OMB are responsible for implementing and tracking progress for these executive orders. Among its duties, EPA oversees municipal solid-waste management. For example, EPA regulates the management of household, industrial, and manufacturing solid and hazardous wastes under the Resource Conservation and Recovery Act. The objectives of the act include protecting the United States from the hazards of waste disposal, conserving energy and natural resources through recycling and recovery, and minimizing the generation of hazardous waste. However, the management of nonhazardous solid waste, such as food waste, is left primarily to the states and local governments. Under the act, EPA established solid-waste management guidelines for municipalities that encouraged recycling, including composting food and yard waste. EPA's Sustainable Materials Management Program, including the Sustainable Management of Food strategic priority area, seeks to reduce the environmental impact of materials through their entire life cycle. Furthermore, available landfill space is decreasing in various parts of the United States; EPA's FLW activities may help extend the life of existing landfills and provide opportunities for energy generation. According to USDA officials, USDA has developed a broad range of programs and policies to reduce FLW as a means to support its overarching objectives related to reducing food insecurity, improving food safety, increasing market efficiencies, and enhancing farmer income and rural development. USDA also conducts education and outreach through its network of state and local offices, the Cooperative Extension Service, state Departments of Agriculture, land-grant university partners, and nongovernmental, nonprofit, community, and faith-based organizations. 
Additionally, the 2018 Farm Bill requires the Secretary of Agriculture to take a number of actions to address FLW: (1) create a FLW Reduction Liaison to coordinate federal, state, local, and nongovernmental programs, and other efforts, to measure and reduce the incidence of FLW; (2) conduct a study on food waste in consultation with the FLW Reduction Liaison and report data collected on food waste and efforts to reduce and prevent such waste; (3) issue guidance outlining the best practices to minimize food waste for donated commodities; (4) enter into cooperative agreements with local or municipal governments to develop and test strategies for planning and implementing municipal compost plans and food waste reduction plans; (5) establish a milk donation program to encourage the donation of milk products produced and processed in the United States, assist individuals in low-income groups, and reduce food waste; and (6) establish a Local Agriculture Market Program to, among other things, promote new business opportunities and marketing strategies to reduce on-farm food waste. Finally, FDA, which is responsible for, among other things, overseeing the safety of about 80 percent of the nation's food supply, has a limited mission related to FLW. FDA was not involved with establishing the national FLW reduction goal in 2015 but, according to agency officials, has become more engaged in consumer education and outreach to the food industry, hunger relief and food rescue organizations, state and local governments, academia, and other stakeholders on issues related to FLW. By signing the 2018 formal agreement on collaboration and coordination with EPA and USDA, FDA has committed to taking further actions to reduce FLW. Varying Definitions Inform Methodologies for Measuring Food Loss and Waste Definitions of FLW vary among the organizations working in this area, including federal agencies, and these varying definitions inform different methodologies for measuring and reporting FLW. For example, consistent with its focus on advancing the sustainable use of materials, including food, throughout their life cycle to minimize waste and environmental impacts, EPA uses the term "wasted food" instead of "food waste" for food that is not used for its intended purpose because it conveys that a resource with value is being wasted, whereas "food waste" implies that the food no longer has value and needs to be managed as waste. EPA states that "wasted food" is managed in a variety of ways, including through donations to food banks, conversion to animal feed, composting, anaerobic digestion, or sending it to landfills. In contrast, USDA's Economic Research Service (ERS) defines food loss as edible food that is available for human consumption but that is not eaten. According to ERS, food losses may occur for any number of reasons, including cooking loss and natural shrinkage; loss from mold, pests, or inadequate climate control; and plate waste, which refers to edible food that is served but discarded. In addition, ERS defines food waste as a component of food loss that refers to food discarded by retailers and consumers due to quality concerns, such as blemished food. ERS takes this approach in support of its effort to estimate the nation's available food supplies, which it adjusts to account for nonedible parts of foods and losses throughout the food supply chain. USDA has noted that definitions of FLW vary worldwide. 
For example, FAO differentiates food loss from food waste based on the stage of the food supply chain in which the amount of edible food decreases. FAO refers to food loss as the decrease in edible food that occurs throughout the production and processing stages of the food supply chain, whereas food waste occurs at the retail and consumer stages of the food supply chain. These varying definitions have led to different methodologies for measuring and reporting FLW. For example, EPA estimates the amount of food from residences, commercial establishments (such as grocery stores), and institutional establishments (such as schools) that is disposed of through landfills or converted to energy, while ERS estimates the amount, value, and calories of postharvest food losses at the retail and consumer stages of the food supply chain. In addition, ERS and FAO FLW estimates do not include inedible parts, whereas, with its focus on materials management, EPA's estimates do. Food Loss and Waste Occurs throughout the Food Supply Chain, and Options to Reduce It Vary FLW can occur across the entire food supply chain, occur at more than one stage (e.g., spoilage), or be unique to a specific stage, as seen in figure 1 below. However, the share of total FLW due to each of these causes is currently unknown, according to a USDA report. EPA's Food Recovery Hierarchy, shown in figure 2 below, focuses on different options for reducing FLW. According to EPA, the top levels of the hierarchy are the best ways to reduce FLW because they create the most benefits for the environment, society, and the economy. Source reduction is the preferred option for reducing FLW because it provides the greatest benefits in terms of environmental sustainability. This is because growing food requires resources, such as land, water, fertilizer, and pesticides. In contrast, food that is sent to landfills generates greenhouse gases, such as methane. Prevention refers to reducing the amount of surplus food generated at any stage of the food supply chain. For example, businesses, such as restaurants, may prevent FLW through better planning and food preparation techniques. Diversion includes recovering food by donating edible food to feed hungry people or sending food scraps to feed animals. Diversion also includes recycling food scraps for industrial uses, such as waste-to-energy generation or anaerobic digestion, or for composting. Disposal refers to food that is sent to landfills or incinerators or washed into sewers. According to USDA, some FLW is inevitable and, therefore, entirely eliminating FLW is unrealistic. For example, USDA ERS reports that there is a practical limit to how much FLW in the United States could be reduced, given factors such as food safety concerns, the perishability of foods, storage and temperature considerations, risk management for production and marketing uncertainties, and resource constraints on recovering uneaten food for another use. According to USDA officials, to be successful, FLW reduction strategies should consider the economic incentives and disincentives faced by stakeholders across the food supply chain. Nonfederal Stakeholders Cited Various Challenges to Reducing Food Loss and Waste in the United States Nonfederal stakeholders we interviewed cited various challenges to reducing FLW in the United States. 
Through our analysis of those interviews, we identified three key areas: (1) limited data and information about the amounts and causes of FLW; (2) a lack of awareness and education about FLW; and (3) limited infrastructure and capacity, which can hamper efforts to reduce FLW. In some instances, the nonfederal stakeholders also provided their views on ways federal agencies could potentially address the identified challenge areas. Nonfederal Stakeholders Said Data and Information about Amounts and Causes of FLW Are Limited Through interviews with nonfederal stakeholders, we identified limited data and information about the amounts and causes of FLW as a challenge to reducing FLW in the United States. For example, several stakeholders told us that data gaps associated with the different food supply-chain stages make it challenging to estimate FLW. Specifically: An international organization published a study in 2011 that included estimates of FLW by different regions and different stages of the food supply chain. The organization reported in its study that there were major data gaps in the knowledge of global FLW, such as the causes of FLW. Representatives of this organization told us that a challenge to measuring and estimating FLW is the lack of data on the various stages of the food supply chain. They are proposing a new methodology intended to help countries measure FLW along the supply chain in a cost-effective manner and monitor progress in reducing FLW. Researchers from two academic institutions told us there are challenges to estimating farm-production food losses. For example, researchers from one of these academic institutions told us that farm-production losses fluctuate from year to year based on changes in markets and growing conditions, such as weather, which can make estimating FLW more challenging. In addition, these researchers told us more information is needed about how different economic factors, such as the existence of secondary (alternative) markets to sell excess food, or changes in farming costs such as increases in labor costs, may influence FLW. One nonprofit organization reported that data at the farm-production stage of the food supply chain are limited, including data on what happens to some food at that stage. For example, there are limited data about whether produce that goes unsold is tilled back into the farmland, composted, or sent to a landfill. This nonprofit organization reported the limitations of its estimate of FLW across the food supply chain in the United States. For example, the nonprofit organization documented in its FLW estimate methodology that its farm-production FLW data analysis focused on estimating imperfect-produce rates, but noted that FLW may occur at this stage for a variety of reasons, including inclement weather, pests, or overproduction. It also documented that future research efforts could assess actual produce imperfection and loss rates for each produce type using geographical differences to improve estimate accuracy. Representatives from another nonprofit organization that has published an estimate of FLW told us there are data gaps about FLW along the food supply chain. For example, this nonprofit organization reported in 2017 that improved research is needed regarding farm-production data and FLW estimates at the consumer stage of the food supply chain. 
In addition, this nonprofit organization reported that one challenge is the absence of standardized measurement methodologies and common metrics to help entities representing all food supply-chain stages accurately estimate FLW, develop strategies to reduce FLW, and measure progress. In this report, the organization noted that federal agencies' efforts to develop a mechanism to aggregate and disseminate FLW information as it is gathered by businesses and institutions, among others, would be beneficial to all stakeholders. Representatives of a third nonprofit organization stated that FLW measurement methodologies need to be tailored to the particular stages of the food supply chain and that the strategies to reduce FLW need to respond to the conditions associated with specific foods. Nonfederal Stakeholders Identified a Lack of Education and Awareness about FLW Nonfederal stakeholders identified a lack of education and awareness about FLW as a challenge to reducing FLW. For example, an official from one state told us that there is a lack of awareness among various organizations about the benefits of preventing FLW. Specifically: One state official told us that there is a lack of awareness among food producers, businesses, and consumers about the benefits of preventing FLW. According to this official, to address this challenge the state developed a strategic action plan that prioritizes focusing upstream in the food supply chain to prevent FLW, as opposed to the more traditional focus on increasing FLW diversion, such as through composting. This official also told us that implementing organic waste (e.g., food waste or other plant and animal materials) bans, which prohibit specified waste generators from sending food waste to landfills, as several states are doing to reduce FLW, tends to promote FLW reduction activities further down on EPA's Food Recovery Hierarchy. As a result, organic waste bans may contribute relatively little to reducing FLW or maximizing the benefits of such reductions. This official emphasized that additional steps are needed to increase awareness about the benefits of prioritizing prevention through shifts in supply chains, purchasing, and consumption patterns to reduce FLW. Officials from two states told us there is a lack of resources to support efforts to educate consumers about FLW. For example, one state official told us that the state agency has insufficient staff resources to conduct effective outreach regarding FLW, and current staff members do not yet have the expertise to fully educate and assist consumers and businesses about all options available to reduce FLW. Another state official told us that the state would like to conduct a statewide social marketing campaign to disseminate education and information about FLW to household consumers, but the state lacks sufficient resources to launch such an effort. Representatives of a nonprofit food donation organization identified a lack of education and awareness about date labeling as one of the challenges to reducing FLW. For example, the representatives told us that consumer confusion about date labels may be an impediment to reducing FLW among consumers. However, in these representatives' view, reducing date labeling confusion is unlikely to lead to additional food donations. In addition, an academic institution representative, in collaboration with other authors, reported that a driver of household FLW is consumer confusion over date labels and conducted a survey to gain information about consumer perceptions of date labels. 
They concluded from their research that increasing consumer education on the meaning of date labels can help to reduce FLW. A representative of a nonprofit food donation organization told us that education and awareness about liability protections and compliance are lacking for various potential food donors and may hinder some food producers from donating food and, by extension, reducing FLW. Nonfederal Stakeholders Said Limited Infrastructure and Capacity Can Hamper Efforts to Reduce FLW Through interviews with nonfederal stakeholders, we identified that limited infrastructure and capacity is a challenge that can hamper efforts to reduce FLW. For example: Representatives of a nonprofit organization that receives food donations cited a lack of sufficient capacity and logistical support to collect and distribute available food. For example, representatives told us that food pantries may not have a sufficient volunteer workforce or enough food storage capacity to be able to distribute all donated food to needy people. Food industry representatives told us that businesses have infrastructure limitations, such as a lack of transportation options to deliver excess food to food pantries or composting facilities. For example, representatives told us that if such facilities were available, food scraps, such as produce peels, could be used as animal feed or composted. However, if the infrastructure to utilize these options is not available, the companies generating the FLW may opt to send it to landfills instead. An official from one state told us that the state does not have access to the infrastructure and capacity needed to separate contaminants in order to be able to divert FLW for other uses, such as animal feed, composting, or anaerobic digestion. For example, this state official told us that the state does not have access to the necessary equipment to separate plastic and other packaging materials from food waste in order to be able to process FLW through anaerobic digesters. Officials of another state provided a study stating that removing packaging from food waste can be an obstacle to successful FLW diversion and that separation of food waste for composting or other diversion can be costly. In addition, a representative of one international organization told us that federal agencies could facilitate a collaborative approach with industry stakeholders to develop voluntary industry standards on food packaging materials and food portion sizes to help reduce FLW in the United States. Officials from another state told us that a lack of food recycling infrastructure limits their ability to enforce the state's organic waste ban and reduce FLW. A state official told us that the state has one anaerobic digester facility to process food waste, but additional recycling infrastructure would be needed statewide to enable food waste generators, such as hospitals or schools, to recycle their food waste instead of sending it to landfills. EPA and USDA Have Taken Initial Actions to Address Key Challenge Areas to Reducing Food Loss and Waste Since announcing the national FLW reduction goal in 2015, EPA and USDA have taken initial actions to address challenges in the three key areas that nonfederal stakeholders identified as hindering efforts to reduce FLW. For example, EPA and USDA have taken actions to provide improved data and information about FLW in the United States; educate and increase awareness of FLW along the food supply chain; and expand the infrastructure and capacity to support efforts to reduce FLW. 
EPA and USDA Have Provided Some Data and Information on Food Loss and Waste in the United States EPA and USDA have provided some data and information about FLW in the United States. Specifically: In a 2018 report, EPA published trends in the generation of food waste, among other materials, and provided updated information about municipal solid waste being generated, recycled or composted, landfilled, and combusted with energy recovery, using 2015 data from residential, commercial, and institutional sources. According to EPA, food waste represents the largest percentage of landfilled material in municipal solid waste, as seen in figure 3 below. EPA relies on gathering these data on food waste generation and management from studies conducted by other organizations, such as state and local governments and food waste generators. EPA measures certain FLW diversion activities (i.e., activities that divert food to a destination other than a landfill or incinerator). For example, in September 2018 EPA completed an effort to quantify the number and capacity of anaerobic digestion facilities in the United States. EPA also aggregates and publishes data submitted by EPA's Food Recovery Challenge program participants on recycling fats, oils, and grease, which may otherwise be disposed of through wastewater. For example, participating restaurants may submit data on the amount of fats from their fryer grease containers that they send for recycling through rendering, conversion to biofuels, or to an anaerobic digester. In addition, EPA develops estimates of food waste composting based on a review of state environmental agency websites, as well as published reports. EPA updates its FLW estimates annually. EPA officials stated that these annual estimates are the most comprehensive annual estimates of generated and managed FLW and that EPA plans to use these estimates to track progress. However, EPA officials acknowledged certain limitations in using these estimates to track annual progress against the 2030 goal. For example, EPA officials stated that data challenges include limited studies available for some sectors and the lack of geographic coverage, among other issues. EPA is taking steps to improve its FLW estimates. For example, officials stated that in 2017 EPA embarked on an effort to improve its food measurement methodology to reflect all potential FLW-generating sectors for which there are data, and to characterize how food is being managed beyond composting and landfilling. USDA has also provided some data and information about FLW at various stages of the food supply chain in the United States since 2015. Specifically: ERS is working on initiatives to refine and improve its data system in order to support its ongoing efforts to estimate FLW at the retail and consumer stages of the food supply chain. For example, USDA officials told us they are developing a proposal for an external expert panel to analyze food loss estimates at the consumer stage of the food supply chain and make recommendations for data updates. In addition, USDA officials told us that work is under way to update the retail-level loss estimates of selected foods. Further, in December 2017, ERS initiated work on a study to identify gaps in information about farm-level FLW. According to ERS officials, as part of the study, ERS will describe the existing data-collection challenges and address the economic factors that influence farmers' decisions as they relate to FLW at the farm level. 
For example, one factor could involve a farmer deciding to plow excess produce into the fields instead of harvesting or processing the crop if the potential additional labor or operations costs exceed the potential revenue. One senior ERS official told us that ERS expects to issue the study by the end of calendar year 2019. Additionally, ERS officials told us that USDA could use the final study to inform USDA's policy approaches to reducing FLW. For example, the report may inform USDA's efforts to assist farmers in implementing best practices in reducing FLW and expanding market opportunities for imperfect fruits and vegetables or excess harvest. USDA's National Institute of Food and Agriculture has provided grant funding to projects related to FLW. For example, the institute awarded a grant in 2018 to an academic institution to study the effect of secondary markets as alternative channels for usable food. To advance the research mission of the agency, among other reasons, USDA has a memorandum of understanding with the Foundation for Food and Agriculture Research, an organization that Congress authorized as part of the 2014 Farm Bill. The Foundation for Food and Agriculture Research conducts research in six defined challenge areas, including one area that focuses research on inefficiencies in the food system, such as FLW. EPA and USDA Have Taken Some Actions to Educate and Build Awareness about Food Loss and Waste EPA and USDA have taken some actions to educate and build awareness about FLW in the United States since announcing the national FLW reduction goal in 2015. For example, EPA published its Sustainable Materials Management Program Strategic Plan, Fiscal Years 2017-2022, in October 2015. One of the plan's three strategic priority areas is Sustainable Food Management, which includes an action area of promoting opportunities to reduce wasted food and its associated effects over the entire food supply-chain life cycle, with a preference for using approaches that are higher on the agency's Food Recovery Hierarchy. EPA's strategic plan describes delivering tools and education; working with states and local communities to help provide regional or sector-based support; and sharing best practices on wasted-food reduction efforts. In addition to the planned actions identified in the Sustainable Food Management area, EPA has also provided the following FLW education and awareness tools, among others: Food: Too Good to Waste. This community-based social marketing campaign, implementation guide, and toolkit aim to reduce wasteful household food management practices and keep FLW out of landfills. The toolkit is designed for community organizations, local governments, households, and others interested in reducing wasteful household food management practices. The implementation guide is designed to teach local governments and community organizations how to implement a Food: Too Good to Waste campaign in their community using the toolkit. In a 2016 report, EPA listed 17 communities in various states, including Rhode Island and Vermont, that had implemented Food: Too Good to Waste campaigns and, as part of this implementation, could use outreach and engagement tools adaptable to the needs of their communities based on their available resources. The campaigns focused on helping households make small shifts in how they shop, prepare, and store food to prevent it from being wasted. Waste Reduction Model.
According to the agency's website, EPA created this tool to help solid-waste planners and organizations track greenhouse gas emissions reductions from several different waste-management practices, including source reduction, recycling, anaerobic digestion, combustion, composting, and landfilling. For example, a food service establishment can use the tool to create an estimate of the greenhouse gas savings associated with decreasing the amount of bread and produce landfilled. Tip sheets. EPA developed tip sheets about reducing FLW for different sectors involved in the food supply chain, including manufacturers and restaurants, to emphasize FLW prevention options. EPA officials told us that they make these tip sheets available online on the agency's website and attend conferences to disseminate information. For example, the officials said that they attended the Midwest Food Recovery Summit in September 2018 and provided these tip sheets at the EPA information booth during the conference. In addition, USDA has been involved in the following FLW reduction efforts to raise awareness and educate various stakeholders along the food supply chain: FLW roundtable meeting. In May 2018, the Secretary of Agriculture hosted a roundtable meeting with members of Congress, food industry representatives, and nonprofit groups to raise awareness about FLW and discuss potential solutions. FoodKeeper application. In 2015, USDA, in partnership with Cornell University and the Food Marketing Institute, launched the FoodKeeper application, a tool to provide consumers with specific storage advice, including storage timelines for the refrigerator and freezer for food and beverage items. USDA officials stated that the agency updated the application in October 2018 to add new features, including the ability to search for food and beverage items in Spanish and storage information for additional food items. USDA has continued to highlight the FoodKeeper application as part of USDA and EPA's Food Waste Challenge effort to help educate consumers about reducing FLW. Infographic. Also in 2015, the USDA Center for Nutrition Policy and Promotion issued an infographic, "Let's Talk Trash," to help inform American consumers about the benefits of reducing FLW, as shown in figure 4. USDA made the infographic available on its www.choosemyplate.gov website, which includes additional resources to help consumers think about the amount of FLW at home. Strategies for schools. In 2015, USDA's Food and Nutrition Service issued a summary of strategies for schools to reduce FLW that included a list of resources to encourage FLW diversion by donating uneaten food to nonprofit institutions and information about composting. The Food and Nutrition Service also recommended that schools introduce "share tables" into cafeterias so that students could exchange unwanted but otherwise edible food items. In June 2016, USDA issued a memorandum to remind states' Child Nutrition Program directors of the opportunities to use share tables to reduce FLW in a number of Child Nutrition Programs, such as the National School Lunch Program. In July 2016, the Food and Nutrition Service issued guidance directed at school staff members and students, among others, with tips to prevent FLW, including encouraging students to use share tables.
To further provide information, raise awareness, and educate different stakeholders along the food supply chain, EPA and USDA have collaborated on the following FLW reduction efforts: A Guide to Conducting Student Food Waste Audits. In 2017, EPA, USDA, and the University of Arkansas collaborated to create this guide to educate students and school personnel about the amount of FLW in their cafeterias. The guide provides information on why and how to do a food waste audit and what to do with the data collected. It also offers FLW prevention ideas. Public/private partnerships. EPA and USDA support public/private partnerships to provide key information, solutions, and best practices to reduce FLW across the food supply chain. For example, EPA and USDA established the U.S. Food Loss and Waste 2030 Champions initiative in November 2016 as a way to increase efforts to meet the national FLW reduction goal. This 2030 Champions initiative recognizes organizations that have committed to cutting FLW in their own operations in half by 2030 and encourages Champions to report on their progress. In May 2018, EPA hosted a public webinar to highlight the actions of three 2030 Champions and share the best practices, tools, and resources these organizations created to prevent food from going to waste. In March 2019, USDA officials told us that eight additional businesses had joined the 15 Champions involved in the initiative since its launch. In addition, EPA and USDA support Further With Food, an online hub developed by EPA, USDA, and 10 other organizations that provides information and solutions to raise public awareness and reduce FLW. Participation in external conferences. EPA and USDA have conducted outreach, including through participation in conferences and seminars, and have disseminated resources related to FLW. For example, EPA and USDA each sent an official to attend and present at the National Academies of Sciences, Engineering, and Medicine's Reducing Food Loss and Waste: A Workshop on Impacts in October 2018. USDA officials told us they helped fund this workshop and helped develop the workshop's objectives, which were to explore the effects of reducing FLW on food availability and other factors; to examine the role of governments, nongovernmental organizations, and the private sector in adopting best practices to improve the benefits and reduce the costs of reducing FLW; and to discuss opportunities for partnerships to address FLW. USDA has also collaborated with FDA to address FLW. For example, USDA and FDA are both on the Executive Board of the Conference for Food Protection, an organization that brings together representatives from the food industry, government, academia, and consumer organizations to identify and address emerging problems of food safety. In April 2016, this group released a Comprehensive Resource for Food Recovery Programs to reduce FLW through the recovery of consumable food. This report is intended to assist stakeholders involved in the recovery, distribution, or service of food to people who are food insecure. The report references the national food standards at the retail level, as expressed in the FDA Food Code, to minimize the occurrence of risk factors that contribute to foodborne illness. FDA contributed to the submission of an issue to the 2018 Biennial Meeting of the Conference for Food Protection that sought to promote uniformity in how state and local governments regulate food donation and recovery operations in retail and foodservice establishments.
In addition, FDA has disseminated information to the public about strategies to reduce FLW while maintaining food safety and has referred to USDA's FoodKeeper application as a resource for learning how to store perishable food and employ safe storage practices. EPA and USDA Have Taken Some Actions to Increase Infrastructure and Capacity to Support Efforts to Reduce FLW EPA and USDA have each taken some actions to increase infrastructure and capacity to support efforts to reduce FLW in the United States. EPA has taken some actions to increase infrastructure and capacity to reduce FLW in the United States. For example: Technical assistance. EPA provides technical assistance to state and local governments in developing anaerobic digestion projects, a technology for processing wasted food that, according to EPA's Food Recovery Hierarchy, is more desirable than landfilling or incineration. Excess Food Opportunities Map. EPA's Excess Food Opportunities Map displays the locations of more than 500,000 industrial, commercial, and institutional food generators that may potentially produce excess food and more than 4,000 potential recipients of that excess food. The map also provides information at the level of specific establishments, including estimates of excess food generation that may help users identify alternatives to sending excess food to landfills. The map helps users identify potential infrastructure gaps for managing excess food, inform FLW management decisions at the local level, and identify potential sources of food for rescue and reuse, among other purposes. An EPA official told us that the communication plan for the launch of the Excess Food Opportunities Map included a webinar announcing the map in July 2018 and presentations about the map at various conferences, including the National Academies' Reducing Food Loss and Waste workshop in October 2018. The official also stated that emails about the map were sent to over 13,000 people and that approximately 700 people attended the webinar EPA hosted in July 2018. Recycling infrastructure. EPA's Sustainable Materials Management program's strategic plan describes EPA's role in providing states, businesses, and other stakeholders with, among other things, tools, guidelines, and technical support to more effectively manage waste, including by helping increase recycling infrastructure. In May 2018, EPA cohosted a recycling infrastructure workshop to identify solutions for creating infrastructure for anaerobic digestion and composting. In addition, EPA officials told us that the agency is in the process of updating its recycling guide for state and local governments and that officials anticipate completing it by the end of 2020. USDA also has taken some actions to increase infrastructure and capacity to reduce FLW in the United States. For example: Food programs. USDA officials told us that USDA food programs, such as The Emergency Food Assistance Program, support efforts to feed people and to provide access to affordable and nutritious food. For example, food donation organizations that are recipients of program funds may use these funds to pay the direct expenses associated with the distribution of USDA foods, such as fruits, vegetables, and beans. New FLW-reduction technologies. USDA's Agricultural Research Service has various research programs, including one to enhance the quality and utilization of agricultural products.
Potential benefits listed as part of this research program are minimizing food product losses and reducing FLW through the development of farm production technologies, such as an apple-sorting system that will help reduce apple harvest losses. According to USDA officials, most of the innovations of this research program involve creating value-added products from "ugly produce" or from food processing byproducts, such as orange peel or mushroom-stalk waste, or creating new technologies to prolong the shelf life of food products. Meat and poultry donation rules. USDA's Food Safety and Inspection Service issued a directive that outlines procedures for donating certain meat and poultry products to nonprofit organizations. The Food Safety and Inspection Service has also begun, under certain circumstances, to recognize food banks as "retail-type" establishments, which allows food banks to break down bulk shipments of federally inspected meat or poultry products, wrap or rewrap those products, and label the products for distribution to consumers. In one case, this recognition enabled a nonprofit food donation organization to receive 2.6 million pounds of donated food from manufacturers in 2016, according to USDA documents. Grant funding. USDA's Rural Utilities Service has provided some funding to support FLW reduction infrastructure in rural communities. For example, USDA awarded a 2016 USDA Rural Utilities Service Solid Waste Management grant to the University of Iowa's Waste Reduction Center, which has worked toward addressing the issue of FLW disposal. More recently, in 2018, USDA awarded a solid-waste management grant to the Center for EcoTechnology, a nonprofit that provides technical assistance to implement FLW diversion programs. Low-interest loans. USDA's Farm Storage Facility Loan Program provides low-interest loans for producers to store, handle, and transport the food they produce. The loans are designed to assist a diverse range of farming operations, including small and midsized businesses and operations supplying local food and farmers markets. The program helps keep food from being damaged by pests or inclement weather, among other things, so that more food can reach store shelves. Funding for renewable energy systems. USDA's Rural Energy for America Program provides grants and loan guarantees to farmers, ranchers, and eligible small businesses to install renewable energy and energy-efficiency systems. For example, according to a Rural Energy for America Program Fact Sheet, funds may be used for the purchase, installation, and construction of renewable energy systems, such as anaerobic digesters. In a 2016 USDA Rural Development report, USDA provided examples of anaerobic digesters that use FLW to produce a biogas that is converted into energy. EPA, USDA, and FDA Have Done Some Initial Planning toward Achieving the National FLW Reduction Goal EPA and USDA have each taken some actions to plan and organize their efforts toward achieving the national FLW reduction goal, such as issuing strategic plans and establishing working groups. Additionally, EPA, USDA, and FDA signed a joint agency formal agreement in October 2018 aimed at increasing collaboration and coordination among the agencies on FLW reduction efforts.
EPA, USDA, and FDA only recently initiated their interagency collaboration on FLW reduction efforts toward achieving the national FLW reduction goal, but have not yet taken certain steps that align with key practices for interagency collaboration. EPA has taken actions to guide its own efforts toward achieving the national FLW reduction goal. For example, in 2015, EPA issued a strategic plan that included a strategic priority area of sustainable food management. Subsequently, EPA developed an internal planning document (U.S. EPA Sustainable Management of Food Strategy, Fiscal Year 2018-2022). This planning document established action areas, goals, and activities for reducing FLW to achieve the national FLW reduction goal. For example, the plan identified five action areas: data and measurement; collaboration and partnerships; technical assistance; infrastructure and capacity; and communication and outreach. According to EPA officials, the agency intends to use the plan to track its progress and measure results toward the national FLW reduction goal. USDA has also taken actions to guide its own efforts toward achieving the national FLW reduction goal. For example, according to USDA officials, the department established an FLW working group in 2015 that currently meets on a monthly basis. According to officials from the Office of the Chief Economist and ERS, the department also designated an individual within the Office of the Chief Economist to guide USDA's FLW efforts. In addition, in March 2016, the National Institute of Food and Agriculture's Pilot Science Outcome Committee on Environmental Sustainability identified FLW as a top science priority area to address environmental sustainability. According to the committee, FLW is an integral component of environmental sustainability, and mitigating FLW has the potential to create economic, environmental, and social benefits while contributing to food security, resource conservation, and the mitigation of climate change. Furthermore, EPA and USDA have contributed to the work of the Commission for Environmental Cooperation, an intergovernmental organization established by the governments of Canada, Mexico, and the United States to facilitate effective cooperation on the conservation, protection, and enhancement of the environment in their territories. The organization has an initiative to identify challenges, opportunities, and solutions related to increasing organic waste diversion and processing capacity in North America. This organization issued a report in 2017 about, among other things, the management of organic waste and best practices for reducing FLW and diverting other organic waste materials away from landfills. EPA is on the steering committee for this effort. According to an EPA announcement in March 2019, the commission issued a practical guide and technical report on FLW measurement. Moreover, in October 2018, the Secretary of Agriculture hosted a public meeting to promote FLW reduction. During this meeting, EPA, USDA, and FDA signed a formal interagency agreement referred to by the agencies as the Winning on Reducing Food Waste initiative. Under this 2-year agreement, the agencies committed to developing an interagency strategic plan to increase collaboration and coordination among the agencies on their FLW reduction efforts. According to the agreement, this additional collaboration is intended to strategically align each agency's efforts to better educate Americans on the impacts of reducing FLW.
The agencies also agreed to, where appropriate, educate actors throughout the supply chain on the best practices to reduce FLW in the growing, manufacturing, transporting, selling, and disposing of food and the handling, preparation, and storage of food, as well as creating new uses for excess food. The formal agreement mentions public-private partnerships and, according to EPA officials, the agencies intend to use the views of stakeholders in the public, private, and nonprofit sectors to inform their strategic plan. According to EPA officials, the agencies intend to discuss common goals and to identify additional initiatives as appropriate to achieve the national FLW reduction goal. In announcing this initiative, the Secretary of Agriculture affirmed the importance of reducing FLW by saying that “an unacceptable percentage of our food supply is lost or wasted” and that “as the world’s population continues to grow and the food systems continue to evolve, now is the time for action to educate consumers and businesses alike on the need for food waste reduction.” In addition, the FDA Commissioner stated that “by taking steps to address obstacles that food donation and recovery programs may face in giving unsold foods a second opportunity and helping food producers find ways to recondition their products so that they can be safely sold or donated, our aim is to both reduce food waste and nourish Americans in need.” In April 2019, the agencies held a public event to announce their Winning on Reducing Food Waste Federal Interagency Strategy. This strategic plan identified six prioritized action areas for activities to reduce FLW. For example, the agencies plan to, among other things, increase consumer education and outreach efforts; increase coordination and guidance on FLW measurement; and clarify and communicate information on food safety, food date labels, and food donations. In addition, the agencies signed a formal agreement with ReFED to, among other things, better evaluate and improve upon strategies to reduce FLW. For example, according to the 2019 agreement, the agencies and ReFED intend to leverage existing partnerships to advance data-collection and measurement activities related to FLW. Finally, EPA announced that it had selected three recipients to receive EPA funding to support infrastructure projects to help reduce FLW and divert FLW from landfills. In our prior work, we have found that key practices to enhance and sustain interagency collaboration include agreeing on roles and responsibilities and developing mechanisms to monitor, evaluate, and report on results. In addition, we have found that key practices for agency collaboration call for clearly defining short- and long-term outcomes. Furthermore, such interagency efforts benefit from identifying how leadership commitment will be sustained. Lastly, we identified a key practice that calls for ensuring that the relevant stakeholders have been included in the collaborative effort. This collaboration can include other federal agencies, state and local entities, and private and nonprofit organizations. According to the strategic plan, the agencies built on information from several sources, including prior GAO work on implementing interagency collaborative mechanisms, to develop the Winning on Reducing Food Waste Federal Interagency Strategy. However, this strategic plan does not align with certain key practices for interagency collaboration. 
For example, the first priority area identified in the strategic plan is to enhance interagency collaboration, and the strategic plan states that an interagency collaborative mechanism will be established to reduce programmatic redundancies and leverage complementary activities. However, the strategic plan does not identify how this mechanism will be used to monitor, evaluate, or report on results; establish a time frame for developing this collaborative mechanism; or describe how the agencies will engage relevant stakeholders, such as other federal, state, and local agencies, nonprofit organizations, academic institutions, food industry entities, international organizations, and tribal organizations. In addition, several of the strategic plan's priority areas address specific aspects of reducing FLW, such as encouraging FLW reduction by federal agencies in their respective facilities. However, the strategic plan does not identify the roles and responsibilities of the respective agencies for taking action in these areas, and it does not clearly define what specific short- and long-term outcomes the agencies intend to achieve. Furthermore, the agencies have not identified how they intend to sustain leadership commitment to this goal. For example, the Winning on Reducing Food Waste formal interagency collaborative agreement is a 2-year agreement among the agencies, but the national FLW reduction goal calls for reducing FLW by half by 2030, which falls well beyond this 2-year time frame. According to a USDA official, the agencies do not have plans for how they will continue their interagency collaboration beyond the life of the current agreement. This official noted that the agencies do not intend to update the strategic plan for the duration of the 2-year agreement and that the agencies will release more information to the public about specific actions and timelines as it becomes available. By incorporating leading practices for interagency collaboration as they implement their interagency strategic plan, EPA, USDA, and FDA would have better assurance that they are effectively collaborating toward achieving the national FLW reduction goal. Conclusions Achieving the national FLW reduction goal could provide significant economic, environmental, and social benefits to the United States, such as helping to lower consumer expenses, reducing harmful greenhouse gas emissions, and providing additional meals to feed food-insecure people through increased food donations. This is an important issue that requires action across the food supply chain and collaboration among federal agencies and nonfederal stakeholders, such as states and businesses. EPA and USDA have taken steps to develop programs and policies that aim to reduce FLW and to collaborate on their various initiatives. In addition, EPA, USDA, and FDA have taken some actions to plan and organize their efforts toward achieving the national goal of reducing FLW by half by 2030, including announcing an interagency strategic plan to reduce FLW. However, this strategic plan does not align with key practices in interagency collaboration that we have identified, such as agreeing on roles and responsibilities; developing mechanisms to monitor, evaluate, and report on results; clearly defining short- and long-term outcomes; identifying how leadership commitment will be sustained; and ensuring that the relevant stakeholders have been included in the collaborative effort.
By incorporating such leading practices for interagency collaboration as they implement their interagency strategic plan, EPA, USDA, and FDA would have better assurance that they are effectively collaborating toward achieving the national FLW reduction goal. Recommendations for Executive Action We are making three recommendations to the agencies in our review. Specifically: The Administrator of EPA should work with the Commissioner of FDA and the Secretary of Agriculture to incorporate leading collaboration practices as they implement their interagency FLW reduction strategic plan, to include (1) agreeing on roles and responsibilities; (2) developing mechanisms to monitor, evaluate, and report on results; (3) clearly defining short- and long-term outcomes; (4) identifying how leadership commitment will be sustained; and (5) ensuring that the relevant stakeholders have been included in the collaborative effort. (Recommendation 1) The Commissioner of FDA should work with the Administrator of EPA and the Secretary of Agriculture to incorporate leading collaboration practices as they implement their interagency FLW reduction strategic plan, to include (1) agreeing on roles and responsibilities; (2) developing mechanisms to monitor, evaluate, and report on results; (3) clearly defining short- and long-term outcomes; (4) identifying how leadership commitment will be sustained; and (5) ensuring that the relevant stakeholders have been included in the collaborative effort. (Recommendation 2) The Secretary of Agriculture should work with the Administrator of EPA and the Commissioner of FDA to incorporate leading collaboration practices as they implement their interagency FLW reduction strategic plan, to include (1) agreeing on roles and responsibilities; (2) developing mechanisms to monitor, evaluate, and report on results; (3) clearly defining short- and long-term outcomes; (4) identifying how leadership commitment will be sustained; and (5) ensuring that the relevant stakeholders have been included in the collaborative effort. (Recommendation 3) Agency Comments We provided a draft of this report to EPA, USDA, and the Department of Health and Human Services for review and comment. We also provided CEQ and OMB a draft of this report for review. In its comments, reproduced in appendix I, EPA agreed with our recommendation to the agency and described current and future actions to implement the recommendation. Similarly, in its comments, reproduced in appendix II, USDA agreed with our recommendation to it and described current and future actions to implement the recommendation. In addition, in its comments, reproduced in appendix III, the Department of Health and Human Services concurred with our recommendation to it and described current and future actions to implement the recommendation. USDA and CEQ provided technical comments, which we incorporated as appropriate. In response to our recommendations, EPA, USDA, and the Department of Health and Human Services said that they will work with each other to incorporate leading collaboration practices as they implement the interagency FLW reduction strategic plan. Both EPA and USDA stated that they intend to complete implementation of their respective recommendations by October 2020, to align with the duration of the 2-year formal agreement between EPA, USDA, and FDA. The Department of Health and Human Services stated that FDA issued a letter to the food industry supporting the industry's efforts to standardize voluntary quality date labeling.
We are sending copies of this report to the appropriate congressional committees, the Administrator of EPA, the Secretary of Agriculture, the Secretary of Health and Human Services, the Director of OMB, the Chair of CEQ, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Comments from the U.S. Environmental Protection Agency Appendix II: Comments from the U.S. Department of Agriculture Appendix III: Comments from the U.S. Department of Health and Human Services Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Anne K. Johnson (Assistant Director), Joseph Capuano (Analyst in Charge), David Bennett, Carol Bray, Tara Congdon, Juan Garay, Serena Lo, Greg Marchand, Jordan Mettica, Oliver Richard, Dan Royer, Marie Suding, Kiki Theodoropoulos, and Sarah Veale made key contributions to this report.
Why GAO Did This Study The Natural Resources Defense Council reported that in the United States up to 40 percent of the food supply goes uneaten. FLW has significant economic, environmental, and social effects on various stakeholders, including businesses and consumers. In 2015, EPA and USDA announced a national goal to reduce FLW in the United States by half by 2030. In 2018, FDA joined EPA and USDA in these efforts. GAO was asked to examine efforts by federal agencies to reduce FLW. This report (1) describes nonfederal stakeholder views on key challenges to reducing FLW in the United States, (2) describes actions EPA and USDA have taken to address key challenges to reducing FLW in the United States, and (3) examines federal planning efforts toward achieving the national FLW reduction goal. GAO reviewed federal reports on FLW; analyzed agency documents; interviewed officials from EPA, FDA, USDA, and states and representatives of nonfederal stakeholders, such as academic institutions, industry, international organizations, nonprofit organizations, and a tribal organization, based on their demonstrated expertise on FLW; and attended conferences on FLW. What GAO Found GAO identified three key areas in which challenges exist to reducing food loss and waste (FLW) in the United States: (1) limited data and information about FLW; (2) a lack of awareness and education about FLW; and (3) limited infrastructure and capacity. For example, the causes of FLW vary across the stages of the food supply chain (see figure), but the share of total FLW due to each of these causes is currently unknown, according to a U.S. Department of Agriculture (USDA) report. GAO identified these challenges through interviews with selected stakeholders. The Environmental Protection Agency (EPA) and USDA have taken initial actions to address key challenges to reducing FLW in the United States since announcing a national FLW reduction goal in 2015. These actions include conducting a study to identify gaps in information about farm-level FLW and building public awareness about ways to reduce FLW. EPA, USDA, and the U.S. Department of Health and Human Services' Food and Drug Administration (FDA) have taken some actions to plan and organize their efforts toward achieving the national FLW reduction goal. For example, EPA developed an internal plan that established action areas, goals, and activities for reducing FLW, and USDA designated an individual to guide USDA's FLW efforts. In October 2018, EPA, USDA, and FDA signed an interagency agreement committing them to developing a strategic plan to improve their collaboration and coordination in reducing FLW. In April 2019, the agencies announced an interagency strategic plan with prioritized action areas to reduce FLW, but this strategic plan does not address how it will incorporate key practices for interagency collaboration that GAO identified, including (1) agreeing on roles and responsibilities; (2) developing mechanisms to monitor, evaluate, and report on results; (3) clearly defining short- and long-term outcomes; (4) identifying how leadership commitment will be sustained; and (5) ensuring that the relevant stakeholders have been included in the collaborative effort. By incorporating such practices as they implement their interagency strategic plan, EPA, USDA, and FDA would have better assurance that they were effectively collaborating toward achieving the national FLW reduction goal. What GAO Recommends GAO is making three recommendations in this report. 
GAO is recommending that EPA, FDA, and USDA incorporate leading collaboration practices as they implement their interagency strategic plan to reduce FLW.
Background The GDPs were constructed in the 1940s and 1950s and were used to enrich uranium for the U.S. military as well as the nation’s domestic nuclear power industry. The GDPs are located near Oak Ridge, Tennessee; Paducah, Kentucky; and Portsmouth, Ohio (see fig. 1). The GDPs were rendered obsolete due to the emergence of newer, more efficient technologies and the globalization of the uranium enrichment market. All three GDPs eventually ceased uranium enrichment activities, with Paducah being the last to stop enriching by 2013. The GDP sites are similar in many ways. For example, the primary structures at each GDP are large buildings for uranium enrichment processing using the same gaseous diffusion technology. In addition, at each of the sites, these large buildings all housed similar equipment, such as compressors, converters, and other equipment necessary for enriching uranium. EM measures these buildings in acres rather than square feet (see fig. 2). For example, the five uranium enrichment processing buildings that once stood at Oak Ridge measured a total of 114 acres. Each GDP site also consists of hundreds of other similar buildings and facilities used to fabricate, service, repair, and clean machinery as well as additional infrastructure, such as electrical switchyards and cooling towers. Rescission of the USEC Fund The Energy Policy Act of 1992 created the United States Enrichment Corporation (USEC) as a government corporation authorized to, among other things, acquire, market, and enrich uranium. The 1992 Act also established a revolving fund in the U.S. Treasury—the USEC Fund—for carrying out USEC’s purposes. In 1996, Congress enacted the USEC Privatization Act authorizing establishment of a private, for-profit corporation. The act provided that “expenses of privatization” were to be paid from certain accounts, including the USEC Fund. One week before privatization, Congress enacted the “McConnell Act,” which reserved approximately $373 million from certain accounts, including the USEC Fund, for the disposition of depleted uranium stored at government-owned enrichment plants operated by USEC. USEC was privatized on July 28, 1998. After privatization, the USEC Fund balance of $1.2 billion was retained on the books of the Treasury. Since then, the balance of the USEC Fund has grown to an estimated $1.695 billion as of fiscal year 2020. In 2015, we found that the entire balance of the USEC Fund is available for permanent rescission since the two statutorily authorized uses for the USEC Fund have been fulfilled: (1) environmental clean-up expenses pursuant to the “McConnell Act,” and (2) expenses of privatization. In the fiscal year 2017 federal budget, the Administration proposed using the balances of the USEC Fund to carry out purposes authorized to be funded by the Uranium Enrichment Decontamination and Decommissioning Fund. This is not one of the authorized purposes of the USEC Fund. We have previously found that DOE’s effort to utilize USEC Fund monies instead of general fund appropriations to support efforts other than the authorized purpose of the USEC Fund would diminish transparency in budgeting. In May 2019, we highlighted this issue in our annual report on fragmentation, overlap, and duplication. As of September 2019, Congress had not passed legislation to permanently rescind the balance of the USEC Fund, as we suggested in April 2015. 
Rescission may increase the transparency of federal agencies' budget presentations and help Congress have a clear understanding of how new funding requests relate to funding decisions for existing projects with continuing resource needs. Oak Ridge. The Oak Ridge GDP began enriching uranium in the 1940s, initially for the U.S. military and later for commercial nuclear power reactors, and enrichment continued until 1985. The Oak Ridge GDP permanently closed in 1987. Portsmouth. The Portsmouth GDP, a 3,778-acre site located north of Portsmouth, Ohio, operated from 1954 until 2001. The GDP enriched uranium for both commercial reactor fuel and military applications. The Portsmouth GDP includes three uranium enrichment processing buildings, as well as over 300 other buildings and facilities. Management of both Portsmouth and Paducah has changed over time. Specifically, the Energy Policy Act, as amended, established the United States Enrichment Corporation (USEC) as a government corporation to, among other things, provide uranium enrichment services and take over operations of the GDPs in Portsmouth and Paducah beginning in 1993 (see sidebar). By 1998, USEC was privatized under the USEC Privatization Act and became a subsidiary of the newly created USEC, Inc. USEC produced low-enriched uranium for commercial power plants until 2001, when it ceased operations at the Portsmouth GDP. Later that year, the plant was placed on cold standby—a dormant condition that would allow operations to be resumed within 18 to 24 months if needed—and USEC, under contract with DOE, maintained the site. In 2011, USEC returned the Portsmouth GDP to DOE, and EM's contractor initiated deactivation of the uranium enrichment processing buildings. Paducah. The Paducah GDP, located on 3,556 acres of land west of Paducah, Kentucky, initially produced enriched uranium for nuclear weapons from 1952 until 1993. From 1993 through 2013, USEC leased and operated the facilities to produce enriched uranium for the commercial nuclear power sector. Similar to the Portsmouth GDP site, management of the Paducah site has changed over time. The Paducah GDP has four uranium enrichment processing buildings as well as more than 500 other buildings and facilities. After shutting down operations in 2013, USEC returned the Paducah GDP to DOE in 2014. Table 1 provides additional detail on the GDPs, including the date when cleanup began, the site size, and the size of the contractor workforce performing the cleanup activities. The GDP Cleanup Process Cleanup of the GDPs is a complex process that involves multiple, coordinated activities: surveillance and maintenance, D&D, and site remediation. Throughout the cleanup process, EM must conduct surveillance and maintenance activities at the GDPs to ensure public and worker safety. This includes maintaining and repairing site infrastructure, such as buildings and facilities and electrical and water supplies. The D&D process involves the following activities: deactivation, decontamination, decommissioning, and demolition. According to the National Academies and DOE, these cleanup activities are encompassed within the detailed processes described below: Characterization and measurement of the contaminants present. During this process, cleanup workers determine the identities, forms, amounts, and locations of hazardous and radioactive substances.
According to DOE, common contaminants found at the GDPs include radioisotopes stemming from the historical enrichment process (e.g., uranium and technetium-99); hazardous chemicals (e.g., trichloroethylene, polychlorinated biphenyls, and beryllium); asbestos; and other hazardous materials typical of industrial facilities. When the GDPs were in operation, workers used volatile organic compounds in large quantities to clean and degrease equipment, which resulted in the release of such compounds, specifically trichloroethylene, into the environment. These compounds contaminated soil, surface water, and groundwater when they were spilled, burned in pits, discharged in holding ponds, or placed in trenches for disposal. Removal of large uranium deposits. During this process, cleanup workers remove large deposits of enriched uranium from the process equipment and piping. This step is necessary at some of the uranium processing buildings to reduce the possibility of nuclear criticality—an event in which an assemblage of enriched uranium produces a short-duration (millisecond) burst of heat and radiation. This step is also necessary to resolve security concerns regarding the protection and handling of special nuclear materials. Disassembly and decontamination of equipment and building structural components. Hundreds of large process equipment components, such as converters, compressors, and motors, may need to be disassembled and decontaminated. In addition, the floors, walls, and other structural components of buildings that housed such equipment must be decontaminated. Demolition of buildings and facilities. Hundreds of structures—including analytical laboratories, electrical switchyards, and uranium enrichment processing buildings that are many acres in size—must be demolished at the GDP sites. Management or disposal of waste. The D&D process generates significant amounts of waste, including building materials and hazardous and radioactive waste removed from equipment and piping. Waste management activities include treatment, storage, transportation, and disposal of low-level radioactive waste, hazardous waste, mixed radioactive and hazardous waste, and sanitary waste. In addition to surveillance and maintenance activities and the D&D of buildings and facilities, remediation of contaminated soils, surface water, and groundwater is a part of GDP cleanup and is an important aspect of protecting human health and the environment. According to DOE, remediation of contaminated soils, surface water, and groundwater involves assessing the site, including subsurface soils and groundwater contaminated by past GDP operation, and addressing the sources of contamination. According to EM, the Paducah GDP has the most groundwater and soil contamination of the three GDPs, and the Portsmouth GDP has the least amount of contamination. EPA and State Regulators' Roles in GDP Cleanup At each GDP site, EM is required to consult and reach agreement with federal and state regulatory agencies in determining cleanup requirements, strategies, and priorities. Federal laws, including the Resource Conservation and Recovery Act of 1976 (RCRA), as amended; the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended; and cleanup agreements with state regulatory agencies in Kentucky, Ohio, and Tennessee govern cleanup at the three GDPs. RCRA establishes the framework for the management of hazardous and non-hazardous solid waste.
CERCLA authorizes the federal government to respond directly to releases or threatened releases of hazardous substances, pollutants, or contaminants that may endanger public health or the environment. CERCLA requires that EPA maintain a National Priorities List that includes some of the most seriously contaminated sites that EPA identifies for long-term cleanup of hazardous substances, pollutants, or contaminants throughout the United States and its territories. Federal sites on this list are required to have an interagency agreement for expeditious completion of all remedial action at the facility. The interagency agreement, termed a Federal Facility Agreement, guides the cleanup process and sets enforceable milestones for priority cleanup activities as agreed to by all the parties to the agreement. The Oak Ridge and Paducah GDPs are both included on EPA's National Priorities List under CERCLA. As a result, both sites have negotiated tri-party Federal Facility Agreements signed by DOE, EPA, and the relevant state regulator. Under the terms of these agreements, DOE must reach agreement with EPA and Tennessee and Kentucky state regulators to establish cleanup priorities and schedules for work with enforceable milestones subject to the agreements' dispute resolution procedures. In addition, the agreements state that DOE must consult with these regulators in making budget requests to Congress for the GDPs. The Portsmouth GDP is not listed on EPA's National Priorities List due to an agreement among regulators and, therefore, does not have a Federal Facility Agreement. Instead, the Ohio regulator is responsible for overseeing cleanup under a State of Ohio Consent Decree under RCRA and an Ohio Environmental Protection Agency Director's Final Findings and Orders for Decontamination and Decommissioning, which guide the cleanup process at Portsmouth. Under Presidential Executive Order 12580, DOE is the lead federal agency for implementation of CERCLA at Portsmouth. According to DOE's Fiscal Year 2020 Congressional Budget Justification, the Ohio regulator used the CERCLA framework in developing the Orders. According to EPA officials we interviewed, EPA is not involved in regulating the CERCLA or RCRA components of the cleanup at the Portsmouth GDP. The D&D Fund Decontamination and Decommissioning Fund: Uranium and Thorium Reimbursements Title X of the Energy Policy Act, as amended, authorizes the Decontamination and Decommissioning (D&D) Fund to reimburse licensees of uranium and thorium processing sites for their portion of D&D activities, reclamation efforts, and other cleanup costs attributable to the uranium and thorium materials they sold to the federal government. These sites became contaminated with radon and other decay products of uranium over time. According to a DOE report, as of 2017, there were ten sites that were continuing remedial activities and where DOE was continuing to provide reimbursements. According to the 2017 DOE report, DOE had at the time issued about $716 million in reimbursement payments since inception of the D&D Fund. The largest recipient is West Chicago Environmental Response Trust, with over $380 million in reimbursement payments through fiscal year 2017. As of fiscal year 2016, DOE estimates that the total remaining payouts to uranium and thorium producers will be approximately $164 million. In 1992, the Energy Policy Act established the D&D Fund to pay for the cleanup of the three GDPs.
The act authorized $480 million in annual contributions to the D&D Fund (annually adjusted for inflation) for a period of 15 years—from fiscal years 1993 through 2007. According to the act, of the $480 million in annual contributions originally authorized, up to $150 million was to come from a special assessment collected from domestic utility companies that used the enriched uranium produced by the GDPs for nuclear power generation, and the remainder was authorized to be appropriated by the federal government for a period of 15 years. While domestic utility payments were discontinued in 2007, as prescribed by the 1992 Energy Policy Act, additional sums have continued to be appropriated for the D&D Fund. The act specified that any unused balances in the D&D Fund be invested in Treasury securities and any interest earned be made available to pay for activities covered under the D&D Fund. The act also authorizes reimbursements to uranium and thorium processing site licensees who provided raw materials to the GDPs for their cleanup costs (see sidebar). The Energy Policy Act, as amended, authorizes the D&D Fund to pay for the costs of all D&D and remediation activities at the GDPs. Specifically, according to EM officials, the D&D Fund is used to pay for the following cleanup activities: (1) D&D of inactive facilities either by cleaning up the facilities so they could be reused or by demolishing them; (2) remedial actions such as assessing and treating groundwater or soil contamination; (3) waste management, such as the transport and disposal of hazardous waste; (4) the surveillance and maintenance of the GDPs, such as general repairs to keep the buildings and facilities in a safe condition; (5) uranium and thorium licensee reimbursements; (6) training for contractor personnel who work on D&D activities; and (7) other activities, such as legal costs associated with the GDPs, funding to support site-specific advisory boards at Portsmouth and Paducah, and pension costs of workers involved in uranium enrichment or D&D. Other Funding Sources Used for Cleanup According to EM officials, there are additional cleanup-related activities taking place at the GDPs that are not covered by the D&D Fund, which include: (1) security—which provides services to protect nuclear materials, sensitive uranium enrichment technology, equipment, and facilities; (2) operation of the onsite waste disposal facility at Oak Ridge; and (3) conversion of depleted uranium hexafluoride—a byproduct of the enrichment process—into a more stable form, such as uranium oxide, that will require eventual disposal (see sidebar). Depleted uranium hexafluoride—referred to as depleted uranium “tails”—is a byproduct of the uranium enrichment process. The uranium enrichment process involves concentrating uranium-235, which is the isotope of uranium that undergoes fission to release enormous amounts of energy. Natural uranium contains 0.7 percent of the uranium-235 isotope, and tails contain less uranium-235 than natural uranium (i.e., less than 0.7 percent of uranium-235). Tails have historically been considered waste because the enrichment process required to extract the remaining useful quantities of uranium-235 is significant and can be costly. In addition, tails may be dangerous to human health and the environment and can form extremely corrosive and potentially lethal compounds when in contact with water. 
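The scale of the tails problem follows from a simple enrichment mass balance. The following is an illustrative calculation; the product and tails assay values are assumptions chosen for this example, not figures from DOE. Letting \(F\), \(P\), and \(T\) denote the masses of feed, enriched product, and tails, and \(x_F\), \(x_P\), and \(x_T\) their respective uranium-235 assays, conservation of total mass and of uranium-235 gives

\[
F = P + T, \qquad F x_F = P x_P + T x_T \quad\Longrightarrow\quad \frac{T}{P} = \frac{x_P - x_F}{x_F - x_T}.
\]

With natural-uranium feed (\(x_F \approx 0.7\) percent), an assumed commercial product assay of \(x_P = 4\) percent, and an assumed tails assay of \(x_T = 0.3\) percent, \(T/P \approx (4 - 0.7)/(0.7 - 0.3) \approx 8\); that is, each kilogram of enriched product leaves behind roughly 8 kilograms of tails, which helps explain why tens of thousands of cylinders of tails require storage and eventual conversion.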
Therefore, the Department of Energy (DOE) has opted to convert its inventory of tails into a more stable chemical form, such as uranium oxide, that would allow for long-term storage and minimize environmental impacts and costs. The Portsmouth and Paducah gaseous diffusion plants (GDP) each store their inventories of tails in thousands of cylinders, and both GDPs have an onsite conversion facility. As of March 2018, DOE estimated that the combined tails stockpile at the Portsmouth and Paducah GDPs totaled approximately 62,000 cylinders. DOE estimates the Portsmouth GDP will complete conversion of its tails inventory by 2034 and Paducah by 2047. Most of the tails inventory at the Oak Ridge GDP (approximately 7,200 cylinders) has been shipped to Portsmouth for conversion. According to DOE officials, the D&D Fund is not used to pay for conversion of the tails. EM officials reported that, to pay for these additional cleanup-related activities, EM has used the Defense Environmental Cleanup and the Non-Defense Environmental Cleanup Appropriation Accounts. At Portsmouth, EM has also transferred natural uranium to site contractors in exchange for cleanup services—a practice EM refers to as "barter." Additional details on this practice are discussed later in the report. Program Management As we reported in February 2019, effective program and project management are important to the success of efforts like the EM program. According to PMI, a program is defined as "related projects, subsidiary programs, and program activities managed in a coordinated way to obtain benefits not available from managing them individually." According to a PMI conference paper, to reach the ultimate goal of a program—to obtain benefits not available from managing the related projects and program activities individually—a structured way of working has to be established. The Program Management Improvement Accountability Act requires the Office of Management and Budget (OMB) to adopt and oversee implementation of government-wide standards, policies, and guidelines for program and project management in executive branch agencies. In June 2018, OMB issued a memorandum on the implementation of this law that includes initial implementation guidance and calls for agencies to generally align their own program management standards to the management practices and principles found in the memorandum. The memorandum states that the act aims to improve program and project management practices within the federal government. The OMB memorandum also states that agencies may use program management leading practices developed by us, other agencies, and external voluntary consensus standard-setting bodies, such as PMI. EM Has Managed Cleanup of the GDPs as Three Individual Sites and Estimates That Cleanup at All Sites Will Not Be Completed Until 2070 at the Latest EM has managed cleanup of the GDPs as three individual sites, rather than as an integrated program, and has not managed the cleanup of the GDPs consistent with relevant program management leading practices. For over a decade, DOE has reported to Congress in its triennial reports that its intent is to manage the GDPs in an integrated manner but has not developed an integrated program management plan, an integrated master schedule, or a reliable, integrated, comprehensive life-cycle cost estimate. In addition, EM estimates that cleanup of the Oak Ridge GDP is nearing completion, that Portsmouth will be completed by 2041, and that Paducah will be completed between 2065 and 2070.
EM Has Managed Cleanup of the GDPs as Three Individual Sites The Energy Policy Act, as amended, establishes a single, shared D&D Fund to pay for the D&D costs of the GDP sites, such that EM must coordinate and make trade-offs in its use of limited resources among the three GDPs. In addition, since 2007, DOE has stated in its triennial reports to Congress that its intent is to manage the GDPs in an integrated manner. While neither EM nor DOE explicitly refers to the management of the GDP cleanup as a program, DOE's stated intent is consistent with PMI's definition of a program—"related projects, subsidiary programs, and program activities managed in a coordinated way to obtain benefits not available from managing them individually." However, we compared EM's management of the cleanup of the three GDPs to the three relevant PMI program management leading practices that we examined—those addressing planning, scheduling, and cost estimating—and found that EM is not managing the cleanup of the GDPs consistent with these practices: Planning—Having a program management plan. We found that EM does not have a GDP-wide program management plan. According to PMI, a program management plan formally expresses an organization's concept, vision, mission, and expected benefits produced by the program; it also defines program-specific goals and objectives. In a 1996 report, the National Academies recommended that DOE develop a GDP-wide program management plan for cleanup of the three GDPs that would help coordinate decisions across the three GDPs. Representatives from the National Academies told us in December 2018 that they continue to believe this recommendation is valid. Furthermore, EPA and state regulators have criticized EM for not having a long-term vision for GDP cleanup. According to EM officials, EM developed site-level plans for each of the three GDPs over time as the GDPs ceased operating and became available for cleanup at different times—Oak Ridge ceased operating in 1987, Portsmouth in 2011, and Paducah in 2013. However, in reviewing what EM officials refer to as GDP program management plans, we found that the documents were created for different purposes and do not contain comparable information. For example: The Oak Ridge plan was created in 2017 as an update of a fiscal year 2014 through 2024 site-level plan for the three EM cleanup sites located at the Oak Ridge Reservation—the GDP, the Oak Ridge National Laboratory, and the Y-12 National Security Complex. This document presents a high-level picture of cleanup activities. EM officials told us that the Oak Ridge plan is intended to be high-level because cleanup of the Oak Ridge GDP is further along than cleanup of the Portsmouth and Paducah GDPs and because the Oak Ridge plan covers all three cleanup efforts at the Oak Ridge Reservation. EM officials also noted that other specific planning materials on the Oak Ridge GDP could be found in other documentation, but such documentation was not in the plan or in a usable form. The document EM provided as the Portsmouth plan contains a series of PowerPoint presentations for a March 2018 symposium on waste management. The PowerPoint slides were presented by both DOE officials and contractor representatives about different projects at the Paducah and Portsmouth sites. However, the slides contain contradictory information on when the Paducah GDP began deactivation—one slide indicates that deactivation began in 2014, but another shows deactivation will begin in 2035.
EM officials at the Paducah GDP provided the site’s 2015 site management plan, which was signed by DOE and the contractor. This plan includes actions taken to date, site prioritization information (i.e., risk prioritization criteria), and key planning assumptions. The Paducah plan is the most comprehensive and detailed of the three.

The individual GDP plans differ in their level of detail; do not present comparable information, such as milestones that each GDP is to meet; and do not reference past, ongoing, or planned work at the other GDPs. As a result, they are not useful as plans for decision-making on the three GDPs in an integrated manner. Further, EM does not have a document that contains a concept, vision, mission, and expected benefits from GDP cleanup or that defines program-specific goals and objectives. By developing a GDP-wide program management plan, EM would have a comprehensive and consistent roadmap to achieve GDP cleanup and would be in a better position to leverage resources among the three GDPs.

Scheduling—Having a reliable, integrated master schedule. We found that EM does not have an integrated master schedule for cleanup of the GDPs. According to PMI’s Program Management Standard, a program master schedule is the top-level program planning document that defines the individual component schedules and dependencies among program components (individual components and program-level activities) required to achieve the program goals. It should include those component milestones that represent an output to the program or share interdependency with other components. The program master schedule should also include activities that are unique to the program, including, but not limited to, activities related to stakeholder engagement, program-level risk mitigation, and program-level reviews. The program master schedule determines the timing of individual components, enables the program manager to determine when benefits will be delivered by the program, and identifies external dependencies of the program.

EM officials told us that the agency’s corporate database—the Integrated Planning, Accountability, and Budgeting System (IPABS)—contains the integrated master schedule for all of EM’s cleanup work, including the GDPs. The purpose of IPABS is to provide information on (1) changes to the life-cycle scope, cost, and schedule and (2) performance data such as earned value, performance metrics, and cleanup milestones. While IPABS provides a top-line planned completion date as well as other information, including cleanup milestones negotiated with regulators and performance metrics, it does not provide all of the information needed to build up to that date, including sequences clearly showing how related portions of work depend on one another. Without such sequencing information, EM cannot identify the consequences of schedule changes or the managerial actions available to respond to them. An integrated master schedule makes it possible to help coordinate cleanup across the GDPs by establishing each GDP site’s schedule and identifying how related portions of work, such as funding profiles and workforce and equipment requirements that tie the sites together, depend on one another. For example, EM officials stated that certain demolition equipment, such as high-reach excavators, is in limited supply and may be shared among the three GDPs. A simplified illustration of such cross-site sequencing follows.
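To make the sequencing point concrete, the following is a minimal sketch in Python of the dependency information an integrated master schedule would encode. The task names, durations, and the cross-site equipment dependency are hypothetical illustrations of ours, not EM’s schedule or data:

    from functools import lru_cache

    # Hypothetical component schedules: durations in months and dependencies.
    # The Paducah demolition task depends on Portsmouth demolition finishing,
    # standing in for shared equipment such as high-reach excavators.
    TASKS = {
        "portsmouth_deactivation": (24, []),
        "portsmouth_demolition": (18, ["portsmouth_deactivation"]),
        "paducah_deactivation": (30, []),
        "paducah_demolition": (20, ["paducah_deactivation", "portsmouth_demolition"]),
    }

    @lru_cache(maxsize=None)
    def earliest_finish(task):
        """Earliest finish, in months from program start, via the dependency chain."""
        months, depends_on = TASKS[task]
        start = max((earliest_finish(d) for d in depends_on), default=0)
        return start + months

    for task in TASKS:
        print(task, "finishes at month", earliest_finish(task))

Encoded this way, a six-month slip in Portsmouth demolition automatically moves the Paducah demolition finish by six months, which is exactly the kind of ripple effect that a top-line completion date in IPABS cannot reveal.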
By creating an integrated master schedule, EM would be in a better position to coordinate individual project activities across the three GDPs and thus help achieve program goals.

Cost Estimating—Having a reliable, integrated, comprehensive life-cycle cost estimate. We found that EM does not have a reliable, integrated, comprehensive life-cycle cost estimate for cleanup of the GDPs consistent with PMI’s Program Management Standard, which calls for estimating a program’s full life-cycle costs. According to PMI, calculating full life-cycle costs and including transition and sustainment costs results in total cost of ownership. Total cost of ownership is considered relative to the expected benefits of one program against another to inform a funding decision. There are numerous estimating techniques to derive program cost estimates. Program cost estimates should also identify any critical assumptions upon which the estimates are made, as these assumptions may prove unfounded in the course of program delivery and require reconsideration of the program business case or revision of the program management plan. Finally, program cost estimation can support or guide cost estimation at the component level. Any prevailing program-level cost estimation guidance intended for use at the component level should be documented and communicated to component managers.

Instead, EM has, over time, developed separate cost estimates for each of the three GDPs that do not reference historic costs at the other GDPs. EM officials stated that IPABS contains the life-cycle cost estimate for EM’s cleanup work, including the GDPs. However, IPABS only provides a top-line cost estimate. It does not provide details on what information is included in developing that estimate, such as any critical assumptions upon which the estimates are made. Moreover, in February 2019 we reported that certain IPABS data, including expenditure data, were not reliable. By developing an integrated, comprehensive life-cycle cost estimate, EM management, Congress, and stakeholders would have information on total cleanup costs, including underlying costs, enabling more informed decision-making on funding and resource allocations from the shared D&D Fund across the three GDPs.

EM officials acknowledged that cleanup work at the GDPs is managed independently by the three sites and not as an integrated program. However, the officials noted that the GDP cleanup work is managed as part of EM’s overall work to clean up radioactive and other hazardous waste that remains at 16 different sites across the nation, which they explained is all managed as one program. Further, according to EM officials, because the cleanup work is part of EM’s overall cleanup program, EM is able to make decisions at a high level to support overall funding priorities, reduce the greatest risks, and effectively use taxpayer dollars. However, in February 2019, we reported on EM’s cleanup program and found that EM’s cleanup policy—which governs its cleanup work—does not follow any of the relevant program management leading practices related to a program’s management of scope, cost, and schedule performance and independent review of performance.

The benefits of managing the work at the GDPs as a program have long been recognized. In 1996, the National Academies, in its report to Congress, recognized GDP cleanup as having the characteristics of a program, noting that the repetitive and common design of the GDPs would allow for economies of scale in performing D&D.
The report recommended that DOE develop a GDP-wide program management plan that integrates the D&D of the facilities and environmental remediation activities, as previously mentioned. According to the National Academies report, coordinating efforts across the GDPs at the complex level would help to ensure that D&D is integrated at the three sites and that resources, including disbursements from the shared D&D Fund, would be used effectively. Moreover, the report noted that delays would lead to substantial expenditures for surveillance and maintenance; deterioration of the facilities would exacerbate these costs; risks to individuals would increase; and the costs for safeguards and security for the sites would continue. In December 2018, representatives from the National Academies told us that they continue to believe that managing the GDPs as an integrated program would benefit cleanup efforts. By taking steps to manage the three GDPs as an integrated program and following relevant program management leading practices (developing a program management plan; an integrated master schedule; and a reliable, integrated, comprehensive life-cycle cost estimate), EM would have more reasonable assurance that it is taking every opportunity to increase the efficiency and effectiveness of its management activities.

EM Estimates That Cleanup of All Three GDPs Will Not Be Completed Until 2070 at the Latest

EM estimates that cleanup of the Oak Ridge GDP is nearing completion, that Portsmouth will be completed by 2041, and that Paducah will be completed between 2065 and 2070. Cleanup of the three GDPs—primarily remediation efforts—began in the late 1980s, and EM estimates that cleanup of the last GDP, Paducah, will be completed by 2070 at the latest. As figure 3 shows, based on DOE’s estimates, cleanup from start to completion will take 33 years at Oak Ridge, 52 years at Portsmouth, and 77 to 82 years at Paducah. Each GDP site still has varying levels of cleanup work remaining, largely depending on when the site closed. For example, the majority of cleanup work began at Portsmouth and Paducah after the contractor operating the GDPs—USEC—returned the sites to DOE (in 2011 and 2014, respectively). The following provides a brief overview of the work remaining and estimated cleanup completion dates for each of the GDPs. See appendix II for a summary of the cleanup work completed as of June 2019.

Oak Ridge. At Oak Ridge, the work remaining includes cleaning up surface and groundwater contamination, remediating soils on approximately 800 acres, and conducting D&D on more than 130 remaining facilities. DOE reported in its 2019 triennial report that it intends to complete cleanup of the Oak Ridge GDP by fiscal year 2022. However, according to EM documentation and officials, EPA officials, and state regulators, EM is unlikely to complete the cleanup by this date. In information provided to us in 2018 and in documentation supporting its cost estimate, EM cited fiscal year 2024 as the completion date for the Oak Ridge cleanup. In addition, in March 2019, EM officials said that all facilities at the Oak Ridge GDP will be demolished by fiscal year 2020 and remediation activities will be completed by fiscal year 2024, stating that the fiscal year 2022 date in the 2019 triennial report is based on outdated data.
EPA and Tennessee regulators also told us they do not believe that EM’s current estimated completion date for the Oak Ridge GDP cleanup is realistic, based on their understanding of the scope of remaining work, particularly cleanup of groundwater contamination. They said it is more realistic that cleanup of the Oak Ridge GDP will not be completed until the late 2020s, and EPA believes cleanup completion could slip as far as the 2040s, due to the lack of an agreed approach to address contaminated groundwater. The completion date for the Oak Ridge GDP has slipped in the past. Oak Ridge was previously scheduled to be completed in fiscal year 2009 and then in fiscal year 2012.

Portsmouth. At Portsmouth, EM must complete D&D for three uranium enrichment processing buildings. Specifically, the first of three processing buildings is undergoing the final stages of deactivation, and the contractor is scheduled to begin demolition in fiscal year 2020. EM has started deactivation procedures at the second of the processing buildings, where EM is scheduled to start demolition in fiscal year 2024. At the third processing building, deactivation has yet to begin, and EM estimates the building will be ready for demolition in fiscal year 2031. In addition, EM must conduct D&D on hundreds of other support buildings and facilities. EM also plans to continue to remediate groundwater plumes at Portsmouth and to complete construction of an onsite waste disposal facility, which is scheduled to be operational by fiscal year 2020. According to the 2019 triennial report, cleanup of the Portsmouth GDP will be completed in 2041 based on scope and funding projections. However, in June 2019, EM officials told us that the Portsmouth cleanup will more likely be completed in 2043.

Paducah. At Paducah, EM is focusing its near-term cleanup efforts on D&D of the C-400 building—a building that was used to clean machinery parts and test equipment and has been identified as the primary source of groundwater contamination at the site. After demolishing this building, EM plans to dig up the slabs underneath it to remove the contaminants that EM believes are the source of the groundwater contamination, according to EM officials. According to EPA, EM is also focusing its near-term cleanup efforts on other activities, such as stabilization and deactivation of uranium enrichment and support buildings across the GDP, infrastructure optimization activities (including railroad upgrades for safe waste transport and downsizing the electrical power grid network), and new facility construction. According to an EM document and officials, deactivation of the processing buildings began in 2014, after USEC returned the site to DOE. In addition to the process buildings, EM will also need to conduct D&D on hundreds of other buildings and facilities. Further, according to EM officials, EM has yet to decide whether the waste produced from the GDP cleanup will be shipped offsite or whether EM will construct an onsite waste facility. EM estimates the cleanup of the Paducah GDP will be completed between fiscal years 2065 and 2070. The completion date for the Paducah GDP has slipped in the past. Paducah was previously scheduled to be completed in fiscal year 2040, and then in fiscal year 2047.
EM’s Past Expenditure Data Are Limited, and Its Future Cost Estimates Are Unreliable

EM reported it has spent at least $15.5 billion on GDP cleanup as of 2018, including approximately $5.1 billion on the Oak Ridge cleanup, approximately $6.7 billion on the Portsmouth cleanup, and approximately $3.7 billion on the Paducah cleanup. However, EM has limited detailed expenditure information on the cleanup activities carried out at the GDPs. Moreover, EM’s cost estimates for completing cleanup at the three GDPs are not reliable because they do not fully or substantially meet all of the characteristics of a high-quality, reliable cost estimate as described in our Cost Estimating Guide.

EM Reported It Has Spent at Least $15.5 Billion on Cleanup of the Three GDPs as of Fiscal Year 2018 but Has Limited Detailed Expenditure Data

Portsmouth. At Portsmouth, EM has also transferred natural uranium to site contractors in exchange for cleanup services—a practice EM refers to as “barter.” According to data provided by EM officials in 2018, from December 2009 through March 2018, EM transferred uranium valued at about $1.4 billion. According to an EM official, EM has used this transfer process exclusively at Portsmouth (see sidebar). Among other sources, the Non-Defense Environmental Cleanup Appropriation Account supplied over $1.2 billion in cleanup funding at Portsmouth for activities such as the operation of the depleted uranium hexafluoride conversion facility.

Sidebar: Efforts to Supplement the Decontamination and Decommissioning Fund: Transfer of Natural Uranium for Cleanup

As we reported in September 2011, from 2009 through 2011, the Department of Energy (DOE) used 1,473 metric tons of natural uranium to pay for $194 million in cleanup services performed by a contractor—the United States Enrichment Corporation (USEC)—at the Portsmouth gaseous diffusion plant (GDP). USEC then sold the natural uranium and retained the proceeds. The cleanup services provided by USEC included removing chemical and hazardous material from the GDP. DOE has in the past referred to this practice as “barter.” We found in our September 2011 report that DOE mischaracterized certain transactions with USEC as barters. From December 2009 through March 2011, DOE’s uranium transactions with USEC were sales authorized by the USEC Privatization Act, but they did not comply with federal fiscal law. The USEC Privatization Act requires that before a uranium sale, DOE must determine that: the materials are surplus to national security needs; the department is receiving fair market value; and the sales will not adversely affect the domestic uranium mining, conversion, and enrichment industries. We found that DOE met these requirements. Nevertheless, by not depositing the value of the net proceeds from the sales of uranium into the Treasury, we found that DOE violated the miscellaneous receipts statute. This statute requires an official or agent of the government receiving money from any source on the government's behalf to deposit the money into the Treasury. By not depositing an amount equal to the value of the uranium into the Treasury, DOE inappropriately circumvented the power of the purse granted to Congress under the Constitution. DOE disagreed that its actions did not comply with federal fiscal law. We suggested that Congress consider authorizing DOE to, among other things, retain the proceeds of future uranium transactions. Pursuant to direction from Congress, in March 2018, DOE suspended this practice through fiscal year 2019. In its fiscal year 2020 budget request, DOE indicated that it would resume this practice to help pay for cleanup at Portsmouth.

Paducah.
EM also reports that it has spent about $3.7 billion on the Paducah cleanup as of 2018. Similar to the Oak Ridge and Portsmouth GDPs, the D&D Fund paid for the majority of the cleanup costs at the Paducah GDP—approximately $2.7 billion. The remaining $1 billion in cleanup expenditures was funded by the aforementioned appropriation accounts, including $138 million from the Defense Environmental Cleanup Appropriation Account for activities such as security and safeguards.

EM tracks annual expenditures for cleanup activities at each GDP site in STARS, according to EM officials. However, EM does not track detailed expenditure information by GDP site on specific cleanup activities—such as remediation, waste management, or surveillance and maintenance—in that system. For example, EM officials provided data from STARS indicating that EM spent about $262 million on D&D at the Oak Ridge GDP in fiscal year 2007, but officials could not provide a breakdown of what specific cleanup activities the funds were used for, such as remediation or waste management. EM headquarters and site officials explained that they do not track detailed expenditure information on GDP cleanup activities in STARS because they are not required to do so.

EM has previously provided a detailed breakdown of expenditures. For example, in our July 2004 report, in addition to expenditures on D&D, EM provided expenditures for the following categories: remedial actions, surveillance and maintenance, uranium and thorium reimbursements, waste management, and other activities. In addition, DOE’s 2007 triennial report has an appendix on GDP future costs that provided a similar breakout. However, EM officials could not provide current expenditure information similar to these prior reports. EM site officials told us that EM tracks more detailed expenditure data on certain categories by project, including demolition activities, and that these data were available in various project management systems maintained across the three sites. However, according to these officials, the various project management systems do not consistently track expenditures across the three GDP sites. EM headquarters officials stated that EM tracks more detailed expenditure data centrally in IPABS. However, in February 2019, we reported that the earned value management data in IPABS, which contain the expenditure data, were unreliable.

Detailed expenditure data are important for developing reliable cost estimates, according to our Cost Estimating Guide. The Cost Estimating Guide states that it is always better to use actual costs rather than estimates as data sources, since actual costs represent the most accurate data available. EM officials told us that they used expenditure data at Oak Ridge, supplemented by other information, to help develop cost estimates at Portsmouth and Paducah. However, according to EM officials, EM does not track detailed expenditure data consistently across the three GDPs; therefore, its ability to develop accurate and informed cost estimates for future work at the three GDP sites is limited. By tracking consistent and detailed expenditure information on cleanup activities across the GDPs, EM management would be better able to develop reliable cost estimates to plan for future work. A simple sketch of what such consistent tracking could look like follows.
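To illustrate, the sketch below shows one possible record layout (hypothetical field names and structure, not EM’s STARS or IPABS) that posts expenditures by site and fiscal year against a single shared list of activity categories, using the categories EM reported for our July 2004 report:

    # A single shared category list enforces consistent tracking across sites.
    # Category names follow the breakdown EM provided for our July 2004 report;
    # the layout and the sample amount below are illustrative placeholders.
    CATEGORIES = {
        "d_and_d",
        "remedial_actions",
        "surveillance_and_maintenance",
        "uranium_and_thorium_reimbursements",
        "waste_management",
        "other",
    }
    SITES = {"oak_ridge", "portsmouth", "paducah"}

    # ledger[(site, fiscal_year, category)] -> dollars
    ledger = {}

    def record(site, fiscal_year, category, amount):
        """Post an expenditure, rejecting categories outside the shared list."""
        if site not in SITES or category not in CATEGORIES:
            raise ValueError(f"unknown site or category: {site}, {category}")
        key = (site, fiscal_year, category)
        ledger[key] = ledger.get(key, 0) + amount

    # Placeholder entry, not actual data.
    record("oak_ridge", 2007, "d_and_d", 10_000_000)

    # Cross-site comparison for one category and year becomes a simple query.
    fy2007_dd = {s: ledger.get((s, 2007, "d_and_d"), 0) for s in SITES}

Because every site posts against the same category list, activity-level totals can be compared across the GDPs and used as actual-cost inputs to future estimates, the consistency that EM’s separate project management systems currently lack.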
EM’s Cost Estimates for Completing Cleanup of the Three GDPs Are Not Reliable

EM’s cost estimates for cleanup of the three GDPs (about $28 billion to $30 billion, according to DOE’s 2019 triennial report to Congress) are not reliable and likely underestimate the future cleanup costs. EM has developed individual cost estimates for each of the three GDPs over time and has presented those cost estimates in the triennial reports to Congress. EM prepared the latest cost estimate for Oak Ridge in 2013, for Portsmouth in 2014, and for Paducah in 2017. We assessed EM’s cost estimates for the three GDPs individually by comparing them with the best practices identified in our Cost Estimating Guide. The guide outlines best practices for developing a high-quality, reliable cost estimate and identifies four characteristics of such an estimate: comprehensive, well-documented, accurate, and credible (see fig. 5 for a depiction of the four characteristics and some of the best practices that underlie them). A cost estimate is considered reliable if the assessment of each of the four characteristics is substantially or fully met. If any of the characteristics is not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable.

We found that the Portsmouth and Paducah cost estimates fully or substantially met some of the characteristics of a reliable cost estimate, but none of the three cost estimates fully or substantially met all of the characteristics, so EM’s cost estimates for completing cleanup of the three GDPs are not reliable. Specifically, EM’s cost estimate for Portsmouth fully met the comprehensive characteristic and substantially met the well-documented and accurate characteristics. EM’s cost estimate for Paducah fully met the accurate characteristic and substantially met the comprehensive characteristic. However, in all other instances, the cost estimates partially or minimally met the characteristics, with Oak Ridge obtaining the lowest scores. Figure 6 provides a summary of our assessment of the cost estimates for Oak Ridge, Portsmouth, and Paducah for each characteristic. Appendix III provides additional information on our assessment. We also found that the cost estimates likely underestimate the cleanup costs because of challenges in reaching consensus on cleanup decisions with regulators that we discuss later in this report.

In commenting on our assessment of the GDPs’ cost estimates, EM officials stated that they disagreed with our findings. According to EM officials, the cost estimates for the three GDPs have been audited numerous times and contain thousands of pages of support. Officials also questioned how the cost estimate for Oak Ridge scored the lowest of the three sites when the documentation supporting that cost estimate was prepared by the same contractor that prepared the Paducah cost estimate using the same processes, practices, and procedures. We use the same criteria—our Cost Estimating Guide—to assess cost estimates throughout the federal government, and we follow the same process for assessing cost estimates. As we do for all agencies, we provided EM the opportunity to review the detailed analysis that we prepared as part of our assessment and the opportunity to provide additional documentation that may fill gaps identified in that assessment.
While EM had documentation for the Paducah GDP cost estimate, which included a project life-cycle summary schedule and life-cycle baseline work breakdown structure, EM did not include such documentation for the Oak Ridge GDP cost estimate. In addition, many of the documents EM officials provided to support the Oak Ridge cost estimate were more than 5 years older than the cost estimate itself, a point by which EM should have had actual expenditure data rather than proposed data to inform the estimate. Because these documents did not contain actual expenditure data, we determined they were out of date for Oak Ridge’s 2013 cost estimate. We met with EM officials a second time to discuss our assessment of the Oak Ridge GDP cost estimate, reviewed additional documents provided by officials, and modified the assessment to reflect that additional information. However, this information did not change our overall assessment. Until EM ensures the site-specific life-cycle cost estimates for the cleanup of each of the GDPs fully incorporate best practices for cost estimation, EM, DOE, regulators, and Congress will not have the information needed to understand the level of resources required to achieve cleanup of the GDPs.

EM Faces Estimated Cleanup Costs Exceeding the 2018 D&D Fund Balance by at Least $25 Billion and Challenges to the Sufficiency of the D&D Fund

Under EM’s current cost estimates, remaining GDP cleanup costs exceed the balance of the D&D Fund by at least $25 billion, and EM faces challenges that could affect cleanup progress and the sufficiency of the fund. According to EPA and state regulatory officials from Kentucky and Tennessee, negotiations with EM regarding various cleanup decisions have strained relations between EM and the regulators and present challenges that could affect GDP cleanup progress and put additional demands on the D&D Fund. Finally, EM’s reporting to Congress on the sufficiency of the D&D Fund is based on old data and is not always complete or clear, which presents challenges to Congress’s ability to be fully informed in taking actions to address the sufficiency of the fund.

EM’s Estimated Costs to Complete Cleanup of the GDPs Exceed the 2018 Balance of the D&D Fund by at Least $25 Billion

EM’s estimated costs of about $28 billion to $30 billion to complete cleanup of the GDPs—cited in DOE’s 2019 triennial report—exceed the $2.7 billion balance of the D&D Fund cited in a 2018 document agency officials provided. Most recently, in its 2019 triennial report, DOE stated that, as of September 2016, estimated cleanup costs exceeded the balance of the D&D Fund by $26.6 billion. DOE has therefore estimated that the D&D Fund will be exhausted by fiscal year 2020. Prior triennial reports have made similar estimates. However, according to EM data, this shortage is likely to be billions more. In 2017, EM prepared a revised cost estimate for Paducah, raising Paducah’s life-cycle cost estimate for completing cleanup from the $15 billion to $16 billion reflected in 2016 data to $34 billion. EM did not include this revision or note it in any way in the final 2019 triennial report provided to Congress. Based on this revision, which substitutes the $34 billion figure for the superseded $15 billion to $16 billion Paducah estimate, EM’s estimated costs would be about $47 billion to $48 billion to complete cleanup of the GDPs.

The sufficiency of the D&D Fund has been a long-standing issue. In July 2004, we reported that based on projected costs and revenues at the time, the D&D Fund would be insufficient to cover the cleanup activities at the three GDPs.
To better ensure that the fund would be sufficient to cover the projected costs for authorized activities, we recommended that Congress consider reauthorizing the fund for an additional 3 years—to 2010—and require DOE to reassess the fund’s sufficiency before it expired in 2007 to determine if further extensions would be necessary beyond 2010. In November 2007, the U.S. Senate Committee on Energy and Natural Resources held a hearing on a bill that would have reauthorized the fund and required DOE to continue to assess the fund’s sufficiency. Although the committee did not take further action on that bill, Congress has continued providing appropriations to the D&D Fund.

Negotiations with EPA and Regulators from Two States over Key Cleanup Decisions Present Challenges That Could Affect Cleanup Progress and Further Strain the Fund

According to EPA and state regulatory officials from Kentucky and Tennessee, negotiations with EM regarding key cleanup decisions have strained relations between EM and the regulators and present challenges to the GDP cleanup progress. If EM is unable to reach agreement with the regulators on its preferred outcomes, there will likely be further delays and increases in GDP cleanup costs. The EPA and state regulatory officials said that their negotiations over pending cleanup decisions have raised concerns regarding EM’s priorities, cleanup remedies, and cost estimates. Because both the Oak Ridge and Paducah GDPs are included on EPA’s National Priorities List, both sites are required to have a Federal Facility Agreement—an agreement that guides the cleanup process and establishes cleanup priorities and schedules with enforceable milestones as agreed to by EM, EPA, and state regulators. Disagreements among the parties at both the Oak Ridge and Paducah GDPs present challenges to EM’s assumptions regarding the acceptance of its preferred cleanup strategy and will likely lead to delays and increases in EM’s estimated cleanup costs if that strategy is not followed.

Disagreements over cleanup priorities. EPA and state regulatory officials disagree with EM’s cleanup priorities at Oak Ridge and Paducah. EM officials we interviewed told us their priority is characterizing, decontaminating, and demolishing buildings and facilities. EPA and state regulatory officials said that their priority is soil and groundwater remediation to address contamination. The Tennessee regulatory official said that the state agrees that the D&D of buildings is valuable and beneficial but that those operations must be followed by management and mitigation of soil and groundwater impacts. EPA officials also told us that EM needs to better balance D&D and remediation efforts by conducting more remediation activities. EM officials stated that at the Oak Ridge GDP, EM balances D&D with remediation activities, but they did not provide documentation about these efforts. The Tennessee regulatory officials added that EM has been reluctant to commit to milestones that regulators identify as a priority. In addition, EPA officials and the Kentucky state regulatory official said that EM reprioritizes the cleanup effort every few years. The Kentucky regulator added that this has led to delays in approving the site management plan. These issues have led to disputes and strained relations at the Paducah GDP. Specifically, per the terms of their Federal Facility Agreement, EM, EPA, and the Kentucky regulator must annually agree to a site management plan that establishes enforceable milestones.
However, the parties have not agreed to such a plan since 2015, and in its draft 2018 plan, EM changed its priorities from the 2015 plan by moving a number of enforceable milestones to non-enforceable planning dates. As of February 2019, these and other technical disputes between EM and EPA and state regulatory officials had delayed demolition of the C-400 building—the primary source of groundwater contamination at the Paducah site—by a year and led to cost increases. In commenting on a draft of this report, both DOE and EPA officials stated that disputes associated with the C-400 building demolition were resolved in a memorandum of agreement signed in August 2019.

Differences in preferred cleanup remedies at Oak Ridge. The Oak Ridge Federal Facility Agreement requires EM to reach agreement with the regulators on cleanup remedies. According to EM, EPA, and Tennessee regulatory officials we interviewed, EM and the regulators differ in their choice of preferred cleanup remedies at the Oak Ridge GDP, an issue subject to dispute under the Federal Facility Agreement. At Oak Ridge, EM officials we interviewed said that their cost estimate for all of the groundwater cleanup assumes that regulators will agree to a waiver for active cleanup across the site, relying on a cleanup remedy called monitored natural attenuation—allowing natural processes to decrease or “attenuate” concentrations of contaminants in the groundwater and monitoring that progress over time. EM officials acknowledged that they have not reached agreement with regulators on groundwater cleanup remedies. The officials noted that their proposed approach is based on their analysis of what remedies are cost effective, technically practicable, technically feasible, fully protective, and likely to be agreed upon by the state. EM officials also noted that their cost estimates are developed following federal standards that require EM to assume the lowest cost remedy if no remedy is more likely than another. However, DOE’s preferred cleanup remedy may not be accepted by regulators. EPA and Tennessee regulators told us that while they may agree to a waiver for specific areas at Oak Ridge, they would not agree to a “blanket” waiver covering the entire site. They added that they would prefer that EM more actively address contamination, for example, by installing a pump-and-treat system at Oak Ridge. Without the blanket waiver included in their cost estimate, EM officials said that cleanup would likely be delayed by several years, and costs would likely increase by as much as hundreds of millions of dollars. EM officials later said that they are not seeking a blanket waiver and do not believe a blanket waiver will be required for all groundwater remediation requirements, but rather that focused waivers may be necessary for certain areas that cannot be restored by available technology. Notably, in reviewing EM’s most recent cost estimate, we found that the estimate continues to assume a waiver for the entire site.

Concerns about EM’s cost estimation assumptions. EPA and the Kentucky and Tennessee state regulatory officials we interviewed told us that EM generally shares information under the terms of the Federal Facility Agreement. However, the officials said they were concerned that the assumptions behind EM’s cost estimates for GDP cleanup are not transparent and that EM has not worked with them to develop the estimates.
EPA officials told us that EM does not adequately or transparently include EPA on technical scope and cleanup schedule considerations that underlie EM’s cost estimates. Tennessee regulatory officials added that EM’s cost estimates do not reflect the state’s assumptions about the technical scope and schedules for the remedies for soil and groundwater remediation. In commenting on a draft of this report, DOE officials stated that estimates for the Oak Ridge GDP reflect the technical scope and schedules to accomplish the end state remedies that the Tennessee regulator has agreed to for soil remediation. The officials added that they are working with the regulator on the remedy for groundwater remediation. Similarly, at the Paducah GDP, the Kentucky state regulatory official expressed concern that EM’s cost estimates were unrealistic—especially EM’s assumption that Paducah would receive over $1 billion in funding (in escalated dollars) for most years starting in 2036 and ending in 2050. Total enacted appropriations for Paducah in fiscal year 2019 were about $274 million; EM’s assumption would thus constitute a significant increase in Paducah’s funding, to more than triple the fiscal year 2019 enacted level. Without these increased funding levels, Paducah’s cleanup would likely extend beyond the 2065 to 2070 time frame, and EM’s estimates for completion and cleanup costs would likely increase. EM site officials at Oak Ridge disagreed that they have not been transparent with EPA and Tennessee state regulators, emphasizing that they have complied with all Federal Facility Agreement requirements regarding regulator participation in the budget process.

At Paducah, the challenges between EM, EPA, and the Kentucky regulator are not new. In April 2004, we reported that EM, EPA, and the Kentucky regulator had difficulty agreeing on an overall cleanup approach as well as on the details of specific projects. Further, we found that over time, these disagreements had undermined trust and damaged the parties’ working relationship. We recommended that EM involve EPA and the Kentucky regulator early in the development of the annual site management plan and specific projects—before submitting formal cleanup proposals for regulatory approval—so that the parties could identify and resolve their concerns and reach consensus on cleanup decisions in a more timely manner. EM stated at the time that it believed it had been successful in fostering constructive relationships with its regulators and that it intended to involve regulators early in the decision-making process. In commenting on a draft of this report, DOE officials stated that every year DOE conducts scoping meetings with EPA and the Kentucky regulator to establish the strategy, planning schedules, and milestones for the annual site management plan prior to it being transmitted to the regulators in November.
According to a September 2012 Memorandum on Environmental Collaboration and Conflict Resolution issued by OMB and the Council on Environmental Quality, departments and agencies should “increase the appropriate and effective use of third-party assisted environmental collaboration as well as environmental conflict resolution to resolve problems and conflicts that arise in the context of environmental, public lands, or natural resource issues, including matters related to energy, transportation, and water and land management.” Pursuant to the memorandum’s annual reporting requirement, DOE’s draft annual report from March 2018 presents information on the department’s use of third parties and other collaborative problem-solving approaches in fiscal year 2017. In that report, DOE cites the benefits of integrating third-party facilitation into DOE site and program office projects, including expanded and clearer communication that leads to smoother relationships with the regulators and the public. EM officials told us that they, in conjunction with the regulators, have used outside facilitators to help scope site management plans, work plans, and other project documents over the past few years. They said that they have engaged the services of a facilitator at Paducah on two significant efforts, and in both cases the facilitator added value and was effective. In addition, Tennessee state regulatory officials told us that they have used a mediator with EM at the Oak Ridge GDP site in the past, and they believe the process had a positive result. However, EM is not currently engaging the services of a facilitator at the three GDP sites to help the parties address differences in setting priorities, agreeing on remedies, and ensuring the cost estimates reflect regulator assumptions. By working with an independent, third-party facilitator to help resolve disagreements over cleanup priorities, cleanup remedies, and cost estimation assumptions, EM would be in a better position to achieve stakeholder concurrence on these issues and avoid future cleanup delays.

Limitations in EM’s Reporting to Congress Present Challenges to Congress’s Ability to Take Actions to Address the Sufficiency of the D&D Fund

EM’s reporting to Congress on the sufficiency of the D&D Fund is based on old data, incomplete information, and unclear scope, presenting challenges to Congress’s ability to be fully informed in taking actions to address the sufficiency of the fund. The Energy Policy Act, as amended, required the Secretary of Energy to report within 3 years of enactment, and at least once every 3 years thereafter, on the progress of the GDP cleanup effort. DOE has continued to prepare triennial reports on the status of the D&D Fund and GDP cleanup for Congress. However, DOE’s 2019 triennial report is based on outdated information, provides limited information on the challenges EM faces in reaching agreement with EPA and state regulators, and is not clear on the scope of work. These limitations reduce the quality of the information Congress receives for making decisions about allocating resources to the D&D Fund at a time when Congress will have to address a continued need for resources for GDP cleanup, given that the fund is estimated to be exhausted by 2020.

The 2019 triennial report is based on outdated information. The latest triennial report, issued in May 2019, is based on financial information as of September 2016 and on cost estimates prepared in 2013 (Oak Ridge) and 2014 (Portsmouth and Paducah).
In addition, the report does not contain information on an updated cost estimate for the Paducah site. Specifically, for Paducah, the report cites a cost estimate—prepared in 2014—of $15 billion to $16 billion and a completion date of 2047. However, EM prepared a revised cost estimate in 2018 that put costs at $34 billion and completion dates ranging from 2065 to 2070. EM had initially included information from this 2018 estimate in a draft of the 2019 triennial report, but ultimately did not include this information or note it in any way in the final report provided to Congress. EM headquarters officials told us that they did not include the updated 2018 Paducah cost estimate in the final 2019 report because they had already completed an extensive field and headquarters review process of the 2019 triennial report and did not want to repeat that process.

The 2019 triennial report does not discuss the challenges EM faces in reaching agreement with EPA and state regulators. The 2019 triennial report has a section on challenges and uncertainties for each GDP. For the Oak Ridge and Paducah GDPs, this section does not discuss the challenges EM faces in reaching agreement with regulators on cleanup remediation decisions. For example, the Oak Ridge challenges and uncertainties section of the 2019 triennial report mentions that some groundwater treatment may be required, but the report does not disclose EM’s assumption in its cost estimate that it will receive a waiver allowing it to avoid active groundwater remediation activities or that this is an area of disagreement with the regulators. Similarly, the report’s discussion of challenges and uncertainties at Paducah mentions that several CERCLA decisions regarding groundwater need to be made, but it does not discuss disagreements with the regulators over priorities or the implications of those decisions for cost or schedule.

Information in triennial reports is not always clear on scope of work. Some information in the triennial reports has not always been clear. For example, when reporting its cost estimates in its three most recent triennial reports (2010, 2016, and 2019), DOE reports only future costs for Oak Ridge, whereas for Portsmouth and Paducah it reports either total costs (past plus future estimated costs) or future costs, or does not clearly indicate if the cost estimate represents total or future costs. These differences make it difficult to make comparisons among the three GDPs. In addition, in six triennial reports, DOE reported similar estimated future costs for completing the Oak Ridge GDP cleanup—$1.2 billion in the 1998 report; $1.3 billion in 2001; $1.6 billion in 2007; $2.1 billion in 2010; $1.4 billion in 2016; and $950 million in 2019. Estimated costs to complete cleanup would likely be reduced over time as work scope is completed, unless the scope of work is increasing, costs for materials are increasing, or prior estimates were incorrect; however, DOE has not clearly explained the factors contributing to these similar future cost estimates in any of its reports since 2007 (2007, 2010, 2016, 2019).

Standards for Internal Control in the Federal Government state that management should externally communicate the necessary quality information to achieve the entity’s objectives. Quality information is appropriate, current, complete, accurate, accessible, and provided on a timely basis.
Given that DOE estimates the D&D Fund will be exhausted in 2020, there is an urgency for DOE to communicate current and accurate information on the fund on a timely basis to Congress. By regularly reporting on the status of the D&D Fund and cleanup efforts at the three GDPs with current information that contains details on challenges in reaching agreement with regulators and a clear scope of work, DOE will be able to provide better information for congressional decision-making on the sufficiency of the fund.

Conclusions

EM has made progress in cleaning up DOE’s three former GDPs—particularly at Oak Ridge, where contractors have demolished all five uranium enrichment processing buildings measuring a combined 114 acres as well as most other supporting buildings and facilities—but future work remains. Although DOE has stated its intent to manage cleanup of the GDPs in an integrated manner, EM is not managing the cleanup as an integrated program, even though cleanup of the GDPs meets the definition of a program as defined by PMI and Congress established a single, shared D&D Fund to pay for the cleanup. By taking steps to manage the three GDPs as an integrated program and following relevant program management leading practices we examined (developing a program management plan, an integrated master schedule, and a reliable, integrated, comprehensive life-cycle cost estimate), EM would have more reasonable assurance that it is taking every opportunity to increase the efficiency and effectiveness of its management activities.

Further, EM has limited expenditure data, and its cost estimates for completing cleanup are not reliable. Detailed expenditure data are important for developing reliable cost estimates. However, according to EM officials, EM does not track detailed expenditure data consistently across the three GDPs. As a result, EM’s ability to develop accurate and informed cost estimates for future work at the three GDP sites is limited. By tracking consistent and detailed expenditure information on cleanup activities across the three GDPs, EM management will be better able to develop reliable cost estimates to plan for future work. Moreover, EM does not have reliable cost estimates for completing cleanup of the three GDPs. Until EM ensures the site-specific life-cycle cost estimates for the cleanup of each of the GDPs fully incorporate best practices for cost estimation, EM, DOE, regulators, and Congress will not have the information needed to understand the level of resources required to achieve cleanup of the GDPs.

According to EPA and state regulatory officials from Kentucky and Tennessee, negotiations with EM regarding various cleanup decisions have strained relations between EM and regulators and present challenges to the GDP cleanup progress that will likely cause further delays and increase GDP cleanup costs if EM is unable to reach agreement on its preferred outcomes. EM officials said they have used third-party facilitators with the regulators in the past but are not currently engaging the services of a facilitator at the three GDP sites. By working with an independent, third-party facilitator to help resolve disagreements over cleanup priorities, cleanup remedies, and cost estimation assumptions, EM would be in a better position to achieve stakeholder concurrence on these issues and avoid future cleanup delays.
Finally, DOE’s 2019 triennial report is based on outdated information, provides limited information on the challenges EM faces in reaching agreement with EPA and state regulators, and is not clear on the scope of work, thereby reducing the quality of the information Congress receives about the sufficiency of the fund. Given that DOE estimates the fund will be exhausted in 2020, there is an urgency for the department to communicate current information on the fund on a timely basis to Congress. By regularly reporting on the status of the D&D Fund and cleanup efforts at the three GDPs with current information that contains details on challenges in reaching agreement with regulators and a clear scope of work, DOE will be able to provide better information for congressional decision-making on the sufficiency of the fund.

Recommendations for Executive Action

We are making five recommendations to DOE:

The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to take steps to manage the three GDPs as an integrated program and follow relevant program management leading practices (developing a GDP-wide program management plan; an integrated master schedule; and a reliable, integrated, comprehensive life-cycle cost estimate). (Recommendation 1)

The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to track consistent and detailed expenditure information on cleanup activities across the three GDPs. (Recommendation 2)

The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to ensure the site-specific life-cycle cost estimates for the cleanup of each of the GDPs fully incorporate best practices for cost estimation. (Recommendation 3)

The Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to work—in conjunction with EPA and Kentucky and Tennessee state regulators—with an independent, third-party facilitator to help resolve disagreements over cleanup priorities, cleanup remedies, and cost estimation assumptions. (Recommendation 4)

The Secretary of Energy should regularly report on the status of the D&D Fund and cleanup efforts at the three GDPs with current information that contains details on challenges in reaching agreement with regulators and a clear scope of work. (Recommendation 5)

Agency Comments and Our Evaluation

We provided a draft of this report to DOE and EPA for comment. In DOE’s comments, reproduced in appendix IV, the agency generally agreed with our findings and recommendations and described actions that DOE intends to take in response to our recommendations. Specifically, of our five recommendations, DOE concurred with four and partially concurred with one. DOE also provided technical comments, which we incorporated as appropriate. EPA did not provide written comments but provided technical comments, which we incorporated as appropriate.

DOE concurred with our first and second recommendations that the Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to (1) take steps to manage the three GDPs as an integrated program and follow relevant program management leading practices and (2) track consistent and detailed expenditure information on cleanup activities across the three GDPs.
In its response to the first recommendation, DOE stated that EM will develop a program management master plan, to include site integrated master schedules and life-cycle costs for the remaining cleanup at the Portsmouth and Paducah GDPs, and that the plan will incorporate program management leading practices as appropriate. In response to the second recommendation, DOE stated that EM will assess and identify an appropriate mechanism for tracking expenditures for both the Portsmouth and Paducah GDPs, using a standardized approach with an Earned Value Management System reporting on, at a minimum, an annual basis. We appreciate DOE’s commitment to improve cleanup at the Portsmouth and Paducah sites; however, we emphasize that these two recommendations are directed at all three GDPs, including the Oak Ridge GDP. We reported that DOE intends to complete cleanup of the Oak Ridge GDP by fiscal year 2022, but according to EM documentation we reviewed and EM officials we interviewed, as well as EPA officials and state regulators we interviewed, EM is unlikely to complete the cleanup by this date. EPA officials and Tennessee regulators stated that it is more realistic that cleanup of the Oak Ridge GDP will not be completed until the late 2020s, and EPA officials told us that cleanup may not be completed until the 2040s. Given the potential for Oak Ridge cleanup to continue for at least another decade, we continue to believe it is important that DOE include Oak Ridge in its implementation of these two recommendations.

DOE partially concurred with our third recommendation that the Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to ensure the site-specific life-cycle cost estimates for the cleanup of each of the GDPs fully incorporate best practices for cost estimation. DOE stated that EM will direct the Portsmouth and Paducah sites to review and incorporate practices from our Cost Estimating Guide, as appropriate, into the next revisions of each site’s life-cycle cost baselines. DOE also stated that the remaining scope for the Oak Ridge GDP will become part of the performance baseline for the next Oak Ridge contractor. We appreciate DOE’s commitment to improve cost estimation for the Portsmouth and Paducah GDPs. However, we continue to believe that improving cost estimation for the Oak Ridge GDP is also important, given that cleanup of Oak Ridge may continue for at least another decade, as described above. As such, we continue to believe it is important that DOE include Oak Ridge in implementing this recommendation.

DOE concurred with our fourth recommendation that the Secretary of Energy should direct the Assistant Secretary of the Office of Environmental Management to work—in conjunction with EPA and Kentucky and Tennessee state regulators—with an independent, third-party facilitator to help resolve disagreements over cleanup priorities, cleanup remedies, and cost estimation assumptions. DOE stated that as disagreements over cleanup priorities, remedies, and cost estimation assumptions arise, EM will work with all parties to determine the feasibility and benefits of using a facilitator on a case-by-case basis to help resolve issues. DOE also concurred with our fifth recommendation that the Secretary of Energy should regularly report on the status of the D&D Fund and cleanup efforts at the three GDPs with current information that contains details on challenges in reaching agreement with regulators and a clear scope of work.
DOE management stated that EM will produce its next triennial Uranium Enrichment Decontamination and Decommissioning Fund Report following closeout of fiscal year 2019 and release of the most recent environmental liability estimate associated with the remaining challenges and scope of cleanup at the GDPs.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of EPA, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to the report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

Our report examined: (1) the extent to which the Department of Energy’s (DOE) Office of Environmental Management (EM) has managed cleanup of the three gaseous diffusion plants (GDP) compared with relevant program management leading practices and the status of the cleanup effort; (2) what EM has spent on cleanup at the three GDPs and the extent to which EM’s cost estimates for completing GDP cleanup are reliable; and (3) the extent to which the Decontamination and Decommissioning (D&D) Fund is sufficient to cover EM’s estimated cleanup costs of the GDPs and challenges, if any, that could affect the sufficiency of the D&D Fund.

To inform all three objectives, we reviewed the Energy Policy Act of 1992, as amended; DOE triennial reports to Congress on GDP cleanup efforts; and prior reports issued by us, DOE’s Office of Inspector General (both performance audits and financial statement audits on the D&D Fund), and the National Academies of Sciences, Engineering, and Medicine (National Academies). We also interviewed officials from DOE’s Office of Inspector General and the Environmental Protection Agency (EPA), as well as representatives of the National Academies, regarding their knowledge of EM’s cleanup progress at the GDPs and any past, ongoing, or future work they have conducted or are planning on the GDP cleanup. We visited all three GDP sites to observe the cleanup work and meet with EM officials responsible for the cleanup, representatives of the DOE contractor responsible for D&D activities, state regulators working with EM on environmental compliance activities (from Kentucky, Ohio, and Tennessee), members of GDP site-specific advisory boards, and representatives of community reuse organizations. During our interviews, we discussed topics including funding for the GDP cleanup, cleanup progress to date, and any challenges facing the cleanup effort. We selected these interviewees because we determined, based on input from EM officials, that they would be the most knowledgeable about GDP cleanup status, funding, and challenges. Following these interviews, we conducted a content analysis of all responses to our interview questions to determine any key challenges that EM faces in completing cleanup of the GDPs. We then grouped, coded, and verified the content in our analysis and performed second-rater review. Through our content analysis, we found that stakeholders primarily cited three key challenges, related to EM’s program management; relations between EM, EPA, and state regulators; and transitioning the local communities to cleanup completion.
To examine the extent to which EM has managed the cleanup of the GDPs compared with relevant leading practices for program management, and the status of the cleanup effort, we reviewed documents, including site-specific GDP cleanup plans and GDP cleanup progress briefings, as well as reports issued by the National Academies, us, and DOE. We interviewed EM officials and contractor representatives on their past, present, and future plans for cleanup. We also interviewed EPA and state regulatory agency representatives at each of the GDPs regarding their role in the cleanup and interactions with EM. We assessed the information from these reviews and all interviews (content analysis from interview responses) and identified the relevant program management leading practices that aligned with the assessed information. We identified the three program management leading practices by reviewing our prior work and the Project Management Institute's (PMI) The Standard for Program Management—Fourth Edition. The three leading practices were having (1) a program management plan, (2) an integrated master schedule, and (3) a reliable, integrated, comprehensive life-cycle cost estimate. We compared EM's management of the GDPs with these leading practices.

Specifically, during our interviews with EM, the DOE Office of Inspector General, and EPA officials; Kentucky, Ohio, and Tennessee regulators; representatives of the National Academies; and members of the site-specific advisory board from all three sites, we asked about challenges EM faces in completing cleanup of the three GDP sites. As discussed above, we conducted a content analysis of their responses to our interviews and found that stakeholders primarily cited three key challenges, including EM's poor program management. Under poor program management, stakeholders cited three sub-challenges: (1) frequent changes in EM's cleanup priorities and staff turnover, which most closely aligns with the program planning leading practice; (2) lack of integrated schedules across the GDPs, which most closely aligns with the scheduling leading practice; and (3) lack of transparency in EM's cost estimation processes, which most closely aligns with the program cost estimating leading practice. As a result, we assessed the three leading practices that aligned with those issues: (1) program management plan, (2) integrated master schedule, and (3) integrated comprehensive life-cycle cost estimate.

To examine the status of cleanup at the GDPs, we reviewed EM's documentation of the work completed and the work remaining at each GDP.

To examine what EM has spent on cleanup at the three GDP sites, and the extent to which EM's cost estimates for completing GDP cleanup are reliable, we reviewed historical funding and cleanup expenditure data for all three sites for the period from fiscal year 1994 through 2018 and analyzed EM documentation supporting its cost estimates for each of the three GDPs. The data the sites provided include expenditures from the D&D Fund as well as from other funding sources, including the American Recovery and Reinvestment Act, Uranium Facilities Maintenance and Remediation funds, Environmental Management Waste Management Facility funds, and Technetium-99 cleanup funds. We reviewed financial statement audit reports issued on the D&D Fund for fiscal years 2005 to 2012 and met with relevant headquarters and field staff in financial management, budget, and planning.
In addition, we assessed the reliability of the historical funding and expenditure data provided by EM. Specifically, from EM officials familiar with DOE's financial management system, we obtained responses to a series of data reliability questions covering topics such as data entry access, quality control procedures, and the accuracy and completeness of the data. During our review of the GDP expenditure data, we identified a number of inconsistencies between the data received from EM site officials and the data reported in DOE's 2019 triennial report to Congress. EM officials were able to provide satisfactory responses and documentation to address the identified inconsistencies. We therefore found the data to be reliable for our purposes.

To examine the reliability of EM's cost estimates for completing cleanup at the three GDPs, we reviewed EM's cost estimate documentation, interviewed EM site officials, and compared GDP cost estimates against characteristics of reliable cost estimates contained in our Cost Estimating Guide. Our review included documents that established the basis and assumptions for site contractors' contributions to the cost estimate, documents that established the contractors' work breakdown structures, and presentations on contractors' cost estimating models. We interviewed EM site officials and contractor staff responsible for producing the cost estimates to understand the methods, assumptions, information, and data EM used to produce the estimates. Our cost estimation specialists assessed this information against the best practices for cost estimating found in our Cost Estimating Guide, which we developed to establish a consistent methodology that can be used across the federal government to develop, manage, and evaluate capital program cost estimates. We shared our draft assessment for each GDP cost estimate with EM officials and then revised those assessments based on EM's written comments and additional documentation they provided, as appropriate. At EM's request, we met with Oak Ridge officials a second time to discuss our assessment of the Oak Ridge GDP cost estimate and reviewed additional documents provided by officials, and we incorporated that additional information into our assessment of the Oak Ridge cost estimate.

To examine the extent to which the D&D Fund is sufficient to cover EM's estimated cleanup costs of the GDPs and challenges, if any, that could affect the sufficiency of the D&D Fund, we reviewed information on the balance of the D&D Fund and compared it to EM cost estimate information, past reports that describe the balance of the fund, and our prior report on the fund. Despite our findings that the three cost estimates were unreliable, we were able to report on the cost estimates provided in DOE's 2019 Triennial Report by presenting an "at least" cost estimate. In addition, we interviewed key stakeholders, including officials from EM, the DOE Office of Inspector General, and EPA; regulators from the states of Kentucky, Ohio, and Tennessee; representatives of the National Academies; and members of the site-specific advisory boards and representatives of the community reuse organizations from all three sites, regarding challenges EM faces in completing cleanup of the three GDP sites and challenges that could affect the sufficiency of the D&D Fund.
As noted above, we conducted a content analysis of their responses and found that stakeholders primarily cited three challenges that could affect cleanup progress and further strain the D&D Fund, including challenges with negotiations with EPA and state regulators. We also reviewed DOE's triennial reports from 1996 to 2019 and compared information included in each of these triennial reports to determine the extent to which the information provided was presented consistently across reports and consistent with other documentation provided, such as site-specific plans and DOE's cost estimates. We also interviewed DOE officials about the sufficiency of the D&D Fund and factors affecting the sufficiency of the fund.

We conducted this performance audit from April 2018 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Information on Cleanup Work Completed at the Department of Energy's Former Gaseous Diffusion Plants as of June 2019

This appendix provides information on cleanup work completed at the Department of Energy's (DOE) former gaseous diffusion plants (GDP) as of June 2019. DOE's Office of Environmental Management (EM) is responsible for their cleanup.

Oak Ridge

EM began cleanup at Oak Ridge in 1989 and Decontamination and Decommissioning (D&D) of the uranium enrichment process buildings in 1998. Since that time, EM has characterized the levels and types of contamination for most of the site and conducted D&D on all five uranium enrichment process buildings. EM has also demolished over 390 additional buildings and facilities, including a fire water tower and the Central Neutralization Facility that was used to treat the site's industrial wastewater. In addition, EM has remediated nearly 1,400 acres of contaminated soils and has used an onsite waste disposal facility to dispose of much of the waste generated from cleanup. Some specific cleanup work EM has completed at Oak Ridge includes:

• Removed slabs from two uranium enrichment process buildings and completed cleanup of contaminated soils beneath the slab, clearing the way for transition to industrial reuse.
• Excavated and disposed of approximately 100,000 cubic yards of contaminated materials from a burial ground.
• Remediated an area considered to be a primary source of organic contamination in area soils and groundwater and treated the resulting approximately 175 cubic meters of contaminated soil.
• Removed more than 48,000 tons of scrap metal from two scrap yards.

EPA and Tennessee state regulators agree that the end use for the site will be a commercial industrial park, and several businesses are already leasing portions of former GDP lands. In addition, more than 3,000 acres of the former GDP lands have been cleared for conservation and recreational use. EM has partnered with the Community Reuse Organization of East Tennessee to attract businesses to operate on the available lands. According to a representative of the Community Reuse Organization of East Tennessee, EM has transferred over 1,000 acres of land and 14 buildings to the reuse organization, which has in turn sold over 300,000 square feet to the private sector.
There are 20 private companies operating at the site.

Portsmouth

EM began cleanup at the Portsmouth GDP in 1989 and D&D of the uranium enrichment process buildings in 2011, after the contractor that operated the site, the United States Enrichment Corporation (USEC), returned the buildings to DOE in 2010. As of May 2019, EM is preparing the first of three uranium enrichment process buildings for demolition and is starting to characterize contamination in the second. EM is also conducting ongoing remediation activities and constructing an on-site waste disposal facility, where EM intends to dispose of D&D waste that meets the approved acceptance criteria of the disposal facility. Several site support facilities, including a large electric switchyard, have been demolished. Some specific cleanup work EM has completed at Portsmouth includes:

• Completed sampling and removal for off-site disposal of all 7,020 uranium enrichment components (converters, compressors, and coolers) from one of the uranium enrichment process buildings.
• Closed five on-site landfills covering 60 acres.
• Removed more than 37,000 pounds of trichloroethylene—a solvent for degreasing metal that contaminated the groundwater at the site—through groundwater remediation.

EM contractors at Portsmouth told us that they are cleaning up the site for future industrial use.

Paducah

EM began cleanup at the Paducah site in 1988. USEC officially returned the GDP to DOE in 2014, and according to an EM document and officials, deactivation of the uranium processing buildings began that same year. In January 2019, EM reached a milestone—deactivation of the C-400 building—by completing the cleanup of legacy materials in the building. C-400 was a cleaning facility used to clean machinery parts and test equipment and has been identified as the primary source of groundwater contamination at the site. According to EM officials, EM has primarily been using a pump-and-treat method to control the high concentration portion of the groundwater plumes at Paducah. EM officials stated that EM is focusing its cleanup efforts on D&D of the C-400 building and remediation from now until the early 2030s. According to EM officials, EM is continuing to treat large contamination plumes and demolish inactive facilities. Some specific cleanup work EM has completed at Paducah includes:

• Demolished and removed 43 inactive facilities, including a 210,000 square foot uranium hexafluoride feed plant and a 60,000 square foot metals plant.
• Treated over four billion gallons of contaminated groundwater from two operating pump-and-treat facilities and, as part of this treatment, removed approximately 3,700 gallons of trichloroethylene.
• Removed more than 850,000 cubic feet of low-level and mixed low-level legacy wastes and material storage area waste.
• Resurfaced 74 acres of roofs at the site and rerouted roof drains in order to reduce infiltration of water into the facilities.

Officials at Paducah told us that they are cleaning up the site for future industrial use.

Appendix III: Summary of GAO's Assessment of DOE's Cost Estimates for Cleanup of the GDPs Compared with Best Practices

Appendix IV: Comments from the Department of Energy

Appendix V: GAO Staff and Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Amanda K. Kolling, Assistant Director; Luqman Abdullah; Mark Braza; Jennifer Echard; Emile Ettedgui; Juan C. Garay; Mark Keenan; Jennifer Leotta; Gregory Marchand; Kiki Theodoropoulos; and Lauren Woodard made key contributions to this report. Also contributing to this report were Alexandra Edwards; Keegan Maguigan; Anne Stevens; and Doris Yanger.
Why GAO Did This Study

Cleaning up DOE's former uranium enrichment sites will cost billions of dollars and span decades. These sites, near Oak Ridge, Tennessee; Paducah, Kentucky; and Portsmouth, Ohio, are contaminated with radioactive and hazardous materials. EM is responsible for their cleanup. This report examines (1) the extent to which EM has managed cleanup of the GDPs compared with relevant program management leading practices and the status of the cleanup effort; (2) what EM has spent on cleanup at the GDPs, and the extent to which EM's cost estimates for completing GDP cleanup are reliable; and (3) the extent to which the D&D Fund is sufficient to cover EM's estimated cleanup costs of the GDPs and challenges, if any, that could affect the sufficiency of the fund. GAO reviewed relevant legislation and DOE reports to Congress on GDP cleanup; compared program management to relevant leading practices; assessed EM expenditure and cost estimation documents; and interviewed EM and state regulatory officials at the three GDPs.

What GAO Found

Since 2007, the Department of Energy (DOE) has stated in reports to Congress that it intends to manage its three former gaseous diffusion plants (GDP) in an integrated manner. Also, a Decontamination and Decommissioning (D&D) Fund was established by law to pay for the cleanup costs of the GDP sites, so DOE's Office of Environmental Management (EM) must coordinate and make trade-offs in its use of resources among the three GDPs. However, EM has managed the cleanup of the three GDPs as three individual sites. In addition, EM is not following relevant leading practices GAO reviewed for managing the cleanup as a program (having a program management plan; a reliable integrated master schedule; and a reliable, integrated, comprehensive life-cycle cost estimate). By managing the three GDPs as an integrated program and following these program management leading practices, EM would have more reasonable assurance that it is taking every opportunity to increase the efficiency and effectiveness of its management activities.

EM has reported spending a total of about $15.5 billion on GDP cleanup as of fiscal year 2018. However, EM's cost estimates for completing cleanup at the three sites are not reliable. GAO assessed EM's cost estimates for the GDPs individually by comparing them with best practices for developing high-quality, reliable cost estimates. EM's cost estimates for completing cleanup of the GDPs do not fully or substantially meet all of the characteristics of a reliable cost estimate. Until EM ensures that its site-specific cost estimates fully incorporate best practices for cost estimation, EM, DOE, regulators, and Congress will not have the information needed to understand the level of resources required to achieve cleanup of the three GDPs.

Under EM's current cost estimates, remaining GDP cleanup costs exceed the balance of the D&D Fund by at least $25 billion, and EM faces challenges that could affect cleanup progress and the sufficiency of the fund. For example, DOE's reporting to Congress on the sufficiency of the D&D Fund is based on old financial data, incomplete information, and unclear scope. These limitations reduce the quality of the information Congress receives for making decisions about the sufficiency of the fund and allocating resources to the fund. For example, DOE reported to Congress on the status of the D&D Fund and GDP cleanup in May 2019.
The report was based on financial data as of September 2016 and on cost estimates prepared in 2013 for one GDP and in 2014 for the other two. Given that DOE estimates the fund will be exhausted in 2020, there is urgency for DOE to communicate current information on the fund on a timely basis to Congress. By regularly reporting on the status of the D&D Fund and cleanup efforts at the three GDPs with current information that contains details on challenges in reaching agreement with regulators and a clear scope of work, DOE will be able to provide better information for congressional decision-making on the sufficiency of the fund.

What GAO Recommends

GAO is making five recommendations, including that DOE (1) manage the cleanup of the three GDPs as an integrated program and follow program management leading practices, (2) ensure cost estimates fully incorporate cost estimating best practices, and (3) report regularly on the status of the D&D Fund and cleanup efforts at the three GDPs. DOE agreed with four of them and partially agreed with one. GAO believes all of the recommendations should be implemented at all three sites.
Background

FAA's Efforts to Integrate UAS Operations into the National Airspace System

FAA is responsible for overseeing and authorizing any flight operations in the national airspace system for both manned and unmanned aircraft. FAA's UAS Integration Office, located in the Office of Aviation Safety, seeks to integrate UAS operations into the national airspace system while ensuring the safety of the public and integrity of the airspace. In July 2018, FAA released the 2018 UAS Integration Roadmap, a second edition of the agency's 5-year plan outlining its most current phased approach for integration, with each step toward full integration allowing UAS operations of increasing complexity. FAA's vision for fully integrating UAS into the national airspace system entails UAS operating safely and routinely—i.e., without requiring prior approval for UAS flights—in the same airspace as manned aircraft. While safety is FAA's paramount concern, the integration of UAS is important because of the potential economic benefits that progress in UAS integration could bring, including more investment in uses such as large passenger operations, as well as the potential safety benefits, such as more effective firefighting and other disaster response efforts.

Currently, FAA only allows certain routine UAS operations under specific conditions while authorizing other UAS operations on a case-by-case basis. For example, since August 2016, operators of small UAS—defined as those UAS weighing less than 55 pounds, including any attachments—who have obtained a remote pilot certificate have generally been allowed to operate without prior FAA approval in certain airspace during the day, under 400 feet, and not over people or beyond an operator's line of sight, among other requirements under FAA's Part 107 rule. Small UAS operators may seek a waiver of certain FAA operational requirements (referred to as a Part 107 waiver) from the agency on a case-by-case basis, such as a waiver that would allow an operator to fly drones above 400 feet. In contrast, no routine operations—meaning those that can occur without any prior authorization—are currently allowed for large UAS (55 pounds and over) for any purpose (see fig. 1 for examples of small and large UAS). Rather, operators of large UAS must seek authorization from FAA to fly the aircraft on a case-by-case basis, and the processes for accessing the airspace vary. More specifically, civil large UAS operators must, in most cases, obtain a Certificate of Waiver or Authorization (COA) that demonstrates FAA's approval of airspace access, and may also require approval for the aircraft itself. A COA allows any certificate holder to fly UAS outside of generally allowable operations, such as at certain altitudes, locations, or airspace classes (e.g., near airports). FAA grants this approval to an entity for a specific activity and time period, and sometimes for a specific make and model of UAS. Public entities—which include federal, state and local governments, public academic institutions, and law enforcement agencies—may apply for a COA in order to obtain authorized access to fly in the national airspace when conducting governmental operations, as defined by statute. In such cases, the COA allows the certificate holder to operate UAS in ways that would otherwise not comply with airspace requirements, such as operating the drone beyond the pilot's line of sight.
In its 2018 UAS Integration Roadmap, FAA outlined some key topics and operational capabilities to be researched that are associated with specific UAS integration phases (see fig. 2). For example, both government and industry entities have research and testing of technologies underway to provide UAS the capability to detect obstacles in midair, such as other aircraft, and automatically maneuver to avoid collision; this capability is commonly referred to as "detect and avoid." FAA officials have stated that this key capability is necessary before allowing certain UAS operations on a routine basis, such as flights beyond the operator's line of sight. According to FAA, the agency plans to use data from several UAS research programs—including the test site program—and from other sources to inform its future decisions regarding UAS integration.

FAA's Test Site Program

In 2012, FAA was required by statute to establish a program to integrate UAS into the national airspace system and to establish six UAS test sites in order to develop a process for allowing research to occur at these test sites, among other requirements. In response to Congress' mandate, in 2013 FAA selected six public entities to be designated as test sites based on a number of factors, including geography, climate, and the respective institutions' expertise, and added another entity in response to legislation in 2016 for a total of seven designated test sites. According to FAA officials, the test site program was intended to enable industry stakeholders to test complex UAS operations and conduct research on the corresponding technologies. Each test site is a public entity, such as a public academic institution or branch of the state government, which FAA authorizes to conduct various UAS operations through the COA process. UAS stakeholders, including manufacturers or entities seeking to use UAS for various purposes, can pay to work with any of the seven FAA-designated test sites to conduct test flights or receive training on UAS operations and regulations, among other activities, based on the test site staff's expertise. FAA has not directly funded the test sites' general operations, so the sites have had to rely on other funding sources, such as revenues generated from users, state funds, federal research grants, and commercial investment. Congress recently appropriated $6 million to FAA to provide matching funds to qualified commercial entities seeking to test UAS technologies at FAA-designated test sites.

FAA manages the test site program using formal agreements and by providing support to test site staff. The test sites signed individual Other Transaction Agreements (OTA) with FAA that establish their agreement to meet specific requirements aimed to support FAA's UAS integration efforts. For example, these agreements lay out that test sites must follow safety processes and data procedures, as well as provide certain deliverables to FAA. Specifically, the agreements outline that the test sites will provide FAA certain operations and safety-related data for specific test flights, which FAA stores in a database it created specifically for test site data. In addition, once the test sites were operational, FAA designated an official to serve as the test site program manager for all seven sites who, among other duties, facilitates regular meetings with test site representatives to discuss ongoing issues and regularly communicates with other FAA lines of business to keep them informed about key efforts underway at test sites.
UAS Flight Testing through Test Sites

According to FAA, the designated test sites have the equipment and infrastructure to support UAS flight testing, such as UAS pilots, launch pads, command centers, and, if required, chase aircraft (see fig. 3). Test site staff can facilitate UAS flight operations under a test site's COA or by complying with the Part 107 rule. Since 2015, the test sites have held a "blanket" COA that allows them to conduct government functions for small UAS in Class G (uncontrolled) airspace anywhere in the United States except within restricted or prohibited areas. In addition, test sites have applied for and been granted COAs to operate UAS of different sizes in locations (referred to as "test ranges") outside their state, and in a variety of airspaces at various elevations (see fig. 4 for a sample of test site COAs). For example, as of October 2019, the Alaska test site had COAs for test ranges in many states including Alaska, Hawaii, Tennessee, and Oregon—one of which allows operations up to 15,000 feet above mean sea level within three classes of airspace around Pendleton, Oregon. Some test ranges are located at airports, such as Griffiss International Airport in New York, which can help facilitate the testing of UAS that may require runways for take-off and landing, as well as testing of UAS flying in areas with manned aircraft.

However, stakeholders, such as UAS manufacturers or companies interested in using UAS for various purposes, are not required to use an FAA-designated test site for UAS flight testing. In addition to seeking authorization directly from FAA to conduct their own flights or flying according to current rules such as Part 107, UAS stakeholders can work with other entities—such as military airports, public academic institutions, or other public test sites—to which FAA has granted COAs to conduct complex UAS operations. For many stakeholders, however, working with a designated test site may provide quicker access to testing than seeking their own authorization from FAA. For example, a UAS manufacturer might work with a test site to test the company's UAS prototype at a certain elevation under a test site's existing COA (following all applicable COA guidelines, such as performing a government function with the operation) because the test site already had that authorization in place. Additionally, it may be beneficial for a UAS manufacturer or operator to work with a test site because the test site has experience in obtaining authorizations or waivers from FAA for similar types of operations or aircraft.

Test Sites Have Facilitated Thousands of UAS Test Flights for a Wide Range of Research and Activities

Test Sites Have Facilitated about 15,000 UAS Test Flights

According to FAA's MLS data, the test sites facilitated about 15,000 total UAS test flights occurring under test site COAs from April 2015 through December 2018 (see table 1). However, according to test site representatives, staff at these sites facilitated more UAS flights during this time frame than is reflected in the MLS data, because additional flights were conducted using different allowances than COAs, such as under the Part 107 rule that allows certain routine small UAS operations. According to FAA officials, the decrease—starting in 2017—in the annual number of reported test flights by the test sites, as reflected in table 1 above, is due in part to a change in regulations.
Specifically, when FAA's Part 107 rule took effect in August 2016, it provided a new avenue for small UAS operators, including test site staff and other airspace users, to test certain small UAS operations without requiring a COA or other authorization, effectively reducing the number of test flights logged into FAA's MLS. Agency officials also told us that Part 107 changed the type of research users request from the test sites, which may have reduced the number of test flights facilitated through the test sites. While there have been fewer flight tests, according to some test site representatives and users we spoke to, recent testing has been for more complex research. For example, one test site representative stated that now the site's users have bigger, more extensive research projects involving more tasks than just test flights, such as developing the operational models, performing testing on various technologies, and installing equipment to support complex UAS operations.

Test Sites Have Supported UAS Stakeholders in Conducting Research in Preparation for Varied UAS Activities, from Inspecting Utilities to Carrying Passengers

Research conducted at the test sites has provided information to FAA that, according to agency officials, supports its efforts to integrate UAS into the national airspace system. Test site representatives told us that they have supported over 440 public and private users to conduct research and development on UAS to be used for a variety of UAS activities. While FAA officials told us that they cannot direct specific types of research to be conducted at the test sites unless the agency funds that research, we found that users have nevertheless conducted UAS research and development activities that FAA has identified as important for UAS integration. For example, users conducted research on the safety risks of UAS, such as concussion collision studies, and have tested UAS capabilities, such as the ability to carry loads of varying weights. Also, based on our analysis, we found that users have tested UAS technologies at the test sites that align with some of the key capabilities identified by FAA as necessary for the upcoming phases of UAS integration (see table 2).

Test site users also reported benefits from working with test sites. According to the users we interviewed, the test sites have provided them an opportunity to explore and improve UAS technologies, and to learn more about how they could use UAS for their own purposes in the national airspace. For example, one user of the New York test site had tested communication equipment and detect-and-avoid capabilities on large UAS that they manufacture and sell to other entities for conducting surveillance activities, such as drug interdiction. Many of the test site users (11 of 18) we spoke to stated that using a test site provided a significant benefit for advancing their entity's UAS research and development efforts. In addition, according to 9 of the 18 users we spoke to, test sites provided them with direct and immediate access to tools that helped them test their technologies. For example, users stated that it was beneficial that test sites have specific authorities from FAA for certain types of testing under a COA as well as infrastructure to allow for advanced UAS research.
Some activities the test site users we spoke to plan to conduct with UAS are already regularly occurring—meaning FAA either allows these to occur on a routine basis or has allowed them to occur through additional authorization on a regular basis. Others are not yet occurring on a regular or routine basis, due either to legal restrictions, such as restrictions on operating UAS beyond the operator's visual line of sight, or to needed technological advancements, but FAA expects them to occur routinely in the future (see table 3).

Some users we spoke to have also worked with a test site to conduct extensive hazard and risk mitigation testing to build safety cases and get approval from FAA to conduct complex UAS operations. FAA generally requires safety cases when a user is seeking approval to deviate significantly from current UAS requirements, such as when seeking to conduct beyond-visual-line-of-sight operations using a small UAS. For example, according to representatives from an insurance company we spoke to, they worked with the Virginia test site for over a year to build a safety case to prove that the company could safely operate its small UAS beyond the operator's line of sight and over people. According to test site representatives, this risk mitigation testing entailed dozens of experiments, including how to address the risk of a UAS abruptly losing power. For instance, if a UAS operating over a house for an insurance inspection loses power, it could fall, potentially causing damage to the building as well as injuring someone standing on the ground below. In November 2018, FAA granted approval for the company to fly its fleet of UAS over people and beyond the operator's line of sight in sparsely populated communities nationwide for insurance claim inspections.

Test Sites Have Also Participated in Federal UAS Research Projects Intended to Inform UAS Integration

All test sites have competed for and were selected by federal agencies to participate, to varying degrees, in additional UAS research efforts designed to inform aspects of FAA's integration plans. The projects include:

The Department of Transportation's (DOT) UAS Integration Pilot Program (IPP): In May 2018, DOT selected 10 project teams—which included the Alaska, North Dakota, and Virginia test sites—to participate in this program aimed at evaluating different concepts for certain UAS operations in specific communities. According to DOT, the IPP is an opportunity for state, local, and tribal government agencies to partner with private sector entities, such as UAS operators or manufacturers, to, among other things, accelerate the approval of operations that currently require case-by-case authorizations. Two key intended outcomes of the IPP are to assess the respective communities' acceptance of low-altitude UAS operations, and to balance national and local interests in furthering UAS integration. For example, the Alaska test site is a member of the University of Alaska Fairbanks IPP team, with a primary focus of enabling complex UAS technology for pipeline inspections in the area's harsh climatic conditions through testing technologies, such as using detect and avoid technology at night. While project awardees do not receive any federal funding for this program, FAA officials told us they are collecting data from IPP efforts to inform future decision-making.
FAA's Center of Excellence for UAS: In May 2015, FAA selected a team of 15 research institutions, including the Alaska and New Mexico test sites, called the Alliance for System Safety of UAS through Research Excellence (ASSURE), to serve as FAA's Center of Excellence for Unmanned Aircraft Systems and to conduct academic research critical to safe and successful UAS integration. Congress has appropriated funds to ASSURE since fiscal year 2014 to pay for operational expenses and research, and according to FAA officials, ASSURE institutions are eligible to receive grant funding from FAA's Research, Engineering, and Development appropriations. ASSURE institutions receive federal grants to conduct research to assess specific technologies or risks with the intent to inform FAA regulations and policies. For example, ASSURE institutions have received grants from FAA to study UAS noise certification, ground and airborne collision severity and impacts, and UAS detect and avoid technologies. According to FAA, funding from non-federal entities, such as international civil aviation authorities, can be applied to ASSURE. Some of ASSURE's research has been peer reviewed and published. According to an ASSURE representative we spoke to, all of the research conducted through ASSURE is in alignment with FAA's plans for UAS integration as outlined in the 2018 UAS Integration Roadmap.

FAA's and NASA's UAS Traffic Management (UTM): The UTM program is a collaborative effort of FAA and NASA to design a system with a similar concept as FAA's air-traffic-control system for manned aviation that would enable small UAS to operate safely at low altitudes around other aircraft. NASA is leading the research, development, and testing of various technologies that would comprise the system, and plans to transfer the results of the research to FAA to determine next steps. NASA selected six test sites—Alaska, Nevada, New York, North Dakota, Texas, and Virginia—to participate, to varying degrees, in the four different phases of this project. NASA has provided funding to the six test sites through contracts for their participation in testing the system. UTM research is divided into four phases, called technology capability levels, each with specific technical goals. For example, technology capability level three entailed testing technologies that maintain a safe distance between two UAS flying over moderately populated areas. All six sites participated in the first three phases, which according to NASA officials brought in about 35 industry partners for this research effort. The Nevada and Texas test sites are currently participating in the fourth and final phase, which—as of October 2019—NASA expected to complete in 2019. In addition, FAA selected the North Dakota, Nevada, and Virginia test sites to participate in its UTM Pilot Program. The program's goals are to develop, demonstrate, and provide services that will support the implementation of UTM operations.

NASA's UAS Integration in the National Airspace System: Beginning in 2015, NASA provided funding to the New York and Virginia test sites, among other entities, for this project, which is intended to demonstrate solutions to technical challenges to inform FAA's development of operational standards for UAS. For example, through this project, NASA intends to test detect and avoid technologies by assessing UAS performance during a variety of scenarios, and then by recommending a minimum set of performance standards to FAA for consideration.
According to NASA officials, the agency has completed work at the New York test site related to developing standards for routine operations by large UAS. As of October 2019, NASA had ongoing research at the Virginia test site on command and control communications that officials expected to complete in 2019.

FAA's UAS Detection at Airports: According to FAA, six test sites—Nevada, New Mexico, New York, North Dakota, Texas, and Virginia—participated in this program alongside various industry partners to evaluate technologies that can be used to safely detect UAS near airports. Funded by FAA, this research project included evaluating the capabilities of various UAS detection technologies by different manufacturers at four U.S. airports in 2016 and 2017. This research was used to inform minimum performance standards for UAS detection systems deployed at airports.

FAA Has Improved Collaboration and Taken Other Steps to Address Challenges to Test Sites Conducting UAS Research

As the Program Has Matured, FAA Has Taken Some Steps to Address Management Challenges

All test site representatives stated that FAA has improved both its management of the UAS test sites and collaboration with representatives in recent years as the program has matured. According to test site representatives, initially, as the program began, there was considerable turnover among FAA test site managers, which made it more difficult for the staff at the test sites to collaborate with FAA officials to undertake research efforts. FAA officials acknowledged that because they had not established test sites before, it took time to determine the best approach for managing this program. However, according to most representatives, in the last few years, FAA has begun to better collaborate with the test sites. Specifically, FAA has solicited input from test site representatives on various issues related to UAS integration and helped facilitate information sharing between the test sites and various FAA lines of business. For example, agency officials told us that they invited air traffic specialists from a regional FAA office to participate in a recent UAS Test Site program semi-annual meeting. Through this meeting, these FAA regional staff learned about the test sites' initiatives and about unique aspects of the test sites' COAs, which, as previously noted, they use to conduct flight tests. According to FAA officials, with the better understanding about test sites' operations gained at the meeting, these regional FAA staff will be able to process the test sites' COA requests more efficiently. Most test site representatives also told us that FAA's current UAS test site program manager and other FAA staff are responsive to, for example, questions or requests for guidance on a particular issue.

Further, based on our interviews with test site representatives and our analysis of test sites' reports submitted to FAA, the agency has taken steps to address some challenges from the past. In our March 2015 testimony and July 2015 report on FAA's progress in integrating UAS into the national airspace, we outlined initial challenges that stakeholders most frequently cited as affecting test sites' ability to attract users and to generate sufficient revenue to remain in operation during their first year. Since 2015, FAA has taken several steps to address these challenges, by providing additional guidance, streamlining the COA process for test sites, and improving the agency's collaboration with and management of the test sites (see table 4).
However, based on our analysis of interviews conducted for this review with test site representatives and users, these previously identified challenges persist.

Lack of FAA guidance on priority research: Most test site representatives reported that while FAA has improved its management of the program, available FAA guidance still lacks the needed detail about research areas to prioritize in order to promote overall UAS integration efforts. For example, some test site representatives told us that the 2018 UAS Integration Roadmap should provide more information about the agency's planned timeframes for implementing various steps to achieve full UAS integration, such as how and when FAA plans to integrate large UAS. Without such details, representatives say they cannot fully inform potential users when it might be possible to routinely use some complex UAS operations that are in demand by industry but currently only allowed on a case-by-case basis, such as the ability to fly small UAS beyond the operator's line of sight or over people. Several representatives told us they are concerned that some potential test site users may postpone their research or conduct it abroad because of this lack of detail on when FAA plans to routinely allow such complex UAS operations. According to FAA officials and as noted in table 4 above, the agency has issued strategic plans and provided briefings to test site representatives and stakeholders on relevant research needed to achieve UAS integration. However, FAA officials told us that there are limitations on how much guidance they can provide the test sites. They said that the Anti-Deficiency Act prevents FAA from directing specific test site activities and obtaining research data, other than the operations and safety data required by the COA, without providing compensation. Officials also noted that until standards and regulations are developed—an effort for which the agency has not set a targeted completion date—a case-by-case approval basis will be needed for allowing complex UAS operations. With regard to the concern that some potential test site users may be conducting research abroad, FAA officials told us that testing abroad will not provide these stakeholders the same experience as testing in the United States, given that the U.S. national airspace system is more complex than those abroad in terms of traffic and congestion.

Complex and lengthy COA process: Most test site representatives and users we interviewed told us that FAA should implement a less complex and time-consuming COA process for the test sites. According to test site representatives, FAA's actions have decreased the time it takes to obtain simple COAs and Part 107 waivers, but for applications to conduct more complex research activities, FAA's process remains lengthy and uncertain. This challenge makes it more difficult for test sites to meet users' needs, according to representatives, and can subsequently lead companies to conduct UAS research in other countries. For example, some representatives told us that one test site's request for a waiver to fly UAS beyond visual line of sight had taken 3 years for FAA to approve, and they could not understand why. Representatives also told us that for COA applications involving requests to research complex UAS operations, it was not always clear why FAA denied their requests, leading to uncertainty.
According to FAA officials, the waiver that took 3 years to approve was an outlier, and the agency's processing of such waivers usually takes 90 days or less. However, in January 2018, DOT's OIG similarly reported that FAA has had difficulty keeping pace with the volume of Part 107 waiver requests received and, in particular, has been slow to approve complex UAS waivers—such as requests to operate beyond the operator's visual line of sight. In this report, the DOT OIG made recommendations related to improving the waiver process, which FAA is working to address.

Generating sufficient revenue to maintain test site operations: Most test site representatives told us that securing sufficient funding to develop future capabilities and infrastructure in order to attract industry users and partners remains a major challenge that they predict will continue. Some test site representatives told us that their respective contracts with NASA for projects such as UTM have been their largest single revenue source. Another representative mentioned that the U.S. Coast Guard has been a test site user, which has helped the site to generate revenue. Test sites have attempted to generate revenue in other ways, for example by obtaining state and local government funds to build infrastructure to attract users, applying for competitively awarded research contracts, and consulting and conducting research with potential users in different locations. FAA officials acknowledged that the test sites will need to continue to generate sufficient revenues to support their operations, but noted that, whenever possible, the agency provides the test sites with opportunities to compete to participate in funded research efforts, such as those related to the UTM program.

FAA Has Taken Steps to Address Technology-Related Challenges, Which Are Complex

Most test site representatives and users we interviewed also identified technology-related challenges affecting test sites' ability to conduct research as continuing issues. These mostly relate to technology-related capabilities that will be vital for achieving full UAS integration, but which are currently still in development (see fig. 5). As we have previously reported, integrating UAS into the national airspace will require FAA to address key technology-related challenges to enable routine UAS operations with manned aircraft. For example, in our July 2015 report, we identified such challenges affecting test sites, in addition to the management-related challenges discussed above. According to test site representatives and FAA officials, these key technology challenges and concerns could affect broader UAS integration and research efforts, and thus impact the pace of or stop the progress toward full integration into the national airspace system. Such key technology-related challenges and related efforts to address them include:

Availability of dedicated radio-frequency spectrum: Radio-frequency spectrum provides communication links between a UAS and its control station or operator. According to FAA, dedicated radio-frequency spectrum is important to ensure UAS safety and security in order to operate in the national airspace. For example, radio-frequency spectrum is needed for command and control, detect and avoid, and beyond visual-line-of-sight capabilities of UAS.
Without a dedicated radio-frequency spectrum, the intentional or unintended interference of radio transmissions could sever the UAS means of control, because other consumer products also use radio frequencies that could cause interference. FAA officials and test site representatives told us this spectrum-availability problem is the one challenge that has the potential to bring UAS research efforts to a halt if not addressed. Representatives from five of seven test sites indicated that availability of spectrum affects their ability to conduct their research operations and, more broadly, also affects the progress of other efforts contributing to UAS integration. Similarly, some test site users told us that when deciding on a potential test site to contract with for conducting their research, they asked about whether the test site faced any radio frequency interference. According to FAA officials, the agency is assisting test sites in addressing this challenge by collaborating with the Federal Communications Commission (FCC), which is responsible for allocating spectrum to nonfederal users for various purposes and assigning spectrum licenses. FAA's Spectrum Office is a participant in the regularly occurring meetings between FAA officials and test site representatives. These representatives said they have been communicating with FAA to clarify guidance on the different frequency bands to use at various operating altitudes related to an FCC rule. Nevertheless, according to FAA officials, in the near future, more issues will likely surface related to spectrum because of the industry's interest in conducting flights beyond visual line of sight for both small and large UAS. FAA officials told us spectrum reserved for aviation safety communications is limited. Therefore, the officials are investigating how to get the maximum UAS capacity in the national airspace through efficient management of the currently allocated spectrum. Furthermore, FAA is preparing a report for Congress that covers the use of spectrum allocated for possible UAS activities. FAA officials told us that the report will not delay or prohibit the use of any licensed spectrum for UAS. FAA expects to submit its report to Congress in April 2020.

Limitations to conducting counter-UAS detection and research: Counter-UAS activities involve using technology to help detect, track, and defend against illegal or unauthorized activities. Pursuant to federal law, it is illegal to damage or destroy aircraft, and this statute may apply to UAS. Other provisions of federal law may prohibit the use of certain detection systems and mitigation systems. FAA does not support the use of counter-UAS systems, which includes interdiction capabilities, by any entities other than the federal agencies with explicit statutory authority to use these technologies, including for the testing and evaluation of such systems. In addition, FAA has limited authority for testing UAS detection and mitigation systems at airports. Federal agencies with the authority to mitigate risks of UAS under certain circumstances are the Departments of Defense, Energy, Justice, and Homeland Security. According to one test site representative, industry's ability to conduct research on counter-UAS technologies is limited because it requires the participation of one of the four agencies listed above. FAA officials told us that these federal agencies have the authority to conduct counter-UAS operations.
These agency officials noted that the test sites could support counter-UAS research activities, for example, by providing the expertise and any infrastructure needed for the test flights, such as a chase aircraft. Some test site representatives and users we spoke to suggested that it would be helpful if more counter-UAS research were allowed. For example, they said that further research is needed to understand how to address counter-UAS threats—such as someone illegally trying to interfere with the radio frequency of a UAS delivering a package. One test site representative told us that multiple users want to fly swarms of UAS (where one operator flies multiple UAS simultaneously in proximity) to conduct counter-UAS operation research, but it is a challenge to support users' desired research because of current restrictions. However, some stakeholders pointed out that the available technology for conducting such research, such as detect and avoid technology, is not developed enough yet to allow for effective research in this area.

FAA Collects Data from the Test Sites but Has Not Fully Leveraged the Data or the Program to Advance UAS Integration

FAA Regularly Collects Information and Data from Test Sites, but Has Not Determined How to Use These Data to Advance UAS Integration

FAA regularly gathers information from the test sites in the following ways:

Meeting with test site representatives: In the previously described regular meetings between FAA and test sites—monthly by teleconference and semi-annually in person—participants share information on experiences conducting research and challenges faced. According to FAA officials, the meetings are helpful in informing the agency about the types of UAS research that users are pursuing, among other things. Representatives of all seven test sites agreed that these meetings are helpful. For example, some representatives noted that such meetings facilitate information sharing about, for example, the status of other FAA-affiliated UAS research efforts—such as UTM and the IPP—and the status of other FAA initiatives underway, such as UAS rulemakings.

Collecting data from test sites: Test sites have provided several types of data to FAA since 2015, including:

• Entering data on flight tests into the MLS—the system that FAA established for this purpose. MLS data include details about flight tests, such as duration, whether the test involved complex operations such as beyond the operator's line of sight, and any accidents or incidents that occurred. According to FAA officials, MLS is used for collecting test site data—which will be used to, among other things, inform the final report to Congress that is required by statute.
• Submitting data into FAA's aforementioned COA application processing system, which FAA uses to process COAs.
• Submitting quarterly and annual reports to FAA, which summarize activities completed by each test site, including research and development efforts for users, milestones met, and the key challenges faced in undertaking activities.

According to FAA officials, their efforts related to the UAS test site program have been primarily focused on meeting requirements such as those related to test sites outlined in the 2012 Act. Among other things, the 2012 Act required FAA to:

• Establish test sites to provide a way to access airspace to conduct research and development.
• Develop standards and requirements for UAS flight operations at test sites.
• At the end of the test site pilot program, submit a final report to Congress with findings and conclusions about projects facilitated through the program.

In response to the 2012 Act's requirements, as previously noted, FAA established the test sites and developed requirements for how test sites should conduct UAS flight testing. As FAA has been focused on collaborating with the test sites and meeting the 2012 Act's and other requirements, agency officials have not prioritized determining how to use data gathered from the sites to advance UAS integration.

To date, FAA has only used data from test sites in a few cases to directly inform the agency's UAS integration efforts. For example, in one case, FAA used data from an ASSURE project conducted at a test site to develop a noise certification standard; these data were not from MLS. In another example, FAA officials told us that—as of February 2019—they were planning to use MLS and other test site data to make a decision about an applicant that had submitted a request to conduct UAS package delivery operations. According to officials, FAA intends to use the data collected from test sites to a greater extent in the future to further integration, such as in the following ways:

• In November 2018, FAA asked ASSURE to review test site data to identify data FAA could use to approve safety cases. As previously noted, FAA generally requires safety cases to be submitted as part of any application to use a UAS operation that is not yet routinely allowed in the national airspace due to risk, such as flights beyond the operator's line of sight. Safety cases include evidence of how the applicant will address any risks that the new complex UAS operation would introduce into the airspace, such as the risks of the UAS abruptly losing power. According to FAA officials, this research was initiated in December 2018 with a plan to complete it by March 2020. According to these officials, the results of this research should help the overall UAS integration effort. Specifically, the results may help FAA officials to more clearly define the information UAS operators should submit to demonstrate how the safety risks associated with their proposed operation will be mitigated.
• Officials indicated that FAA also intends to use MLS and other test site data to continue developing, evaluating, and validating the aforementioned UTM system.

FAA officials told us that while they have not fully leveraged test site data, they are using other information from the test sites—such as information shared in meetings—to support the agency's efforts to integrate UAS into the national airspace. According to FAA officials, the test site program supports UAS integration not only by providing industry stakeholders with an avenue for testing complex UAS operations and concepts, but also by helping FAA officials stay informed about issues related to integration. Specifically, these officials told us that the informal information sharing that occurs in regular meetings between FAA officials and test site representatives has been valuable. Through such informal exchanges, FAA officials keep abreast of the various types of research being requested by industry stakeholders and challenges faced by such stakeholders pursuing such research. For example, as noted previously, test site representatives have used these meetings to discuss challenges—such as related to dedicated spectrum—with FAA officials.
In addition, based on what FAA officials have observed at test sites, the agency has been able to grant other airspace users more flexible authorizations, for example, COAs covering larger geographical areas. Specifically, these agency officials told us that because they observed that the test sites were able to maintain an acceptable level of safety after being allowed more flexibility in their aforementioned nationwide blanket COAs, the agency felt confident enough to give more flexibility to other airspace users with COAs for conducting complex UAS operations.

FAA's UAS integration plans specify the importance of not only collecting data but also using the data to inform strategic planning efforts. FAA's publicly available plans state that FAA intended to use information from the test site program to inform its UAS integration efforts. Specifically, according to the 2018 UAS Integration Roadmap, the test site program plays a critical role in UAS integration, as one of the program's goals is to provide information so that FAA can determine technical and operational trends that could support safety-related decision making for integration, and develop the policy and standards required to address new and novel aspects of UAS flight operations. In addition, FAA's Unmanned Aircraft Systems Test Site Data Collection and Analysis document, issued in 2016, indicates that by September 2016, FAA planned to analyze the data to determine operational trends, communicate them via dashboards, and share the collected and analyzed data with stakeholders. Further, federal internal control standards state that agencies should use quality information to achieve the agency's objectives and support informed decisions. Specifically, agencies should first identify what data are needed to achieve the entity's objectives, then obtain the needed data from internal and external sources in a timely manner, and finally process and evaluate the obtained data into quality information that supports the entity's objectives.

While FAA has indicated plans to analyze and use test site data in the future, it has not yet developed a data analysis plan to do so. FAA officials told us that having an analysis plan for MLS data could be useful and that—as of September 2019—they were considering creating such a plan but had not taken steps to do so. According to FAA officials and some test site representatives, and based on our review, some currently collected data could be useful in informing integration efforts. Specifically, FAA officials and two test site representatives told us that some MLS data—for example, on accidents and lost control links—could be useful. For example, data on accidents and lost communication links could be combined with other MLS data on the respective test flights—such as the time of day, type of UAS being flown, and other factors—to determine whether certain conditions or UAS models are at a greater risk of a crash or other incident. According to FAA officials, such combined data could theoretically help the agency measure risk and determine whether any factors contribute to lost control links between the UAS and the remote pilot in the flight testing environment. The results of such a data analysis could help inform integration efforts, such as in developing operational standards for UAS.
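To illustrate the kind of combined analysis described above, the sketch below joins hypothetical flight-test records with incident records and compares incident rates across conditions. It is a minimal illustration only: the field names (flight_id, uas_model, time_of_day, beyond_los) and all values are assumptions for demonstration, not the actual MLS schema or data.

```python
# Minimal sketch of the risk analysis described above, using a hypothetical
# extract of MLS flight-test data. Field names and values are illustrative,
# not the actual MLS schema.
import pandas as pd

# Hypothetical MLS extract: one row per flight test.
flights = pd.DataFrame({
    "flight_id":   [1, 2, 3, 4, 5, 6],
    "uas_model":   ["A", "A", "B", "B", "B", "A"],
    "time_of_day": ["day", "night", "day", "night", "day", "day"],
    "beyond_los":  [False, True, False, True, True, False],
})

# Hypothetical incident records (accidents, lost control links) keyed to flights.
incidents = pd.DataFrame({
    "flight_id":     [2, 4],
    "incident_type": ["lost_link", "accident"],
})

# Combine the two sources, flagging flights that had any incident.
merged = flights.merge(incidents, on="flight_id", how="left")
merged["had_incident"] = merged["incident_type"].notna()

# Compare incident rates across conditions to surface candidate risk factors.
for factor in ["uas_model", "time_of_day", "beyond_los"]:
    rates = merged.groupby(factor)["had_incident"].mean()
    print(f"Incident rate by {factor}:\n{rates}\n")
```

In practice, an FAA analysis plan would need to specify the actual MLS fields to be used, data-quality checks, and statistical methods appropriate to the volume of flight tests; the sketch only shows the general shape of such an analysis.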
Without a plan for analyzing the data, FAA could miss opportunities to leverage what was intended to be a cornerstone of the test site program—information to help FAA move UAS further toward full integration into the national airspace. Having such an analysis plan could help FAA articulate how the agency will use test site data more in the future and identify other data that are within the agency's authority to request from test sites that would help inform integration. Representatives from three test sites told us that their staff currently collect other data that FAA does not collect but that could help to inform the agency's UAS integration efforts. Based on our review of test sites' annual reports to FAA, for instance, all test sites have been involved in facilitating test flights of UAS operations beyond the operator's line of sight. FAA may be able to use data from such flight tests as it develops standards for allowing these types of UAS operations on a routine basis in the national airspace. Further, the National Academy of Sciences reported in 2018 that FAA has underutilized the test sites because it has neither determined which test site data could inform the agency's risk assessments for UAS (which FAA conducts before allowing any new complex UAS operation to be used on a routine basis) nor collected those specific data from test sites.

FAA Is Publicly Sharing Limited Information about How the Test Site Program Informs the Agency's UAS Integration Efforts

FAA provides limited information to the public, including stakeholders and test site users, about how the research being conducted at test sites helps to inform FAA's UAS integration efforts. FAA officials point to two main public efforts related to the test site program:

FAA's 2018 UAS Integration Roadmap, described earlier, includes a high-level overview of how the test site program informs the agency's integration efforts. For example, it states that test sites provide information that FAA can use to determine technical and operational trends that could support safety-related decision making. However, it does not provide any information about, for example, how the research at test sites directly relates to FAA's next planned phases of integration.

FAA's UAS Test Sites website is the agency's main public outreach effort and provides information such as links to the websites of the test sites. However, in examining the website, we found little description of how this program relates to FAA's broader integration plans and no discussion of desired outcomes from the research under way at test sites. In contrast, the websites for two other UAS research efforts that FAA is involved in—the UTM program and DOT's IPP—have program descriptions that include the purpose of the program and some intended research outcomes. These two program descriptions make it relatively easy for the reader to understand how those programs fit into FAA's broader UAS integration efforts. See figure 6, which shows the program descriptions on FAA's respective websites for the test site program and the IPP.

FAA also compiles some information on test sites that is not publicly available. For example, FAA staff annually compile information about the types of research conducted at test sites and present it in the Test Sites Fact Book, which links the information to key capabilities needed for the incremental integration of UAS into the national airspace.
However, this document is only available to FAA staff and, according to officials, contains some data that test site users could deem proprietary. FAA officials told us that they also plan to submit the aforementioned final report to Congress on the test site program, which is currently due in late 2023. According to these officials, however, this report is not intended to be made public.

All test site representatives and many users in our review (13 of 18) reported that publicly available information on research efforts underway at test sites is limited. Many users we spoke to (11 of 18) stated that FAA should include more information about the test sites on its website and in FAA's planning documents, such as the 2018 UAS Integration Roadmap. These representatives and users also told us that improved FAA communication could increase UAS stakeholders' awareness of test sites' capabilities, expertise, and services, and their understanding of how the program fits into FAA's broader integration efforts.

According to FAA, collaboration and cooperation across industry and government are important for UAS integration—a complex endeavor involving multiple stakeholders from different sectors. As FAA's 2018 UAS Integration Roadmap states, given the large scale of the UAS integration effort, FAA must rely on crucial relationships across government and industry to ensure its integration efforts are harmonized and consistent. It further states that all the work needed to resolve collective challenges requires collaboration between partners at local, state, tribal, and national levels, as well as with partners across the UAS stakeholder community.

In addition, federal internal control standards and leading practices for reporting on research and development activities emphasize the importance of making the status of such activities transparent to stakeholders. Specifically, the federal internal control standard for communicating information calls on federal agencies to externally communicate quality information so that external parties can help the entity achieve its respective objectives. Further, this standard suggests that agencies should select appropriate methods to communicate externally, taking into consideration factors such as the intended audience and the availability and ease of access to the information. In addition, as we have reported, leading practices for reporting on research and development efforts include clearly communicating the status of such efforts to the public and stakeholders. For example, in a 2017 report about FAA's management of its aviation research and development portfolio—which includes UAS research efforts—we found that FAA could more fully adhere to leading practices if it provided more information for Congress and other stakeholders, such as on the status of various research and development activities. We noted that with more complete and transparent information, Congress, industry, and other stakeholders are better able to make informed decisions. In another example, in several reports on FAA's implementation of the Next Generation Air Transportation System—another complex endeavor involving coordination with industry and other stakeholders—we emphasized the importance of sharing information about the status of various projects with stakeholders whose participation will be essential to the progress of the overall effort.
FAA officials told us that they were wary of providing more public information about the test sites, based on concerns about potentially being perceived as promoting the designated test sites and concerns about sharing data that could be proprietary. For example, officials told us that when potential test site clients approach FAA, they simply direct these potential clients to FAA's UAS Test Sites website. The officials told us that they do not wish to be seen as promoting or advertising one of the FAA-designated UAS test sites over the others, because such promotion would conflict with FAA's role as a regulator. They also said that FAA wants to avoid suggesting that operators seeking to research complex UAS operations are required to contract with a designated test site. They noted that the decision about whether or not to use a designated test site should be left to the potential client. In addition, FAA officials expressed concerns about sharing any information that test site users could deem to be proprietary, such as information about their research projects currently underway. For example, the officials noted that some test site users do not want to be identified as such.

In our assessment, however, it would be possible for FAA to share more information publicly about how the test site program fits into the agency's broader UAS integration effort without promoting any particular test site or sharing any proprietary information. For example, some context from the Test Sites Fact Book could be informative because it links research underway at test sites to FAA's integration plans. This book includes a section on current test site research with examples that, if shared, could help increase stakeholders' understanding of how FAA could use the research being conducted at test sites to inform its decisions. This section indicates that test sites are involved in research aimed at, for example:

Advancing UAS standardization, meaning FAA and all the test sites working together to advance the industry from a systems perspective to develop standardized UAS training, maintenance, and safety risk mitigation. Data from such research could help inform FAA decisions, such as setting standards for drone spacing and mitigating risks.

Using UAS for wildfire operations, including test sites and users—such as emergency response agencies—finding effective ways to use UAS to respond to such situations. Data from such research could help FAA improve, for example, its response time when an emergency COA is requested by such agencies.

Such additional information, if shared, could help FAA clearly demonstrate to the wider audience of UAS stakeholders that the agency is fostering research through test sites that directly relates to its UAS integration plans. As noted above, the test site users we interviewed told us they were conducting research at test sites related to FAA's upcoming phases of its UAS integration plan, including research on large cargo and passenger operations. Although some UAS stakeholders—such as users of test sites—may currently be aware of the research underway at test sites, the audience for UAS integration is larger and includes others, such as those from the information technology and agricultural industries and local government agencies, whose stakeholders may be less familiar with FAA's efforts.
Further, with more accessible information on how research at the test sites relates to FAA's UAS integration efforts, more stakeholders may choose to use a test site to conduct their own research. Given that one of the primary goals of the test site program is to provide information to FAA to help the agency develop the policies and standards required to address new and novel aspects of UAS flight operations, having more test site users could help the agency achieve this goal by making more data available to FAA. As noted previously, many selected users we interviewed told us that using a test site provided a significant benefit for advancing their entity's UAS research and development efforts. However, some UAS stakeholders who could benefit from a test site's assistance—such as those outside of the aviation industry seeking to submit a safety case to FAA for approval of complex UAS operations—may not currently be aware of the option for conducting research through a test site. For instance, a stakeholder interested in conducting research involving, for example, using UAS for small package delivery may be unaware that test sites have already helped to facilitate such research for their users. FAA officials told us that stakeholders outside of the aviation industry can particularly benefit from a test site's expertise, since they may be less familiar with FAA's processes for approving UAS operations on a case-by-case basis. All test site representatives and some users in our review told us that if FAA communicated more clearly about the role of the test site program in the overall UAS integration effort, more stakeholders would likely leverage the test sites.

Conclusions

FAA's designated UAS test sites provide significant benefits to the UAS industry, offering their users a variety of services with minimal operating investment from FAA. Many users in our review told us that their decision to work with a test site proved invaluable in helping achieve their respective goals. As FAA proceeds with its plans to incrementally integrate UAS into the national airspace—a large effort requiring collaboration with many stakeholders—the agency could benefit from better leveraging all of its available resources. According to FAA, additional research and development work—including data on UAS operations—is needed to inform its decisions as it allows more complex UAS operations to be routinely used in the national airspace. UAS stakeholders working with FAA test sites are testing complex UAS operations and various capabilities identified by FAA as needed to inform integration policies and rules moving forward. However, without a plan for analyzing the test site data, FAA could miss opportunities to better use the data to inform the overall UAS integration effort, such as by applying the data to inform UAS operational standards. Having such an analysis plan could help FAA articulate how the agency will use test site data more in the future and identify other data that are within the agency's authority to request from test sites that would help inform integration. In addition, by sharing more information about how the program relates to FAA's integration efforts, the broader community of UAS stakeholders may have a greater awareness of the types of research and testing being conducted at test sites and thus be better able to participate in the effort.
Further, without more accessible information, such as examples of how research underway at test sites aligns with FAA's planned phases of UAS integration, some UAS stakeholders may not be aware of their options for pursuing research through a test site, potentially limiting the usefulness of the test site program for UAS stakeholders and for FAA.

Recommendations for Executive Action

We are making the following two recommendations to FAA:

The Administrator of FAA should develop a plan for analyzing currently collected UAS test site data to determine how they could be used to advance UAS integration, and whether the collection of any additional test site data, within the agency's authority to request, could be useful for informing integration. (Recommendation 1)

The Administrator of FAA should publicly share more information on how the test site program informs integration while continuing to protect information deemed proprietary. This information could be shared, for example, on the agency's UAS Test Sites website. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this report to DOT and NASA for their review and comment. In its written comments, reproduced in appendix II, DOT partially agreed with the first recommendation and agreed with the second recommendation. FAA also provided technical comments, which we incorporated as appropriate. NASA officials reviewed our draft but did not have any comments.

FAA partially agreed with the first recommendation to develop a plan for analyzing test site data, noting a concern about using such a plan to determine if the collection of any additional test site data could be useful for informing integration. Specifically, FAA noted that the agency cannot require test sites to share data from their privately contracted users, other than the data required for the test sites' COAs or for their OTAs with FAA. FAA also noted a concern that our draft report incorrectly assumes that the data collected through the test site program are adequate to meet FAA's UAS integration needs when this program is limited in the data that can be collected. However, our report states that the test site program is only one of several sources of data to inform FAA's future decisions regarding UAS integration, and that a data analysis plan could help FAA determine whether any additional data could be useful for informing integration. To address FAA's comments, we added language to our recommendation to clarify that the consideration of potential additional data would be for data that are within the agency's authority to request from test sites, such as through the OTAs. We continue to believe that implementing this recommendation would enable the agency to better leverage test site research and data to inform its decisions related to UAS integration.

FAA agreed with our second recommendation to share more information on how the test site program informs the agency's UAS integration effort. However, FAA stated that the agency's integration plans and Test Sites Fact Book cannot be made publicly available due to future rulemaking and proprietary information contained in these documents. We acknowledge in our report that these documents could include information that test site users deem proprietary. We include in our recommendation that FAA should continue to protect any information deemed proprietary while making information about the test site program's contribution to UAS integration publicly available.
We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: List of Unmanned Aircraft Systems (UAS) Test Sites and Stakeholders Whose Representatives GAO Interviewed

Federal Aviation Administration Designated UAS Test Sites:
Griffiss International Airport (New York) – Northeast UAS Airspace Integration Research Alliance
New Mexico State University – Physical Science Laboratory
North Dakota Department of Commerce – Northern Plains UAS Test Site
State of Nevada – Nevada Institute for Autonomous Systems
Texas A&M University-Corpus Christi – Lone Star UAS Center of Excellence and Innovation
University of Alaska Fairbanks – Alaska Center for UAS Integration
Virginia Polytechnic Institute and State University – Mid-Atlantic Aviation Partnership

Other Stakeholders:
Agricultural Research Service, United States Department of Agriculture
Alliance for System Safety of UAS through Research Excellence, Mississippi State University
Association for Unmanned Vehicle Systems International
JHW Unmanned Solutions, LLC
Massachusetts Institute of Technology Lincoln Lab
National Emergency Response and Recovery Training Center, Texas A&M Engineering Extension Service
Project Vahana, Airbus A3
Vanilla Aircraft (now Vanilla Unmanned)

Appendix II: Comments from the Department of Transportation

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact
Heather Krause at (202) 512-2834 or krauseh@gao.gov.

Staff Acknowledgments
In addition to the individual named above, Vashun Cole (Assistant Director); Jessica Bryant-Bertail (Analyst-in-Charge); Jon Felbinger; Camilo Flores; Richard Hung; Josh Ormond; Amy Rosewarne; Alexandra Rouse; Kelly Rubin; Marc Schwartz; and Larry Thomas made key contributions to this report.
Why GAO Did This Study

UAS could provide significant economic and social benefits, for example, by delivering packages or aiding in search and rescue missions. FAA is taking a phased approach to incrementally integrate UAS safely into the national airspace. As directed by statute, FAA established UAS test sites to allow industry to assess the safety and feasibility of complex UAS operations, such as flying beyond an operator's line of sight. FAA has stated that this program provides research results and other data needed to reach full integration.

GAO was asked to review FAA's management of the test sites. This report examines, among other things: (1) the research conducted at FAA's designated UAS test sites, and (2) how FAA is leveraging and sharing information from the test site program to advance integration. GAO reviewed relevant statutes and regulations, reports, and FAA guidance; analyzed test sites' efforts, including flight test data submitted to FAA from 2015 through 2018; and interviewed FAA officials, test site representatives from all seven test sites, and 18 test site users, selected to include a range of perspectives.

What GAO Found

The Federal Aviation Administration's (FAA) seven designated test sites for unmanned aircraft systems (UAS) have facilitated about 15,000 UAS flight tests since 2015 and supported a wide range of research. Both public and private entities have used the test sites to test technologies in preparation for varied UAS activities, from inspecting utilities to carrying passengers. Research conducted at test sites provides data on the performance of various UAS capabilities and technologies; such data could support FAA's integration efforts. While FAA collects these data from test sites, it has not fully leveraged the data or the program to advance UAS integration. According to FAA's 2018 Roadmap for UAS Integration, a key goal of this program is to provide data to support FAA's decisions on drone integration. FAA officials said the agency intends to use the data to a greater extent in the future to advance integration. Without an analysis plan, however, FAA could miss opportunities to better use the data to inform the overall integration effort, such as to inform UAS operational standards. Also, FAA reports limited public information about how test sites' research relates to the agency's integration plans. Agency officials told GAO they were wary of sharing more information about the test sites, citing concerns about, among other things, protecting test site users' proprietary data. All test site representatives and most users GAO interviewed, however, said that more information on test sites' research would be helpful for UAS stakeholders' research efforts. According to FAA plans, the agency must rely on relationships with stakeholders across government and industry to ensure that integration efforts are harmonized. By sharing more information publicly, FAA could demonstrate to such stakeholders how the agency is fostering and using research to inform and advance integration. Further, with more information, more stakeholders may opt to use a test site to conduct their own research, thus potentially increasing the data available to FAA to inform its integration decisions.

What GAO Recommends

GAO recommends that FAA (1) develop a data analysis plan for test site data and (2) share more information on how this program informs integration, while protecting proprietary data. FAA partially agreed with the first recommendation and agreed with the second.
GAO added language to the first recommendation to address the issue that FAA raised, as discussed in this report.
Background

ESSA Provisions Related to the Educational Stability of Youth in Foster Care

ESSA, enacted in December 2015, amended Title I, Part A (Title I) of the ESEA to include a number of requirements for SEAs and school districts to ensure the educational stability of children in foster care. For the purposes of this report, we refer to these requirements collectively as the "ESSA educational stability provisions." Specifically, SEAs are required to describe in their Title I state plans the steps they will take to ensure collaboration with the state child welfare agency to ensure the educational stability of children in foster care, including assurances that:

Such children enroll or remain in their school of origin, unless a determination is made that it is not in the child's best interest to attend the school of origin. This decision shall be based on all factors relating to the child's best interest, including consideration of the appropriateness of the current educational setting and the proximity to the school in which the child is enrolled at the time of placement.

When a determination is made that it is not in a child's best interest to remain in the school of origin, the child is immediately enrolled in a new school, even if the child is unable to produce records normally required for enrollment.

The enrolling school shall immediately contact the school last attended by the child to obtain relevant academic and other records.

The SEA will designate an employee to serve as a point of contact for child welfare agencies and to oversee implementation of the above provisions.

LEAs are required to provide in their Title I LEA plans assurances that they will collaborate with the state or local child welfare agency to:

designate a point of contact, if the corresponding child welfare agency notifies the LEA in writing that the child welfare agency has designated a point of contact for the LEA; and

develop and implement clear written procedures governing how transportation to maintain children in foster care in their school of origin, when in their best interest, will be provided, arranged, and funded for the duration of the time in foster care.

The ESSA requirements described above were generally required to be implemented by December 10, 2016. In addition, SEAs and school districts are required to publicly report on the academic achievement and graduation rates of youth in foster care on their annual report cards.

States and localities also have some flexibility in implementing the ESSA educational stability provisions. For example, ESSA does not prescribe a specific process for determining whether it is in a child's best interest to remain in their school of origin. In making this determination, state and local agencies have flexibility in determining which factors should be considered when evaluating the appropriateness of a child's current educational setting, as well as any additional factors that pertain to a child's best interest. Similarly, school districts and child welfare agencies generally determine the transportation procedures to use, provided they meet the minimum statutory requirements. In addition, SEAs may choose various approaches to help LEAs implement the ESSA educational stability provisions. For example, SEAs may decide to independently, or with their state child welfare agency, issue policies or guidance, disseminate question and answer documents, or hold informational meetings and webinars.
Federal Technical Assistance and Oversight

Education and HHS collaborated to provide states with joint non-regulatory guidance specific to the ESSA educational stability provisions. In addition to this written guidance, Education provides technical assistance to states, such as through the State Support Network, one of its technical assistance providers. Each state also has a point of contact at Education for questions, according to Education officials. Education's Office of School Support and Accountability oversees state implementation of Title I, Part A of the ESEA, including the amendments made by ESSA. Education's oversight of SEAs includes reviewing state Title I plans that describe how states will follow a variety of federal requirements outlined in Title I, and periodic reviews of how each state is implementing Title I. These reviews occur every few years. HHS's Children's Bureau oversees state child welfare agencies' implementation of Title IV-E, including the provisions in the Fostering Connections Act, and also provides related technical assistance.

State and Local Officials Reported Several Challenges Related to Implementing the ESSA Educational Stability Provisions

State and local officials reported facing several challenges related to implementing the ESSA educational stability provisions. Specifically, officials reported challenges with (1) turnover among local child welfare and educational agency staff, (2) obtaining school district input during the process for determining whether it is in a youth's best interest to remain in their school of origin (referred to as best interest determinations), (3) providing and funding transportation, (4) ensuring accurate identification of youth in foster care, and (5) monitoring how school districts implement these provisions. In addition, while we did not ask on our survey about the requirement to immediately enroll youth in a new school if it is determined that remaining in the school of origin is not in their best interest, or about the requirement for the enrolling school to immediately contact the last school attended to obtain relevant records, education and child welfare officials we interviewed said they experienced challenges with immediate enrollment and records transfer for special populations of youth.

Turnover among Local Child Welfare and Educational Agency Staff

Turnover of local educational and child welfare agency officials was reported as a significant challenge that affects how many states and localities implement the ESSA educational stability provisions, according to our survey and interviews. Specifically, in our survey, 43 of 51 SEAs reported turnover of local child welfare agency points of contact as at least somewhat challenging. A similar number of respondents (39) reported facing challenges with turnover of school district points of contact (see fig. 1). During our discussion group, state child welfare agency officials highlighted turnover of local child welfare agency and school district staff as one of the most significant challenges their states face in ensuring educational stability for youth in foster care. In addition to turnover itself being a challenge, several other challenges reported by SEAs are related to staff turnover, according to officials we spoke with from four state and local educational and child welfare agencies.
Specifically:

Thirty-two SEA survey respondents identified maintaining an accurate list of school district foster care points of contact for their state as challenging, and officials from four state and local educational and child welfare agencies we spoke with stated that turnover makes it difficult to keep these lists updated. One SEA point of contact said that when she sends emails to school district points of contact, she receives numerous responses each time from school district staff saying they are no longer the point of contact. Officials we interviewed at one school district noted that they tried to identify a new point of contact at another school district, but the list on the state website had not been updated.

Thirty-eight SEAs reported on our survey that ensuring that school district points of contact are aware of their responsibilities is a challenge. Eight state and local educational and child welfare agency officials we interviewed echoed this observation and cited staff turnover as leading to a lack of awareness of responsibilities or protocols related to the ESSA educational stability provisions.

Local staff being unaware of their responsibilities under ESSA can lead to conflicts, according to officials from two state and three local agencies we interviewed, and resolving conflicts between school districts and local child welfare agencies was a challenge reported by three-quarters (38) of SEA survey respondents. For example, officials at one local child welfare agency said they encountered school district officials who did not believe a youth in foster care could attend their current school, since their foster parent lived outside the school district. To resolve the conflict, the school district point of contact discussed the provisions with the school officials.

To alleviate challenges related to turnover, SEA points of contact we surveyed and interviewed explained that they regularly provide information to local school district and child welfare agency officials on the ESSA educational stability provisions. To inform and remind local officials about the provisions, a few of these officials said they send emails to school district points of contact or provide training on the provisions at orientation for new staff at child welfare agencies. In all three states we visited, the SEAs and/or state child welfare agencies said they held joint presentations for both school districts and local child welfare agencies, and SEA officials in Georgia said they are considering holding regional collaborative meetings every four to six months. In addition, most SEAs reported on our survey that they work with their state child welfare agencies to provide or develop assistance, guidance, and sample documents or templates to facilitate implementation of the ESSA educational stability provisions at the local level. (See tables 1 and 2 in appendix II for more information on this assistance.)

School District Input for Best Interest Determinations

On our survey, 34 of 50 SEAs reported that ensuring school districts participate in best interest determinations is a challenge (see fig. 2). Two of five state child welfare agency officials in our discussion group also described challenges related to the lack of collaboration between child welfare agencies and schools on best interest determinations.
While ESSA does not prescribe who should be involved in the best interest determination, the joint federal guidance encourages state and local child welfare and educational agencies, including school districts, to develop a process that involves all relevant parties. School district involvement, however, depends on child welfare agencies informing them when a child enters foster care or changes homes. Officials we interviewed at several child welfare agencies indicated they may not include school districts or schools in these determinations due to time constraints. Child welfare officials explained that removing a child from a home and placing them into foster care is a chaotic time, and many steps need to be taken to quickly provide the child with a safe environment. During this time, caseworkers may lack the capacity to collaborate with school districts or schools. Child welfare agency officials at two local offices we visited explained that they prioritize a child's health and safety when placing a child in a new foster home and that they place a greater focus on these issues than on educational stability.

Some child welfare agency officials we spoke with said they do not always need school district input to make a best interest determination. For example, officials at two local child welfare agencies said that in some cases, the commute to a child's current school may be so long that remaining there is clearly not in a child's best interest. Officials from one state and two local child welfare agencies told us they assume it is in the best interest of the child to remain at their current school. Officials from the state child welfare agency said they do not believe they need to consult with school districts to make that decision. Officials at another local child welfare agency said it would not be helpful to collaborate with school districts on the best interest determination, since the child welfare officials do not believe where the child attends school is the highest priority. However, youth we spoke with in our discussion groups told us that changing schools can create several challenges (see text box).

Officials from other state and local child welfare agencies told us they recognize the need to involve school districts and are taking steps to try to include them in best interest determinations. For example, one state child welfare agency we visited includes a line for the school district point of contact's signature on the state's best interest determination form; however, we heard from officials at a local child welfare agency that the school district point of contact may not be involved in making the best interest determination, and the form may not be consistently used. Officials at a local child welfare agency told us that they hold best interest determination meetings with the school district by phone because these meetings are faster to schedule than in-person meetings. Rather than speaking with school district staff, officials from four local child welfare agencies said they try to contact school staff that may be close to a child, such as a counselor or teacher, but officials from three of these agencies said they may not do so in every case.

Providing and Funding Transportation

Thirty-seven of 50 SEAs reported on our survey that assisting school districts with identifying or arranging transportation is at least somewhat challenging (see fig. 3).
To help school districts and local child welfare agencies identify transportation options, SEAs in two states we visited provide guidance or other documents to these agencies that describe potential transportation options. School district and local child welfare agency officials we spoke with reported using different approaches to transport youth, including having foster parents, school district or child welfare staff, or the youth drive to school; rerouting buses; hiring a taxi or other private transportation service; or using public transportation. Sometimes they reported combining these methods to transport youth to their current school. However, eight school district and local child welfare officials noted difficulties with their options, including limited options in rural areas and a lack of appropriate transportation for younger youth and those with behavioral issues. For example, an Arizona local child welfare official explained that while they can use taxis to transport youth, taxis are not approved for use for children age 6 and younger. Foster parents and youth we spoke with shared challenges they have experienced with transportation to the school of origin (see text box).

Experiences of Selected Foster Parents and Youth with Transportation to School of Origin

Multiple foster parents in two states we visited shared that they were told by child welfare case workers that the foster parent(s) would have to transport children in their care to school for those children to remain in their current school. They told us that sometimes they could not drive the child due to distance or the needs of other youth in their care, and the child transferred to a new school. We also heard that other modes of transportation may be unreliable or cause difficulties for a child's schedule. For example:

One child in foster care in Arizona told us that she missed a week of school because the taxi provided by the child welfare agency failed to pick her up.

A child in a foster care group home in Ohio said that despite being placed in a school which was in the same school district as her school of origin, her commute was long—she needed to take two public buses—and she sometimes missed dinner.

On our survey, 30 of 50 SEAs reported that helping school districts determine how to fund the additional transportation costs—defined in the joint federal guidance as the difference between what a school district would otherwise spend transporting a student to their assigned school and the cost of transporting a child in foster care to their school of origin—is also challenging. Among these 30 SEAs, 12 noted it was very or extremely challenging. Six school district and child welfare agency officials we interviewed also indicated that funding was a concern, and some noted that transporting youth to their school of origin can result in extensive additional costs (see text box).

Examples of Transportation Costs to Maintain Youth in Foster Care in Their School of Origin

Over a school year, officials from a local child welfare agency said it spent $155,000 to transport students in one school district.

According to officials at one school district, to transport one student, the school district had to hire a van at an estimated cost of up to $30,000 per year.

In one month, another school district reported paying over $4,000 to transport five students.

School district and child welfare officials said that they can rely on multiple funding streams—local, state, and/or federal—to cover these additional costs.
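To make the guidance's cost definition concrete, the sketch below computes the additional transportation cost for one student and divides it between a school district and a child welfare agency. The dollar amounts and the even split are hypothetical illustrations only; as described next, actual cost-sharing arrangements vary by state.

```python
# Hypothetical illustration of the joint federal guidance's definition of
# "additional transportation costs" and one possible cost-sharing split.
# All dollar amounts are made up for illustration.

def additional_cost(cost_to_school_of_origin: float,
                    cost_to_assigned_school: float) -> float:
    """Additional cost = cost of transporting the child to the school of
    origin minus what the district would otherwise spend transporting the
    student to their assigned school."""
    return cost_to_school_of_origin - cost_to_assigned_school

# Example: a rerouted bus or taxi costs $9,000 per year; the district would
# otherwise spend $1,500 per year on its regular bus route.
extra = additional_cost(9_000, 1_500)   # $7,500 in additional annual cost

# One possible arrangement: the district and child welfare agency split the
# additional cost evenly (states set their own cost-sharing rules).
district_share = child_welfare_share = extra / 2
print(f"Additional cost: ${extra:,.0f}; each agency pays ${district_share:,.0f}")
```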
Districts and local child welfare agencies reported that they sometimes split these costs, depending on their state's policies. (See fig. 9 in appendix II for state-specific cost-sharing requirements reported in our survey.) For example, in Arizona, one agency transports the child to school and the other transports the child home, and each pays for the cost of its one-way trip. To assist localities with funding additional transportation costs, nine SEAs said their state provides funding that partially or fully covers these costs. While educational and child welfare agencies may use federal funding through Title I or Title IV-E for the additional transportation costs, some SEA, school district, and child welfare agency officials we interviewed noted that they do not use these funds. Officials at a few school districts said they use Title I funding for other needs, while some child welfare agency officials explained that their agency does not use Title IV-E funds because they did not have state "matching" funding, did not understand how to use the funds to reimburse schools for their costs, or had some youth who are not Title IV-E eligible.

Ensuring Accurate Identification of Youth in Foster Care

Thirty-two SEA survey respondents reported that ensuring school districts can accurately identify youth in foster care is at least somewhat challenging (see fig. 4). School district officials we spoke with expressed similar concerns. Officials we interviewed in nine of 10 districts stated they are not consistently aware of which students in their district are in foster care, and seven explained that there is no systematic way for school districts to be notified when a child enters or leaves care. Similarly, officials in four local child welfare agencies said they have no systematic way to inform schools when youth in foster care leave care or when their status in foster care changes. Officials from two school districts also stated that their data systems have no way to indicate that a student is in foster care, so even if the child welfare agency notifies them of a youth's status, they may not easily track the information.

Officials from two school districts said not knowing the status of youth in foster care in their district impedes their ability to effectively implement the ESSA educational stability provisions. For example, one district official stated they would probably be transporting more youth to their school of origin if they knew which students were in foster care. In addition, two school district officials said that if they do not know which students are in foster care, they cannot provide additional supports that may be available to these youth, such as tutoring, financial assistance, or mental health services. The ability of school districts to accurately identify youth in foster care can also affect the accuracy of state and local report cards. Nine SEAs reported on our survey that they rely exclusively on school districts' identification of youth in foster care for their state report cards. Of those nine, seven reported that ensuring that school districts accurately identify these youth is a challenge, which may affect the accuracy of the additional report card data required by ESSA. Some states and localities we visited had different ways to inform school districts when a youth's foster care status changes, but officials noted varying degrees of consistency in notifying the districts of changes.
Officials at two state child welfare agencies we visited told us they require the person enrolling the youth in school to present an official document that shows the youth is in state custody; however, they said schools are not informed when a child leaves foster care. One county and one state we visited had electronic data sharing agreements between child welfare and educational agencies for the purposes of updating school district records when a child enters and leaves foster care. Specifically, in that county, once a child enters foster care under the custody of the county child welfare agency, the school district's database automatically receives pertinent information from the child welfare agency, according to officials. School and child welfare agency officials meet monthly to ensure data accuracy. In Georgia, officials from the state educational agency said they signed a data sharing agreement in spring 2018 with the state child welfare agency to allow information about youth in foster care to be provided to school districts. The previous data sharing agreement prevented the SEA from sharing the data with the school districts, according to officials. In Idaho (a state that participated in our discussion group), state officials said they ensure school districts are aware of youth in foster care by using an automated letter (see text box).

Idaho's iCARE System for Youth in Foster Care

When a youth enters foster care or changes placements, Idaho's iCARE system produces an automated letter that provides an initial communication from a child welfare social worker to the school district, SEA foster care points of contact, and the school principal. When the youth's school of origin is entered into the system, the letter automatically populates the email addresses of the appropriate school district point of contact, SEA point of contact, and school principal. The letter contains the social worker's initial best interest determination and indicates if the student will need transportation to attend their school of origin, which the school district point of contact is responsible for coordinating. The school district point of contact has three days to provide input on the best interest determination when school is in session and 14 days during the summer months. The school district foster care point of contact and the child welfare social worker both must sign off on the plan identified within the electronic letter.

Monitoring School Districts' Efforts to Implement ESSA Educational Stability Provisions

Under federal grant regulations, SEAs, which subgrant Title I funds to school districts, are required to conduct regular monitoring and oversight to ensure appropriate implementation of Title I by their school districts, and 43 SEA survey respondents reported that their states used one of the methods asked about in our survey to monitor how school districts implement at least one of the ESSA educational stability provisions. For example, over half (33) of SEAs reported that the Title I plans they receive from school districts include an assurance related to at least one of the ESSA educational stability provisions we asked about on the survey. More than two-thirds (36) of SEAs reported on our survey that effectively monitoring school districts' implementation of the provisions is a challenge (see fig. 5).
In their survey comments, eight SEA points of contact said limited state resources hinder their ability to ensure that the hundreds of school districts in their states properly execute the provisions. Officials we interviewed from all three SEAs in our site visits told us their states incorporate the educational stability provisions into their existing procedures for overseeing implementation of federal education programs. For example, SEA officials in Georgia told us that during one of their state reviews, they look for evidence of local agency collaboration, such as meeting agendas or emails. In Arizona, the SEA point of contact said he examines school district transportation procedures during on-site reviews. These on-site reviews occur for one-sixth of school districts in the state every year. (See table 3 in appendix II for more information on SEA monitoring of school districts.)

Ensuring Immediate Enrollment and Obtaining Records

While we did not ask on our survey about challenges related to immediate enrollment or obtaining records, seven state or local officials we spoke with noted difficulties with enrolling or obtaining records for students with disabilities who have individualized education programs, or students who previously attended juvenile justice or residential treatment facilities. Officials at a local child welfare agency and two school districts said that if an individualized education program is missing from a child's records, they cannot know which services or classes a child might need, and it may delay the child's enrollment in the school or require switching classes again. Officials from Georgia's SEA said they mitigate this challenge by providing school districts the option to share individualized education programs electronically, which enables other school districts that need the records to more easily obtain them.

Education Could Take Steps to Improve Access to Technical Assistance and Plans to Begin Monitoring of the ESSA Educational Stability Provisions

Education Provided Technical Assistance, At Times Collaborating with HHS, but Could Improve Access to Information

Education has provided technical assistance to states, at times in collaboration with HHS, to help states implement the ESSA educational stability provisions. Education's technical assistance included written guidance, webinars, and in-person meetings, according to Education officials.

Written guidance: Education and HHS jointly issued non-regulatory guidance on June 23, 2016, to help state and local educational agencies meet their obligations related to educational stability for youth in foster care under ESSA. On the same day, Education and HHS also issued a joint letter to chief state school officers and state child welfare directors that provided an overview of the ESSA educational stability provisions. Education sent an additional letter to chief state school officers on December 5, 2016, that provided information about the timelines for implementing the provisions. The letter also requested that states provide Education with their state foster care point of contact.

Webinars: Education and HHS hosted several webinars for state educational and child welfare agencies that addressed a number of issues related to implementation of the ESSA educational stability provisions.
In late summer 2016, Education and HHS hosted four webinars on the roles and responsibilities of educational and child welfare agency points of contact; best interest determinations and immediate enrollment; transportation; and effective collaboration. These webinars described the related ESSA requirements and featured selected states' approaches to implementing the provisions. The State Support Network, one of Education's technical assistance providers, facilitated another series of webinars that were offered in summer 2018 to address areas of implementation that states reported to be particularly problematic. HHS staff also participated in the webinar series, and topics included collaboration with child welfare agencies, data systems, transportation, and roles and responsibilities of points of contact.

In-person and other assistance: Education provided additional assistance to state educational agencies through an in-person meeting and continuously provides assistance upon request. Education and HHS jointly held a session on sharing data to support students in foster care during Education's Combined Federal Programs Meeting for SEA officials in December 2018 in Washington, DC. At this meeting, Education also facilitated a session during which foster care points of contact networked with each other and subject matter experts, shared resources, and discussed outstanding implementation challenges. In addition, Education officials told us that they assign each state a point of contact at Education, and states can request technical assistance at any time through their assigned contact. This contact can work with the appropriate offices within Education to provide information requested by states and can facilitate further technical assistance through the State Support Network. Education officials said they respond to questions from states, which generally ask about expectations and requirements for the ESSA educational stability provisions.

Thirty-seven SEAs reported on our survey that they would like additional federal assistance as they continue to implement the ESSA educational stability provisions. Our survey showed that most SEAs were interested in receiving additional guidance related to transportation cost sharing, transportation funding options, and arranging transportation; data privacy; and state monitoring of school districts' efforts to implement these provisions, among other topics (see fig. 6). (Also see fig. 10 in appendix II for all survey responses on these topics.) With respect to transportation issues, several state officials commented that they would like more information on how other states and localities are arranging and funding transportation. Regarding data privacy, a few other officials commented that they could use more information regarding privacy laws and what information can be shared across agencies. A few SEA officials noted that guidance on how they could monitor school district implementation would be useful. A majority of SEAs reported that opportunities for in-person and virtual meetings with a federal point of contact and their SEA and state child welfare agency counterparts, and a federally supported clearinghouse of information with sample documents from other states, would be moderately to extremely helpful (see fig. 7). (Also see fig. 11 in appendix II for all survey responses on this topic.)
State educational and child welfare officials we interviewed explained that in-person and virtual meetings are helpful because the meetings allow officials to ask the federal contact questions and to share and discuss issues with one another. Similarly, SEA officials in our discussion sessions said they would like federal agencies to organize more collaborative opportunities for SEA points of contact to interact with their peers to help identify best practices they can adapt in their state. Some states suggested Education could adopt methods it uses for other programs, such as the Education for Homeless Children and Youth program, to provide assistance and support to foster care points of contact, including facilitating regional phone calls and identifying a federal point of contact specific to foster care. According to Education officials, in June 2019 the agency selected a staff person to serve as the federal point of contact to work directly with SEA foster care points of contact, and they told us Education maintains a designated mailbox for all foster care-related correspondence (FosterCare@ed.gov). Education officials informed us that they plan to develop a community of practice for a small group of SEA foster care points of contact who will meet regularly for several months, which may facilitate more peer-to-peer interaction for a select number of states. Education plans to work with the Legal Center for Foster Care and Education to convene and facilitate the community of practice. According to Education officials, the community of practice will provide networking opportunities for participants to ask questions and obtain answers from their peers, and may include discussions of promising practices at the state and local level, among other areas. Officials said they will solicit interest from all SEAs about the opportunity to participate in the community of practice. However, they will limit the number of participants, depending on the level of interest, to 10 to 12 SEAs to promote discussion and sharing among states. Officials noted that if more states are interested in participating in the community of practice than they can accommodate, they will consider additional ways to support and share information with those additional states. Education officials also noted that they are exploring other types of technical assistance to facilitate more interaction and information exchange among states, such as a web portal where states can upload and share documents. Although Education is planning to develop a community of practice and is exploring other types of technical assistance, it may not have effective methods to reach all SEA points of contact to inform them of this assistance. In the course of our follow-up on our survey, we determined that 22 of the current SEA points of contact were missing from Education's email list. Education primarily disseminates information pertaining to the ESSA educational stability requirements to states through email. Twenty-three SEAs reported on our survey that they were not aware of webinars that Education offered in summer 2018. We discussed the email list with Education officials in June 2019, and they told us they had not conducted outreach to states to update the email list since they initially identified the SEA points of contact in 2016. Rather, officials said the email list was updated on an ad hoc basis, and Education depended on states to inform them when they wanted someone added to the email list.
Subsequent to that discussion, in response to a recommendation included in a draft of this report, which Education reviewed, Education officials told us they updated the email list in July and August 2019, and planned to update it quarterly moving forward. Education officials also acknowledged it could be useful to publicize the email list on Education's website. Education does not maintain information about its technical assistance webinars or other relevant materials in a centralized online location. Information relevant to implementing the ESSA educational stability provisions is located on multiple Education web pages, and the materials from the most recent 2018 webinars, including the recorded session and related sample documents shared by a number of states, are available only on a third-party website that is not linked from Education's website. In our survey, SEA points of contact reported that they are interested in receiving additional information from other states. Thirty-seven SEAs reported in our survey that a clearinghouse of information with sample documents from other states would be helpful, and 22 of these 37 reported that this would be extremely helpful. One SEA official commented that it would be useful to have a clearinghouse that could be shared with school districts and other relevant parties nationwide. Federal standards for internal control maintain that management should select appropriate methods of communication, such as providing hard copy or electronic documents or conducting face-to-face meetings, and should periodically evaluate the methods of communication in order to communicate quality information on a timely basis. Without creating and maintaining a centralized online location for SEAs to access all related information, Education cannot ensure that all SEAs have access to technical assistance and guidance that could help them implement the ESSA educational stability provisions. Education Plans to Begin Monitoring Implementation of the ESSA Educational Stability Provisions in Fall 2020 Education officials told us that in 2020, they expect to fully implement the monitoring protocols for reviewing how states are implementing the ESSA educational stability provisions. Education officials said they plan to test draft protocols as part of a pilot by fall 2019 to determine necessary revisions and expect the final protocols to be implemented by fall 2020. According to Education officials, once the protocols are implemented, they plan to use a risk assessment approach to determine which states to review each year, and anticipate reviewing approximately nine states each year, depending on staff and resources. As part of their reviews, Education officials told us they plan to visit two school districts in each state under review to assess how the selected states are implementing the ESSA requirements, and to determine whether districts are getting appropriate support from the states. According to the draft monitoring protocols, during its state reviews, Education plans to obtain information on the following areas related to educational stability: SEA collaboration with the child welfare agency, best interest determinations, immediate enrollment, SEA foster care point of contact, and school district points of contact and transportation procedures. Education reviewed states' plans for implementing Title I; however, Education officials said that the plans contain little information about the ESSA educational stability provisions.
To receive Title I funds, states are required to submit state plans to the Secretary of Education, and the Secretary is required to approve the state plans if they meet the requirements in the law. While state plans are required to describe the steps the SEA will take to collaborate with the state child welfare agency to ensure the educational stability of children in foster care, including various assurances, Education did not include, in the state plan template it developed for states, specific instructions about the information states should provide on these provisions. Conclusions Youth in foster care face enormous challenges in their everyday lives, and school can offer a stabilizing environment. Maintaining connections with teachers and friends, in addition to remaining in a familiar academic environment, can enhance the chances that a student is academically successful. However, many children in foster care are at higher risk of frequently changing schools, which can affect their academic achievement. ESSA made changes to the Title I program to help improve the educational stability of children in foster care. In the years since ESSA was enacted, SEAs and school districts have taken different approaches to implement its educational stability provisions, including collaborating with their child welfare agency counterparts. Most SEAs we surveyed reported common challenges, including staff turnover and assisting districts with arranging transportation, that can affect the successful implementation of the educational stability provisions. In addition, SEA officials are seeking more opportunities to understand how other states and localities have implemented the provisions and learn from their peers. Despite the assistance Education has provided to SEAs on a range of topics, the mechanisms Education uses to inform states of assistance are limited. The email list it uses to notify SEA foster care points of contact had not been systematically updated until July 2019, and resources on educational stability are not housed in a single online location. Without improvements in areas like these, states will not have access to all of the available resources that can help them improve the educational stability of youth in foster care, and ultimately, their academic success. Recommendation for Executive Action The Secretary of Education should develop an online clearinghouse of sample documents from states and localities that wish to share them, past webinar recordings and their related documents, and links to other relevant resources that all SEAs can access. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to Education and HHS for review and comment. Education provided written comments, which are reproduced in appendix III, as well as technical comments, which we incorporated as appropriate. HHS did not have comments. We also provided relevant excerpts to states we visited and incorporated their technical comments as appropriate. In its written comments, Education agreed with our recommendation to develop an online clearinghouse and noted actions it plans to take to implement it. Specifically, Education said that in fall 2019 its Office of Elementary and Secondary Education will restructure its entire website to better organize its information, and create a new web page to house all foster care-related information and resources.
Additionally, Education said this office will launch a virtual portal through which SEA foster care points of contact may collaborate and share resources. In addition, in a draft report sent to Education in August 2019, we included a recommendation to Education to update its foster care point of contact email list, and develop a process to update it at regular intervals. Education noted in its comment letter that it had updated its email list and that it will solicit updates to the email list on a quarterly basis, so we subsequently removed this recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Education and Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 617-788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology This report examines (1) the challenges states and selected local educational agencies face in implementing the requirements of the Every Student Succeeds Act (ESSA) related to educational stability for youth in foster care, and (2) how the Department of Education (Education) provided technical assistance and monitored states and localities to ensure compliance with these requirements, including collaborating with the Department of Health and Human Services (HHS). To address both objectives and obtain national information, we held three discussion groups with officials from state educational agencies and child welfare agencies and conducted a web-based survey of state educational agencies in the 50 states, the District of Columbia, and Puerto Rico. To obtain more in-depth information, we visited three states—Arizona, Georgia, and Ohio—where we interviewed officials from state and local educational agencies and child welfare agencies. We reviewed relevant federal laws and regulations, Education and HHS guidance to states, and other research publications. We also interviewed officials from Education and HHS's Administration for Children and Families, and other organizations that carry out efforts related to education and child welfare, including the Legal Center for Foster Care and Education and Casey Family Programs, regarding the provisions, federal requirements and guidance, and state and local implementation. State Educational and Child Welfare Agency Discussion Groups To learn about actions states have taken to implement the ESSA educational stability provisions and challenges they have encountered, we held three discussion groups, two with state educational agency (SEA) officials, and one with state child welfare agency officials, during a national meeting for SEA foster care points of contact and state child welfare agencies in Greensboro, North Carolina, in October 2018. To solicit participants for these groups, we asked the meeting organizers to forward an invitation we drafted to all individuals who registered for the meeting to participate in our discussion groups, and also allowed individuals to sign up once they arrived at the conference.
Meeting attendees self-selected to participate in the groups. Each of our discussion groups with SEA officials had seven participants, for a total of 14 state agency officials representing 14 states. Our discussion group of state child welfare agency officials had six participants representing five states. Discussion groups were guided by a GAO moderator using semi-structured interview protocols. These protocols included open-ended questions that encouraged participants to share their thoughts and experiences on implementing the ESSA educational stability provisions, including how they monitored local agencies, and whether any additional federal assistance is needed. To reach group consensus on the top challenges facing states as they implement the provisions, we used a nominal group technique. Officials from each state identified their state's top three implementation challenges. The group then created a list from those named challenges, and officials from each state used stickers to identify their top challenges from the list. Discussion groups are intended to generate in-depth information about the reasons for participants' attitudes on specific topics and to offer insights into their concerns about and support for an issue. They are not designed to (1) demonstrate the extent of a problem or generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. For these reasons, and because discussion group participants were self-selected volunteers, the results of our discussion groups are not generalizable. Survey of State Educational Agency Officials To learn about actions states have taken to implement the ESSA educational stability provisions and challenges they have encountered, we conducted a survey of SEA officials in the 50 states, the District of Columbia, and Puerto Rico. The survey was administered from January to March 2019, and we had a 98 percent response rate. The survey used a self-administered, web-based questionnaire, and state respondents received unique usernames and passwords. Our survey population was foster care points of contact at SEAs. We used multiple sources to create an initial list of points of contact, including a list provided by the Department of Education, SEA website pages related to foster care, and information from knowledgeable experts in the field. We reached out to each point of contact to ask them to confirm that they were the foster care point of contact for their state or to identify the appropriate point of contact. We instructed respondents to consult with others who were familiar with their state's implementation of the provisions, if doing so would provide more accurate responses. Our survey included 20 fixed-choice and open-ended questions. We asked how SEAs collaborated with the state child welfare agency, how they assisted local educational and/or child welfare agencies, what challenges they encountered, and what assistance has been and would be helpful from the Department of Education in implementing the provisions. To draft the closed-ended questions and answer choices on the survey, we drew from recommended practices suggested in HHS and Education's joint non-regulatory guidance to states, information shared during webinars sponsored by HHS and Education, and interviews with stakeholders, including our discussion groups with state educational and child welfare agencies.
A draft of the survey questionnaire was reviewed by officials at Education, a knowledgeable stakeholder organization, and an independent GAO survey professional for completeness and accuracy. We made revisions based on their comments. We conducted three pretests—one by phone and two in-person—with SEA foster care points of contact from three different states to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. To obtain our 98 percent response rate (51 out of 52 SEAs), we made multiple follow-up contacts by email and phone from January to March 2019 with points of contact who had not yet completed the survey. While 51 surveyed officials affirmatively checked "completed" at the end of the web-based survey, not all officials responded to every question or the sub-parts of every question. We conducted additional follow-up with a small number of respondents to verify key responses. Because this was not a sample survey, the survey has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, unwanted variability can result from differences in how a particular question is interpreted, the sources of information available to respondents, or how data from respondents are processed and analyzed. We tried to minimize these factors through our reviews, pretests, and follow-up efforts. In addition, the web-based survey allowed SEA foster care points of contact to enter their responses directly into an electronic instrument, which created an automatic record for each state in a data file. By using the electronic instrument, we eliminated the potential errors associated with a manual data entry process. Site Visits to Selected States To learn about actions states and localities have taken to implement the ESSA educational stability provisions and challenges they have encountered, we conducted site visits to three states to obtain information from state and local educational agency officials, state and local child welfare officials, foster parents, and current and former youth in foster care. We selected the three states—Arizona, Georgia, and Ohio—to represent a mix of factors, including type of child welfare agency (state or county administered), number of children in foster care, number of school districts, geographic dispersion, and variety in types of school districts (urban, suburban, rural). In each state we visited an urban, suburban, and rural school district, where we met with the school district officials responsible for implementing the ESSA educational stability provisions and their primary child welfare agency counterparts. We also met with state educational and child welfare agency officials. We used a semi-structured interview protocol for these meetings. We held discussion groups with a total of 13 youth in foster care or formerly in foster care in three states, and in two states, we held discussion groups with a total of 14 foster parents, to obtain their perspectives on implementation of the provisions and educational stability generally. Although we cannot generalize our findings beyond these states and localities, these visits provided us with illustrative examples of how states and localities are implementing the ESSA educational stability requirements.
We conducted this performance audit from June 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Additional Survey Data [Appendix II presents survey data tables and figures, not reproduced here, including responses to the question "How much of a challenge, if at all, is each of the following items in implementing the ESSA educational stability provisions?"; responses about the helpfulness of assistance such as best interest determination documents (for example, meeting documentation templates, questions to consider during the meeting, or sample notices to inform parties of the decision) and sample memorandums of understanding/agreement for data sharing between school districts and local child welfare agencies; and data on how states monitor whether transportation to the school of origin, when in a child's best interest, will be provided, arranged, and funded for the duration of the time in foster care.] The term "ESSA educational stability provisions" refers to the amendments made by the Every Student Succeeds Act (ESSA) to Title I, Part A of the Elementary and Secondary Education Act of 1965 that are related to the educational stability of youth in foster care. These provisions have been codified at 20 U.S.C. §§ 6311(g)(1)(E), 6311(h)(1)(C), and 6312(c)(5). The monitoring data include states that reported that they solely respond when alerted to issues and do not conduct any other systematic monitoring activities. Specifically, nine states reported responding when alerted to issues regarding the provisions on best interest determinations and immediate enrollment, and did not report conducting any other monitoring activities. Similarly, 14 states reported solely responding when alerted to issues regarding new enrolling schools immediately contacting schools of origin to obtain relevant academic and other records, and did not report conducting any other monitoring activities. Finally, seven states reported responding when alerted to issues related to the provision on transportation procedures, and did not report conducting any other monitoring activities or did not know if their state monitors LEAs in other ways. Appendix III: Comments from the U.S. Department of Education Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following individuals made important contributions to this report: Elizabeth Morrison (Assistant Director), Kate Blumenreich (Analyst-in-Charge), Aimee Elivert, and Kelsey Kreider. Also contributing to this report were Steven Campbell, William Chatlos, Sarah Cornetto, Holly Dye, Jill Lacey, Jessica Orr, Catherine Roark, and Curtia Taylor.
Why GAO Did This Study Roughly 270,000 school-aged youth were in foster care at the end of fiscal year 2017. Youth in foster care may change schools frequently, which can negatively affect their academic achievement. ESSA, enacted in 2015, reauthorized the Elementary and Secondary Education Act of 1965 and included provisions to improve educational stability for youth in foster care. These included requiring state educational agencies to ensure youth placed into foster care stay in their current school, unless it is not in their best interest to do so. GAO was asked to review implementation of these provisions. This report examines (1) the challenges SEAs and selected school districts face in implementing the ESSA educational stability provisions for youth in foster care, and (2) how Education provides technical assistance and monitors state implementation efforts. GAO surveyed SEA foster care points of contact in the 50 states, the District of Columbia, and Puerto Rico, and all but one responded. In addition to interviewing federal officials, GAO interviewed selected state and local educational and child welfare agency officials, and held discussion groups with foster youth and parents, in three states selected by number of youth in foster care, among other factors. GAO also held discussion groups with officials from 14 SEAs and 5 state child welfare agencies, and reviewed relevant federal laws, regulations, guidance, and technical assistance. What GAO Found State educational agencies (SEAs) reported several challenges in implementing the provisions in the Every Student Succeeds Act (ESSA) related to educational stability for youth in foster care. In their responses to GAO's national survey, SEAs reported challenges including high turnover among local educational and child welfare agency officials and difficulty identifying and arranging transportation to schools for students (see figure). Turnover of local staff can result in the loss of knowledge and experience needed to implement the provisions, according to SEA and local officials we interviewed. Regarding transportation, ESSA requires school districts to work with child welfare agencies to provide and fund transportation so that youth in foster care can remain in their current school when it is in their best interest. Six school district and child welfare agency officials we interviewed indicated that funding was a concern, and some noted that transporting youth to their current school can result in extensive costs. The Department of Education (Education) provided technical assistance in the form of written guidance, webinars, and in-person meetings to help states implement the ESSA educational stability provisions. Education officials said they also plan to monitor state implementation of the provisions. Most SEA officials reported in GAO's survey that they would like additional assistance and more opportunities to interact with other state officials. Education plans to convene a community of practice for several states in which participants will meet regularly for several months, and is exploring other technical assistance efforts. To share information about implementing the ESSA educational stability provisions, Education maintains an email address list of SEA foster care points of contact. GAO found that the list was inaccurate and not regularly updated. Education updated the list in late summer 2019 and plans to do so quarterly. Education also provides information online, but the information is scattered across different web pages.
Twenty-two SEA officials reported on GAO's survey that a clearinghouse of information would be extremely helpful. Federal standards for internal control require agencies to externally communicate necessary information in a manner that enables them to achieve their objectives. Without a dedicated web page about implementing the provisions, states may not receive the assistance they need to improve educational stability for youth in foster care. What GAO Recommends GAO recommends that Education develop an online clearinghouse of resources. Education agreed with GAO's recommendation.
Background Roles and Responsibilities U.S. Customs and Border Protection (CBP) facilitates trade and travel and enforces immigration and customs laws at the nation's 167 land border crossings along the northern and southern borders. CBP's Office of Field Operations (OFO) is responsible for inspecting and processing pedestrians, passengers, cargo, and other items at all land border crossings. OFO has 20 field offices nationwide, nine of which oversee the operations of all 110 land ports of entry—which may consist of one or more land border crossings—within their designated areas of responsibility. CBP's Office of Facilities and Asset Management (OFAM) manages CBP's portfolio of owned and leased real property, including all 167 land border crossings. OFAM is responsible for capital planning at all land border crossings and for prioritizing capital projects across its portfolio based on need. The General Services Administration (GSA) owns 101 (60 percent) of the 167 land border crossings, partially owns three, and leases 19 (11 percent). CBP owns 40 land border crossings (24 percent) and leases one directly from private owners. The National Park Service owns two land border crossings, and the U.S. Forest Service owns one. For the 101 land border crossings that GSA owns, it has occupancy agreements with CBP, which is the principal user of the facilities. GSA has responsibilities related to capital planning and construction at all 101 GSA-owned land border crossings. Since CBP's operations depend heavily on the condition and functionality of infrastructure at land border crossings, GSA works closely with OFAM to plan, design, construct, and implement capital infrastructure improvements to accommodate ever-growing trade and travel at land border crossings. GSA-owned and leased land border crossings consist of large, medium, and small crossings along the northern and southern borders. Land border crossings owned by other federal agencies—including CBP—tend to be small by comparison and are typically situated in remote locations along the northern border. See appendix I for more information on the nation's portfolio of land border crossings. Infrastructure at U.S. Land Border Crossings Of the 167 land border crossings at which CBP operates, 120 are located along the northern border and 47 are located along the southern border. Land border crossings vary across the northern and southern borders, but are generally designed to process some combination of pedestrian, passenger vehicle, and commercial traffic with separate facilities for each mode. Infrastructure and layout at each land border crossing may vary depending on a variety of factors, including the modes of traffic CBP processes at that location, traffic volume, local climate, and area-specific threats, among others. Many large land border crossings, including GSA's Otay Mesa land border crossing in California, are designed to process pedestrians, passenger vehicles, and commercial traffic and are equipped with distinct infrastructure for each mode of traffic. Other land border crossings are designed to process a single mode of traffic, such as San Luis II in Arizona, which processes only commercial trucks. In general, CBP's inspection process at land border crossings follows a standard sequence that includes separate areas designated for preprimary inspection, primary inspection, and secondary inspection for each mode of traffic and a main building that houses administrative and operational support activities, as described below.
Preprimary inspection: Upon proceeding to cross the border into the United States, pedestrians and vehicles enter the land border crossing and are directed to preprimary inspection, where initial screening takes place. Depending on availability, CBP may deploy officers with canines to walk among the vehicles waiting in the preprimary area to reach an inspection booth. Overhead signage may be present to help CBP actively manage traffic by directing travelers to different lanes according to the type of travel documents they have. For example, CBP may use signs to designate specific lanes for travelers with Radio Frequency Identification (RFID) or other machine-readable documents ("Ready lanes") or for trusted travelers. Infrastructure in the pedestrian preprimary area often includes a space for travelers to queue prior to entering primary inspection. Infrastructure in the preprimary area for passenger vehicle and commercial traffic includes lanes for traffic to queue and radiation portal monitors that are designed to detect radiation and help prevent the smuggling of nuclear material into the United States. The passenger vehicle preprimary area also often includes screening technologies, including license plate readers and RFID readers to capture information on vehicles and RFID-ready travel documents such as passport cards and border crossing cards. At some land border crossings, CBP may use RFID readers in the commercial preprimary inspection area to electronically transmit identification, manifest, and other information to CBP officers before trucks enter primary inspection. See figure 1 for examples of preprimary infrastructure. Primary inspection: After preprimary inspection, pedestrians enter the primary inspection area, typically located within the main building. Infrastructure for pedestrian primary inspection may include one or more lanes and officer booths where CBP officers review traveler information. Passenger vehicles and commercial traffic enter a primary inspection area where CBP officers verify passenger identification and perform an initial inspection of the vehicle, which may include a visual inspection of vehicles' exterior and interior. Infrastructure supporting vehicular primary inspection includes one or more lanes and officer booths. Each booth may be equipped with an HVAC system to keep dangerous vehicle emissions and other fumes from entering the workspace and to maintain a safe work environment during extreme heat and cold. Primary inspection booths are designed to be bullet and blast resistant to ensure officer safety. See figure 2 for examples of primary inspection infrastructure. Secondary inspection: If a pedestrian, driver, passenger, or vehicle gives reason for suspicion or if the CBP officer is unable to complete the inspection at primary inspection for any reason, the officer may refer them to secondary inspection. Infrastructure in the pedestrian secondary inspection area is typically located within the main building and may include a processing area and a separate secure room where CBP officers can perform more thorough inspections for travelers suspected of criminal activity. Infrastructure in the passenger vehicle secondary inspection area may include work areas where CBP officers can search vehicles, vehicle lifts, and non-intrusive inspection x-ray technologies to identify contraband hidden in concealed compartments. Passengers may wait in the pedestrian secondary inspection area while CBP officers inspect vehicles.
Infrastructure in the commercial secondary inspection area may include a loading dock where CBP officers can manually examine cargo and use x-ray technologies to identify hidden contraband. In addition, CBP uses canines at some land border crossings to conduct secondary inspections in the pedestrian, passenger, and commercial environments. See figure 3 for examples of secondary inspection infrastructure. Main buildings: Land border crossings may have facilities that support various administrative and operational activities. Infrastructure at CBP's main buildings may include agricultural labs, commercial facilities, traveler processing areas, holding rooms, staff work areas, and locker rooms, among other infrastructure. See figure 4 for examples of main building infrastructure. Outbound infrastructure: Pedestrians and vehicles leaving the United States at land border crossings exit through the outbound area. Outbound infrastructure in the passenger vehicle, bus, commercial, and pedestrian area typically consists of one or more exit lanes and may also include inspection booths, inspection technologies, a secondary inspection area, and support facilities, among other features, to process traffic leaving the United States. See figure 5 for examples of outbound infrastructure. Figure 6 depicts a generic layout of a land border crossing with all modes of traffic. Travel, Trade, and Law Enforcement at U.S. Land Border Crossings Travel: The volume of traffic at land border crossings varies across the northern and southern borders. At the nation's busiest land border crossing—San Ysidro in California—CBP processed over 32 million entries in 2017. Conversely, at the Whitlash land border crossing in Montana—one of the smaller land border crossings—CBP processed 1,339 entries that same year. In total, CBP processed over 252 million traveler entries in 2017, including 43 million pedestrian entries and 209 million passengers traveling to the United States in over 104 million passenger vehicles; CBP also processed 256,000 buses and nearly 12 million commercial truck crossings. Figure 7 shows the largest northern and southern border U.S. land ports of entry by volume in 2017. Trade: In 2017, CBP processed and inspected nearly $721 billion in traded goods (imports and exports) through U.S. land ports of entry. As shown in figure 8, trade in goods transported via commercial truck through the largest northern and southern border land ports of entry affected states across the country. Law Enforcement: Land border crossings serve a critical role in enabling CBP's enforcement of immigration and customs laws. According to CBP, its officers encountered nearly 139,000 inadmissible individuals at land border crossings in fiscal year 2018. According to CBP, the lack of required travel documents, such as a visa, was the most common reason CBP officers determined individuals to be inadmissible. Further, according to the Drug Enforcement Administration, the nation's land border crossings remain a target for exploitation by transnational criminal organizations. Specifically, the Drug Enforcement Administration's 2018 National Drug Threat Assessment found that the most common smuggling method used by Mexican transnational criminal organizations involves transporting illicit drugs through U.S. land border crossings in passenger vehicles with concealed compartments or commingled with legitimate goods on tractor-trailers.
In fiscal year 2018, CBP seized 363,000 pounds of drugs at land border crossings, including approximately 265,000 pounds of marijuana, 70,000 pounds of methamphetamine, 20,000 pounds of cocaine, and 1,400 pounds of fentanyl. CBP and GSA Capital Planning and Project Development Process As part of its capital planning process, CBP is responsible for identifying land border crossing infrastructure needs and prioritizing capital projects across its portfolio of 167 land border crossings. At CBP-owned land border crossings, CBP generally funds these projects and hires a contractor to plan and execute capital infrastructure projects. At GSA-leased land border crossings, CBP and GSA typically work with the property owner to plan and execute capital projects. The owner of the land border crossing funds these projects, while CBP funds any alterations needed to fulfill its mission. At GSA-owned land border crossings, CBP typically works with GSA to complete a feasibility study and uses this information to prioritize infrastructure projects. According to GSA policy documents, feasibility studies are intended to determine the technical and economic viability of a project, define the project budget and scope, and establish an initial project design. GSA and CBP are to further refine land border crossing capital projects with a program development study, which updates project plans and budgets and provides the necessary information to pursue project funding. Each year, the Office of Management and Budget reviews each project included in GSA's budget request, and Congress authorizes projects and appropriates project funds as part of the federal budget cycle. GSA typically includes CBP's top-priority land border crossing capital infrastructure projects in its annual budget submission. GSA may pursue project funding for design and construction in separate budget requests or in a single appropriation, depending on the contract vehicle used. Once funded, GSA hires one or more contractors to design and execute the project. Figure 9 identifies funding for CBP and GSA-owned land border crossings in fiscal years 2009 through 2019. CBP defines its general land border infrastructure requirements in its Land Port of Entry Design Standards, which describe various infrastructure at land border crossings and detail how this infrastructure should operate. According to CBP, it updates these standards every few years to ensure the standards reflect CBP's changing mission, including new technologies and infrastructure requirements. CBP Identified Various Infrastructure Constraints at Land Border Crossings, but Does Not Have Complete Information on Infrastructure Condition at All Crossings CBP's Reported Infrastructure Constraints at Land Border Crossings Include Limited Capacity and Technology Challenges CBP officers we spoke with at 16 land border crossings and OFO field offices that oversee land border crossings reported examples of land border crossing infrastructure constraints they face at each stage of the inspection process, including preprimary, primary, and secondary inspections. CBP relies on infrastructure to fulfill its mission at land border crossings. Specifically, according to CBP, well-functioning infrastructure is a critical factor in its ability to effectively screen persons and cargo, and facilitate cross-border travel and trade. For example, CBP officials stated that the number of operational inspection lanes is a key variable that affects traffic wait times.
These officers also identified land border crossing infrastructure challenges with office space and port security. Examples of infrastructure constraints identified by CBP officers include: Limited space in the preprimary inspection area. According to CBP officers, land border crossings with primary inspection booths located in close proximity to the border with Mexico have restricted space for CBP to conduct operations in the preprimary area. Figure 10 below shows a photo of restricted space in the preprimary area at a land border crossing on the southern border. Non-functioning screening technology in the preprimary inspection area. CBP officers stated that vehicle inspection technologies may not always function correctly. For example, at a land border crossing on the southern border, license plate readers and radiation portal monitors are inoperable at least once a week during summer months due to overheating, according to CBP officials. Temperatures can exceed 120 degrees Fahrenheit, and the technology is exposed to the sun. Figure 11 shows license plate readers and radiation portal monitors in the preprimary area exposed to the sun at a land border crossing on the southern border. Officer inspection booths in the primary inspection area in need of repair. CBP officers stated that officer inspection booths may be inadequately cooled or heated, resulting in officers rotating out of the booths more frequently for health and safety reasons. At one land border crossing, officers stated that the booth windows provide limited visibility since the old bullet-resistant glazing has deteriorated and clouds officers' view. At another land border crossing we visited, we observed that the doors on the primary inspection booths do not have working locks. Officers stated that, as a result, when the land border crossing closes overnight they are unable to secure the booths or the computer equipment inside. Inadequate holding facilities in the secondary inspection area. Several land border crossings we visited had holding rooms that did not meet current CBP safety requirements, according to CBP officers. Officers at two land border crossings stated that safety concerns included inadequate ventilation. Officers at another land border crossing identified exposed wiring in a holding room as a safety hazard. Other land border crossings we visited did not have holding rooms, and officers stated they detain individuals in the lobby of the administration building as a result. Figure 12 shows examples of holding facilities at land border crossings on the northern and southern borders that CBP officials identified as not meeting CBP requirements. Lack of availability of non-intrusive inspection (NII) technology in the secondary inspection area. CBP officers stated that the availability of NII technology improves their ability to conduct inspections. However, NII technology is not always available because it may need maintenance or repair, or CBP may share the technology with multiple land border crossings. Officers stated they may perform manual inspections of vehicles when NII technology is not available, which they noted can be less effective. Inadequate facilities for canine inspection in the secondary inspection area. CBP officers provided examples of limited facilities for inspection canines. For example, officers at one land border crossing stated they do not have a dedicated area to exercise inspection canines.
Officers at another land border crossing stated they recently converted a storage closet into a climate-controlled canine kennel within the secondary inspection building. Previously, the CBP officers at this land border crossing kept the canines in running vehicles with air conditioning to keep them cool. Impeded traffic flow within the land border crossing. CBP officers identified challenges with facilitating traffic flow within the land border crossing. For example, the layout at a commercial land border crossing on the southern border impedes the flow of traffic because it requires commercial trucks to make a series of sharp turns as they travel through the border crossing. In addition, commercial traffic referred for secondary inspection must cut across four primary egress lanes to enter and exit the secondary inspection area. According to CBP officers, commercial trucks proceeding toward the border crossing exit may need to stop or reverse direction to create space for the trucks entering or exiting the secondary inspection area, which creates delays in processing commercial traffic. Figure 13 shows an aerial view of a land border crossing with a diagram of where CBP officers identified that the land border crossing layout impedes traffic flow. Insufficient capacity to accommodate the volume of traffic. CBP officers stated that the number of travelers can exceed the capacity of the facility. For example, CBP officers stated that an insufficient number of inspection lanes can result in lengthy wait times for travelers. Limited administrative space. CBP officers stated that insufficient administrative office space can be a challenge at land border crossings. For example, one land border crossing we visited did not have sufficient space for officer lockers and, as a result, placed some lockers in the contraband seizure room. Figure 14 shows lockers located in the contraband seizure room at a land border crossing on the northern border due to insufficient administrative space. Port security limitations. CBP officers also described challenges with land border crossing security. For example, officers stated the lack of measures to prevent travelers from exiting the crossing without authorization, such as vehicle barriers and security gates, impedes CBP's ability to stop drivers from fleeing the land border crossing and entering the United States without inspection. Figure 15 shows exit lanes constructed with temporary barriers to control the flow of traffic leaving the land border crossing and entering the United States. Lack of inspection facilities for outbound traffic. CBP officers at land border crossings without facilities to inspect outbound traffic can face difficulties when inspecting traffic exiting the United States. For example, at one land border crossing without outbound inspection facilities, officials stated they park CBP vehicles in the outbound traffic lanes to slow traffic so that CBP officers can stop and inspect vehicles exiting the United States. CBP Has Limited Information on the Current Condition of Infrastructure across Land Border Crossings CBP Has Assessed Facility Conditions at Some but Not All Land Border Crossings CBP collects information on the condition of infrastructure at some land border crossings through contracted Facility Condition Assessments (FCA), but has not assessed conditions at all land border crossings. FCAs are engineering inspections that evaluate the condition of the facility and identify repair and improvement needs.
The output of an FCA is a report that describes infrastructure deficiencies at a facility and represents the condition of the land border crossing infrastructure at the time of the FCA. From 2016 through 2018, CBP and GSA assessed the condition of infrastructure at 95 of the 167 land border crossings. As of December 2018, CBP had conducted FCAs at 74 of the 167 land border crossings within the previous three years. In addition, according to CBP officials, in 2016 GSA funded and conducted Building Engineering Reviews at 21 land border crossings in response to conversations between CBP and GSA on how to improve GSA service delivery at land border crossings. CBP officials stated they use facility condition information from the 2016 Building Engineering Reviews because they contain information similar to what CBP collects through an FCA. According to GSA officials, GSA now rarely conducts Building Engineering Reviews because they are costly and their data quickly become obsolete. GSA now uses other tools to assess infrastructure condition and GSA officials were not aware of any reviews at land border crossings since 2016. See table 1 for a breakdown of the land border crossings that CBP and GSA have assessed. According to the assessments, the condition of infrastructure varies across land border crossings. The facility condition index—the ratio of the costs to correct facility infrastructure deficiencies to the total replacement value of the facility—ranges from 0 percent to 69 percent across the 95 FCAs and Building Engineering Reviews conducted between 2016 and 2018. These assessments identified approximately $140 million in estimated infrastructure deficiencies and the average facility condition index is 16 percent. See table 2 for the distribution of facility condition indices across land border crossings by ownership type. See appendix I for facility condition index scores across CBP’s land border crossing portfolio. CBP began conducting FCAs at CBP-owned land border crossings in 2008. OFAM officials stated they set a goal of conducting FCAs at each CBP-owned land border crossing on a three-year cycle, but have not always been able to do so due to resource constraints. Our analysis identified that CBP conducted FCAs at only four of the 40 CBP-owned land border crossings over three years—2016 to 2018—when its goal was to have conducted FCAs at all 40 facilities over this time frame (see table 1 above). CBP also began conducting FCAs at GSA-owned facilities in 2016, and at GSA-leased facilities in 2017. According to CBP officials, they plan to conduct several FCAs at selected GSA-owned facilities each year to obtain information on the condition of infrastructure at these facilities, though there is no required interval at which they must assess these facilities. CBP officials stated they prioritize GSA land border crossings in need of capital investment when selecting which facilities to assess. DHS Directive 119-02-004 “DHS Real Property Facility Condition Assessment” instructs each DHS component—including CBP—to implement and maintain a program to ensure that the condition of real property is assessed every three years and updated each fiscal year through FCAs beginning in fiscal year 2018. The Directive applies to land border crossings owned by CBP and is intended to ensure that CBP collects information on the condition of infrastructure across these facilities. 
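To make the arithmetic behind these measures concrete, the sketch below computes a facility condition index from the definition given above and the minimum annual assessment throughput that a three-year cycle implies for CBP's 40 owned crossings. It is a minimal illustration only: the facility names, dates, and dollar figures are hypothetical and are not drawn from CBP or GSA data.

```python
# Minimal illustration of the facility condition index (FCI) definition
# and three-year assessment-cycle arithmetic discussed above. All
# facility names, dates, and dollar amounts are hypothetical.
import math
from datetime import date

def facility_condition_index(deficiency_cost, replacement_value):
    """FCI: cost to correct deficiencies as a percentage of replacement value."""
    return 100.0 * deficiency_cost / replacement_value

# A crossing with $1.6 million in identified deficiencies and a
# $10 million replacement value has an FCI of 16 percent, matching
# the portfolio average reported above.
print(facility_condition_index(1_600_000, 10_000_000))  # 16.0

# Keeping 40 CBP-owned crossings on a three-year cycle requires
# assessing at least ceil(40 / 3) = 14 facilities per year.
CYCLE_YEARS = 3
print(math.ceil(40 / CYCLE_YEARS))  # 14

# Flag facilities whose most recent assessment is older than the cycle.
last_fca = {"Crossing A": date(2015, 5, 1),
            "Crossing B": date(2018, 9, 15),
            "Crossing C": date(2013, 4, 20)}
as_of = date(2019, 1, 1)
overdue = sorted(name for name, assessed in last_fca.items()
                 if (as_of - assessed).days > CYCLE_YEARS * 365)
print(overdue)  # ['Crossing A', 'Crossing C']
```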
Although CBP has a goal of conducting FCAs at CBP-owned land border crossings every three years, it has not met this goal in recent years, as CBP assessed only four of the 40 land border crossings from 2016 through 2018. According to CBP officials, FCAs older than three years may not accurately reflect the current condition of infrastructure at land border crossings. OFAM officials said they have not developed a plan to ensure that CBP implements its program consistent with the Directive by conducting FCAs on a three-year cycle going forward, citing limited resources to conduct the assessments. Specifically, CBP officials stated that CBP has not been able to fully fund the FCA program due to other competing facility priorities. However, developing a plan that accounts for the new requirements under the Directive could assist CBP in planning funding needs for the FCA program. Further, developing and implementing a plan to ensure CBP executes its FCA program consistent with Directive 119-02-004 would assist CBP in making resource decisions for this program. Implementing its FCA program consistent with DHS Directive 119-02-004 would enable CBP to collect more complete and current information on the condition of infrastructure at land border crossings it owns. CBP and GSA Have Not Routinely Shared Information with Each Other about Land Border Crossing Facility Conditions CBP and GSA conduct separate assessments of facility conditions at GSA-owned land border crossings; however, they do not routinely share or use the results of each other's efforts to inform their assessments of facility condition. More specifically, separate from CBP's process for assessing facility condition, GSA uses its Building Assessment Tool to assess the condition of infrastructure across its entire real property portfolio, including land border crossings. This process is intended to assist GSA in estimating its future costs for repairing and maintaining the buildings in its portfolio. Although the CBP FCA and the GSA Building Assessment Tool both assess elements of facility condition, these assessments have different methodologies, scopes, and purposes. We reviewed a 2018 CBP comparative analysis of the FCAs and Building Assessment Tool processes. CBP's analysis showed that FCAs are detailed assessments of all building systems that CBP uses at a land border crossing. According to CBP officials, CBP uses FCAs to collect information on the condition of infrastructure at land border crossings and to inform land border crossing capital infrastructure projects. In comparison, GSA's Building Assessment Tool is a standardized assessment used across GSA's federal real property portfolio to identify, plan for, and prioritize repair and maintenance needs across GSA properties. As a result, while the two types of assessments may be related in some aspects, officials from each agency stated they could not use the other's facility assessment in place of their own. According to GSA officials, GSA staff assessing land border crossing infrastructure condition are not required to consult with the CBP officials who operate the port or to review any existing CBP FCAs. CBP provides GSA with pre-assessment questionnaires prior to conducting FCAs at GSA-owned land border crossings. These questionnaires inquire about available GSA information on facility condition.
However, CBP officials stated they do not specifically request GSA Building Assessment Tool data, and as a result, have not generally received these data prior to conducting an FCA. GSA officials stated that CBP FCAs and GSA Building Assessment Tool assessments differ in scope, and as a result, GSA does not use FCAs in place of its Building Assessment Tool assessments. However, FCAs identify infrastructure needs at land border crossings, and the results could provide GSA with an understanding of infrastructure needs identified by CBP at land border crossings. Likewise, GSA's Building Assessment Tool is used to identify infrastructure in need of repair and could provide CBP with an understanding of infrastructure needs identified by GSA. We have previously identified key practices for collaboration among federal agencies. Specifically, agencies can enhance and sustain their collaborative efforts by identifying and addressing needs by leveraging resources. According to CBP officials, improving information sharing on facility condition could help ensure that both CBP's and GSA's assessments are as accurate and complete as possible. Moreover, using each other's facility condition information could enable CBP and GSA to improve the accuracy and completeness of their respective assessments of facility condition at land border crossings. CBP Does Not Maintain Reliable Information on the Current Condition of Land Border Crossing Infrastructure, but Is Taking Steps to Improve Its Reliability CBP uses a software system called TRIRIGA to manage its real property asset portfolio, but information in this system is not fully reliable. Among other functions, CBP uses TRIRIGA to track infrastructure needs and the condition of facilities at land border crossings. CBP identifies infrastructure needs through FCAs and records these data in TRIRIGA. CBP also identifies additional infrastructure needs as they arise and records these data in TRIRIGA. For example, an infrastructure need may arise at a building and be recorded in TRIRIGA in the months following a CBP FCA. CBP uses TRIRIGA data to calculate a score reflecting the overall current condition of infrastructure at a land border crossing. CBP uses this condition score to inform internal planning and prioritization of maintenance and repair projects at the local level, according to CBP officials. In addition, CBP's goals for facility condition data in TRIRIGA include making facility condition information available in real time, using TRIRIGA as the starting point for responses to data calls and reporting, and using data in the system for more efficient planning and decision making. However, according to CBP officials, land border crossing facility condition data in TRIRIGA have not been consistently reliable because some data on infrastructure needs are duplicative, out of date, or incomplete. Duplicate Data: CBP officials stated that in the past, OFAM officials responsible for entering infrastructure needs into TRIRIGA created duplicate entries in some instances. For example, OFAM officials have identified, and entered into TRIRIGA, infrastructure needs at land border crossings that had already been identified and entered in the past. As a result, TRIRIGA double-counted the costs associated with these duplicate infrastructure needs, which affected the reliability of the facility condition score for the associated land border crossing. According to OFAM officials, they have taken several steps to improve the TRIRIGA data entry processes; a simplified illustration of the double-counting effect follows.
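The sketch below shows, in simplified form, how a duplicate deficiency record inflates the cost total that feeds a facility condition score, and how a key-based deduplication pass avoids the double-counting. The record fields and dollar figures are hypothetical and do not reflect CBP's actual TRIRIGA data model.

```python
# Simplified illustration of duplicate deficiency records double-counting
# costs, and a key-based deduplication pass. Record fields and dollar
# figures are hypothetical, not CBP's actual TRIRIGA data model.

def total_deficiency_cost(records):
    """Sum the estimated repair cost across deficiency records."""
    return sum(r["cost"] for r in records)

def deduplicate(records):
    """Keep one record per (building, system, description) key."""
    unique = {}
    for r in records:
        key = (r["building"], r["system"], r["description"])
        unique.setdefault(key, r)
    return list(unique.values())

records = [
    {"building": "Main", "system": "Roof", "description": "Membrane failure", "cost": 800_000},
    # The same deficiency re-entered after a later assessment:
    {"building": "Main", "system": "Roof", "description": "Membrane failure", "cost": 800_000},
    {"building": "Main", "system": "HVAC", "description": "Compressor at end of life", "cost": 650_000},
]

print(total_deficiency_cost(records))               # 2250000 (roof counted twice)
print(total_deficiency_cost(deduplicate(records)))  # 1450000
```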
During the course of our review, OFAM officials identified internal confusion regarding who had the authority to remove infrastructure needs from TRIRIGA. In response, in April 2019 OFAM developed new guidelines clarifying roles and responsibilities for accurately entering FCA data and removing infrastructure needs from TRIRIGA. OFAM officials stated they expect this process to avoid duplicative data entry in the future. Further, as described earlier, by conducting FCAs for each CBP-owned land border crossing every three years, updating them annually consistent with DHS Directive 119-02-004, and then entering the results into TRIRIGA in accordance with the new guidelines for reviewing existing infrastructure needs and removing them as needed, CBP would be positioned to more frequently review and validate these data in the system on an ongoing basis.

Out of Date Data: Officials stated that FCA data for some land border crossings in TRIRIGA originate from as early as 2013, the last time CBP conducted an FCA at those border crossings. As a result, TRIRIGA does not accurately reflect the current condition of these facilities. Historically, CBP has updated TRIRIGA with facility condition information collected through FCAs. As described earlier, DHS Directive 119-02-004 directs CBP to conduct FCAs for each CBP-owned land border crossing every three years and update them annually. By developing and implementing a plan to complete more timely FCAs at CBP-owned land border crossings, CBP will be better positioned to ensure that TRIRIGA is updated to reflect more current condition information. In addition, as CBP continues to conduct FCAs at GSA-owned and leased land border crossings, CBP can continue to update TRIRIGA with more current information on facility condition consistent with OFAM's April 2019 guidance on TRIRIGA data entry.

Incomplete Data: Officials stated that because CBP oversees maintenance and repair work at CBP-owned land border crossings, data in TRIRIGA are more reliable for these land border crossings than for GSA-owned land border crossings, where GSA is responsible for planning and executing maintenance and repair work. CBP officials said that while they do identify infrastructure needs at GSA-owned land border crossings and enter related information into TRIRIGA, the information on these needs can be incomplete. For example, CBP officials stated that a past CBP FCA may have identified a building roof in need of repair. Following the FCA, CBP would then enter a record of this infrastructure need in TRIRIGA. If GSA repaired the roof during the following year as part of its planned maintenance work, but did not inform CBP headquarters, TRIRIGA would continue to identify a deficient roof at the land border crossing after GSA repaired it. CBP officials stated that GSA may conduct maintenance or repair work to address an infrastructure need without CBP's knowledge because CBP and GSA did not have a process for GSA to notify CBP of maintenance and repair work the agency conducts at land border crossings. According to OFAM officials, GSA began sharing with OFAM monthly summary-level data on maintenance GSA performs at land border crossings. However, these data do not include the level of detail required to update condition data or close out deficiencies in TRIRIGA. We previously identified key practices for collaboration among federal agencies, including that agencies can enhance and sustain their collaborative efforts by identifying and addressing needs by leveraging resources.
Sharing information on GSA maintenance and repair work at GSA-owned land border crossings at the level of detail necessary for CBP to update TRIRIGA would enable CBP to improve the completeness and accuracy of data in the system. As a result, CBP would have access to more complete and accurate data to use when planning and prioritizing infrastructure maintenance activities, improving the availability of real-time facility condition information, and responding to data calls and reporting. For example, more complete and accurate data in TRIRIGA would better position CBP to identify and report to Congress on improvements needed at land ports of entry. Specifically, the 2018 United States Ports of Entry Threat and Operational Review Act requires CBP to submit to Congress a threat and operational analysis that includes, among other elements, an assessment of current and potential threats due to security vulnerabilities and unlawful entry, and improvements needed at ports of entry to enhance travel and trade facilitation and reduce wait times. CBP officials stated they have not yet determined which data they will use to develop this report, but this reporting requirement is one potential example of how more reliable data from TRIRIGA could be used to effectively report on the condition of land border crossing infrastructure.

CBP Prioritizes Infrastructure Projects in Its Annual Plans but Has Not Submitted the Plans on Time or Used a Consistent Methodology

CBP Prioritizes Projects in Five-Year Capital Investment Plans but Has Not Consistently Submitted the Plans as Required

CBP prioritizes prospective land border crossing projects within its annual Five-Year Land Port of Entry Capital Investment Plan (five-year plan). CBP is statutorily required to complete a detailed five-year plan each fiscal year and include it with its annual budget submission to Congress (i.e., the President's budget), which typically occurs in February. Each five-year plan is to cover all federal land border port of entry projects with a yearly update of total projected future funding needs delineated by land port. According to CBP officials, CBP generally completes an initial draft of the five-year plan in November or December each fiscal year and submits it to CBP and GSA leadership, DHS leadership, and the Office of Management and Budget for review and approval.

However, our analysis of CBP's five-year plans for fiscal years 2014 through 2018 identified that CBP completed its five-year plan after the annual budget submission in fiscal years 2016 and 2018 and did not complete a plan at all in fiscal year 2017. Specifically, CBP submitted its fiscal year 2016 five-year plan in July 2016—163 days after CBP's annual budget submission—and its fiscal year 2018 plan in October 2018—235 days after CBP's annual budget submission. Table 3 identifies the days between CBP's submission of its five-year plan and budget to Congress in fiscal years 2014 through 2018. CBP officials stated they completed the five-year plans after the annual budget submission in fiscal years 2016 and 2018, and did not complete a five-year plan for Congress in fiscal year 2017, due to delays in the review and approval process. CBP officials stated the review and approval process may take several months to complete due to revisions at various stages and competing priorities among stakeholders that may slow the process.
Officials also said they have little control over how long it takes stakeholders within CBP leadership, DHS, and the Office of Management and Budget to review and approve the five-year plan. Consequently, according to CBP officials, CBP has not attempted to establish time frames for completing the plan. While we acknowledge that setting time frames for completing the plan may not guarantee timeliness, establishing time frames for each stakeholder could help measure and assess progress in reviewing and approving the draft plan. Standards for Internal Control in the Federal Government state that management should define objectives so that they are understood at all levels, including by outlining the time frames for achievement of those objectives. By establishing time frames for stakeholders throughout the five-year plan review and approval process, CBP would be better positioned to identify and address sources of delay and could improve its ability to meet statutory reporting requirements by including its five-year plan with its annual budget submission to Congress.

CBP Has Not Followed a Consistent Methodology for Prioritizing Capital Projects

CBP develops a list of roughly eight to 12 priority land border crossing capital projects each year and presents these projects to Congress in the five-year plan, but the agency has not established a consistent methodology for developing this list. CBP's five-year plans note five broad steps CBP follows in developing the list of priority capital projects. These steps are applicable to the entire land border crossing portfolio, regardless of ownership, and include:

1. Strategic Resource Assessment (SRA): According to the five-year plan, CBP conducts SRAs cyclically to compare infrastructure requirements across its portfolio and present a uniform picture of capital investment needs at all land border crossings along the northern and southern borders.

2. Capital Project Scoring: Using data generated during the SRA, CBP scores and ranks each land border crossing by criticality and relative urgency of infrastructure needs.

3. Sensitivity Analysis: CBP then applies a sensitivity analysis and updates its initial ranking based on factors unaccounted for through the SRA, including unique regional conditions, bilateral planning with partners in Canada and Mexico, or interests of other federal, state, or local agencies.

4. Assess Feasibility and Risk: CBP coordinates with project stakeholders—including GSA for all GSA-owned land border crossings—to evaluate the feasibility, risk, and cost associated with project implementation by completing a feasibility study. These studies analyze alternatives and review environmental, cultural, and historic preservation requirements, as well as land acquisition requirements and procurement risks. CBP also assesses the likelihood of obtaining funding for the proposed project.

5. Establish a Five-Year Capital Investment Plan: After the SRA and the scoring, analysis, and assessment phases, CBP prioritizes land border crossing capital projects and develops a five-year capital investment plan in coordination with GSA. CBP updates the plan annually, taking into account the changing conditions at land border crossings.

Although CBP has outlined the five broad steps it uses to prioritize projects, our analysis of CBP's five-year plans for fiscal years 2014 through 2018 identified that CBP did not follow a consistent methodology across years or across projects when prioritizing prospective land border crossing projects.
For example, in some five-year plans CBP prioritized projects by comparing relative need at land border crossings using more recent SRA data for some land border crossings and older data for others. In one such instance in fiscal year 2018, CBP compared relative need using 2015 data for some land border crossings and data dating as far back as 2007 for other land border crossings. Although CBP's five-year plan states that CBP performs SRAs cyclically, CBP has not established the frequency at which SRAs are to be completed. In 2015, CBP completed a partial SRA update for 36 of 167 land border crossings that it considered high priority, but it has not completed a portfolio-wide SRA since 2007.

Our analysis of CBP's five-year plans for fiscal years 2014 through 2018 also identified that CBP had feasibility studies for some, but not all, projects listed in the five-year plans. Specifically, our analysis identified that CBP had feasibility studies for approximately two-thirds (28 of 41) of the projects it prioritized over these years. CBP officials told us that due to the limited shelf life of feasibility studies (two to three years), CBP and GSA target for feasibility studies those high-priority land border crossing projects that are likely to receive funding within the next two to three years. However, of the top five projects CBP ranked as the highest priority in each of its five-year plans in fiscal years 2014 through 2018, CBP completed feasibility studies for only 12 of 20. Further, among these 12 projects, 10 had a feasibility study that was more than five years old when CBP prioritized them.

In addition, CBP prioritized projects on each of its five-year plans by comparing cost estimates developed through different methodologies. Specifically, CBP prioritized projects using detailed cost estimates developed as part of a feasibility study for some projects and order of magnitude cost estimates for projects that did not have a feasibility study or that had an out-of-date feasibility study. These order of magnitude cost estimates were significantly different from the cost estimates that were later produced for these projects through feasibility studies. For example, CBP's fiscal year 2015 plan included an order of magnitude cost estimate of $95 million to implement a single project at two separate crossings—San Luis I and II. However, after completing a feasibility study for the project in October 2017, GSA estimated it would cost $289 million to complete the project, roughly three times the original estimate.

CBP outlines, at a high level, the five broad steps it is to take to develop a list of priority projects each year and establish an annual five-year plan. However, CBP does not have a detailed planning methodology that would help ensure officials consistently and appropriately develop and assess priority projects each year. For example, the five-year plans do not define the minimum actions CBP personnel are to take at each step in the process, such as guidance and procedures on which projects require feasibility studies. The plans also do not include time frames for completing each step, such as establishing expectations for the frequency at which CBP personnel are to update SRA data.
As a result, CBP officials told us they rely on informal processes and procedures to complete these steps and prioritize land border crossings in CBP's annual five-year plans. CBP officials acknowledged that they have not issued formal guidance documenting the steps in the prioritization process or establishing procedures and time frames for each step, but stated that they plan to do so going forward. Specifically, officials told us that CBP plans to document its process for prioritizing land border crossing projects to improve transparency, better educate staff on roles and responsibilities, and help ensure CBP consistently applies this process each year. While these would be positive steps, CBP was not able to provide information on specific plans or expected time frames for implementing them.

Standards for Internal Control in the Federal Government state that management should define objectives so that they are understood at all levels by outlining what is to be achieved, how it will be achieved, and the time frames for achievement. The standards also establish that management should implement control activities through documented policies. To achieve this, management should document policies that establish each unit's responsibility for achieving the objectives related to an operational process. Establishing and documenting a methodology for CBP's annual land border crossing capital prioritization process, including procedures and time frames for each step, could help ensure that CBP identifies key activities needed to prioritize projects and that CBP personnel follow a consistent methodology across projects and across years. For example, such a methodology could help CBP identify which projects require feasibility studies in a given fiscal year and how to use information on project feasibility, risk, and cost when prioritizing projects. Further, having time frames for each step could help CBP determine how often to update SRA data across its portfolio for purposes of comparing relative infrastructure needs at land border crossings. Lastly, establishing and documenting a land border crossing prioritization methodology could help CBP ensure it consistently provides Congress with more up-to-date and complete information in its five-year plans.

Recent GSA Capital Projects Generally Experienced Schedule Growth, but Met Cost and Scope Goals; CBP and GSA Reported Some Challenges Developing Projects

Most of GSA's 10 Land Border Crossing Projects Experienced Schedule Growth, but Stayed within Cost Contingency Allowances at Full Scope

From fiscal years 2014 through 2018, GSA initiated or completed 10 capital infrastructure projects at eight land border crossings. Among these projects, six were complete and four were ongoing as of March 2019. Projects at three of these border crossings—Alexandria Bay, Calexico West, and San Ysidro—consist of multiple phases. GSA manages each phase as a distinct project funded under separate congressional appropriations and executed through separate contracts. Across all 10 projects, schedule growth against the original schedule baselines ranged from 0 percent to 59.2 percent, though several of these projects revised their baselines to account for the schedule growth. Half of the projects experienced less than 10 percent schedule growth above their original schedule baselines, and the other half experienced more than 10 percent.
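As context for these growth figures, percent schedule growth can be computed by comparing a project's actual (or currently expected) duration with its baseline duration. The following is a minimal sketch, assuming growth is measured from project start to substantial completion; the dates shown are hypothetical, and GSA's exact measurement convention may differ.

    # Minimal sketch: percent schedule growth against a baseline. Measuring
    # growth from project start to substantial completion is an assumption;
    # the report's exact convention is not specified here.
    from datetime import date

    def schedule_growth(start: date, baseline_end: date, actual_end: date) -> float:
        """Percent by which the actual duration exceeded the baseline duration."""
        baseline_days = (baseline_end - start).days
        actual_days = (actual_end - start).days
        return 100 * (actual_days - baseline_days) / baseline_days

    # Hypothetical project: a 600-day baseline that finished 100 days late.
    print(round(schedule_growth(date(2015, 1, 1), date(2016, 8, 23), date(2016, 12, 1)), 1))
    # 16.7 (percent)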
When revised baselines are taken into account, six of the 10 projects have met or are on track to meet their schedule baselines. The Alexandria Bay project, which GSA expects to complete in January 2020, is the only project on track to meet its original schedule baseline. GSA revised its schedule baselines during construction for the remaining five projects, and all have met or are on track to meet these revised baselines. More specifically, Calexico West, Derby Line, and Nogales West-Mariposa are the three projects that are complete and met revised schedule baselines. San Ysidro phases II and III are the two ongoing projects that are on track to meet their revised schedule baselines as of January 2019. See table 4 below for a breakdown of project schedule performance.

Four of GSA's 10 projects did not meet, or are not expected to meet, their schedule baselines. The Tornillo-Guadalupe project experienced the most schedule growth of the projects we reviewed. GSA completed the Tornillo-Guadalupe project in October 2014, 470 days later than its original baseline of July 2013 and 80 days later than its August 2014 revised baseline. Schedule growth at Tornillo-Guadalupe was primarily due to delays in the construction of corresponding Mexican infrastructure, unstable soil conditions, and contractor performance, according to GSA officials. In addition to Tornillo-Guadalupe, the San Ysidro I and Laredo projects did not meet their schedule baselines, and the Columbus project is not on track to meet its schedule baseline, as of January 2019. Of the four projects that experienced schedule growth against their final schedule baselines, two projects had less than 5 percent growth and two projects had about 10 percent growth.

While none of the 10 projects kept costs at or below baselines, eight projects stayed within their 10 percent cost contingency allowance. The Tornillo-Guadalupe and Derby Line projects both exceeded their cost contingency allowances. GSA completed the Tornillo-Guadalupe project in October 2014 at a final construction cost of $59 million—18.7 percent above its cost baseline—due to the challenges described above. GSA completed the Derby Line project in November 2018 with a final construction cost of $26.4 million—10.6 percent above its cost baseline—mainly due to CBP-requested changes, according to GSA officials. The total baseline construction cost for all 10 projects, as of January 2019, is $1.03 billion, and the combined current contract value is $1.09 billion—about $62.9 million (6.1 percent) over baseline budgets. See table 5 below for a breakdown of project cost performance.

GSA has completed, or expects to complete, nine of the 10 projects at full scope. GSA reduced scope for one project—Laredo, TX—due to cost concerns after the construction contract award. During Laredo project construction, GSA dropped plans for a footbridge spanning the passenger vehicle primary lanes and scaled back cosmetic building finishes to avoid further cost overruns, according to GSA and CBP officials. See appendix II for detailed descriptions of the 10 projects.

GSA and CBP Reported Facing Various Challenges Related to Planning, Designing, and Constructing Infrastructure Projects at Land Border Crossings

Project Challenges During Planning and Design

GSA reported facing challenges planning and designing land border crossing capital projects.
These challenges included delays between design and construction and the division of large projects into smaller phases, which GSA officials reported led to higher costs and longer development timelines.

Funding Lags. GSA officials reported that funding lags between project design and construction can increase costs and extend construction timelines. GSA has requested separate appropriations for project design and construction using a model known as design-bid-build, which created the potential for funding lags to occur. According to CBP and GSA officials, the process from requesting an infrastructure project to completing the project lasts approximately 7 years. However, GSA experienced funding lags of up to 10 years between design and construction. Figure 16 identifies development timelines from initial planning through construction for our 10 selected land border crossing capital projects.

The cost of labor and materials can escalate when funding lags occur between design and construction. For example, after completing design for the Calexico West project, GSA requested construction funding in fiscal year 2010 but did not receive funding until five years later. As a result, estimated construction costs escalated from $78.5 million to $90.8 million (16 percent). To keep project cost estimates up to date during funding lags, GSA officials explained that GSA typically increases project cost estimates over time to account for inflation, changes in the labor market, and the cost of materials, among other factors. To help address cost escalation, contractors have purchased materials upfront, and GSA has combined projects that would otherwise be constructed separately. To address increasing materials costs for the Alexandria Bay project, the contractor purchased steel upfront to avoid future cost increases due to import tariffs, according to GSA officials. The Laredo project faced significant labor and material cost growth due to a boom in the Texas construction market. As a result, GSA decided to combine the two Laredo crossings into one contract to lock in prices and avoid paying higher prices in the future.

According to GSA officials, funding lags between design and construction may also result in outdated project designs that do not reflect newer CBP infrastructure requirements. In such instances, GSA must invest additional time and resources to update project designs and incorporate new CBP requirements, such as newer inspection technologies or facilities. According to GSA officials, design refreshes can be challenging due to a lack of continuity and staff turnover at the architecture and engineering firms that originally designed the project. In some instances, according to GSA officials, the original firms may not be available or interested in redesigning the project, and GSA may need to hire a new firm. For example, GSA spent $3.3 million on design for the Columbus project in fiscal years 2007 and 2009. However, the funding lag between design and construction required a $7.4 million design refresh in fiscal year 2014. In another example, GSA established the Calexico West project's design concept in fiscal year 2007 but did not receive construction funding until fiscal year 2015. According to officials, GSA had to spend approximately $1 million on a design refresh to account for new CBP requirements, which resulted in a longer development timeline.
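The Calexico West escalation described above ($78.5 million to $90.8 million over a roughly five-year lag) implies a modest annual rate compounded over the lag. The following is a minimal sketch of that arithmetic, assuming a single uniform annual escalation rate; GSA's actual estimating method also reflects labor market conditions, materials costs, and other factors.

    # Minimal sketch: back out a uniform annual escalation rate implied by a
    # funding lag, then project an estimate forward. A single compounded rate
    # is a simplifying assumption, not GSA's estimating method.

    def implied_annual_rate(old_estimate: float, new_estimate: float, years: int) -> float:
        """Annual escalation rate (percent) implied by compound growth."""
        return 100 * ((new_estimate / old_estimate) ** (1 / years) - 1)

    def escalate(estimate: float, rate_pct: float, years: int) -> float:
        """Project an estimate forward at a compounded annual rate."""
        return estimate * (1 + rate_pct / 100) ** years

    # Calexico West: $78.5 million grew to $90.8 million over about five years.
    rate = implied_annual_rate(78.5, 90.8, 5)
    print(round(rate, 1))                     # about 3.0 (percent per year)
    print(round(escalate(78.5, rate, 5), 1))  # 90.8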
To address the risk of funding lags under the design-bid-build approach, GSA has shifted toward using contract vehicles for land border crossing capital projects that combine design and construction into a single appropriation. This approach allows for more precise planning, less risk from delays, and less time for costs to escalate, according to GSA officials.

Project Phasing. According to GSA officials, OMB may request that GSA and CBP divide large projects into separate phases when high-cost projects are unlikely to be funded in a single appropriation. For example, of the eight border crossing locations represented across the 10 projects in our review, CBP and GSA broke three projects at three locations into phases to obtain approval: Alexandria Bay, Calexico West, and San Ysidro. However, for reasons similar to those related to funding lags between design and construction, breaking up projects into smaller phases can increase overall costs and add years to project timelines. According to GSA and CBP officials, when appropriations do not align with project schedules, contractors may leave the site after completing a single phase to pursue new work opportunities. Additionally, by the time GSA receives appropriations for later phases, the contractor must remobilize equipment and labor, costs of labor and material may have increased, and projects may need design refreshes, as described above. For example, after Calexico West phase II remained unfunded two years after phase I was completed, GSA estimated that project costs increased by $27.7 million due to increases in labor and materials and potential redesign work. In another example, GSA officials told us that after originally designing the Alexandria Bay project as a single phase in 2010, OMB directed GSA to break the project into two phases in 2014 to increase the likelihood of funding. According to GSA officials, redesigning Alexandria Bay as a two-phase project added as much as $16.5 million to total project costs. Construction costs escalated by about $58.4 million (36 percent) from the single-phase estimate in fiscal year 2011 to fiscal year 2017, when phase I construction began. Further, completing the Alexandria Bay project in two phases added three years to the project timeline. While breaking projects into phases can potentially lead to higher costs, GSA officials told us that doing so can be an effective way to start work on a large capital project when funding for the entire project is not available in a single year, and can be cost effective when GSA receives appropriations for each phase in line with its planned schedule.

Project Challenges During Construction

GSA and CBP have reported facing challenges constructing land border crossing projects, including those related to CBP-requested changes, geographical and environmental factors, and inadequate or incomplete infrastructure in neighboring countries.

CBP Change Requests. CBP may request modifications to ongoing projects through Reimbursable Work Authorizations to meet changing infrastructure requirements, such as incorporating newer technologies and CBP design standards. These requests range from installing new information technology and security equipment to enhancing office space, holding facilities, or public-facing areas of the port. CBP change requests are often necessary because the span between design and construction can last up to 10 years, according to CBP and GSA officials.
While CBP typically pays for the cost of these modifications, GSA must incorporate changes into existing project plans, which can result in schedule growth, according to GSA officials. CBP-requested changes led to cost and/or schedule growth at the Calexico West, Columbus, Derby Line, Nogales West-Mariposa, and San Ysidro land border crossing projects, according to GSA officials. In one example, GSA revised the Nogales West-Mariposa project's schedule baseline from March 2014 to August 2014 to incorporate a $10 million Reimbursable Work Authorization from CBP that added an outbound inspection facility.

Environmental and geographical challenges. Environmental and geographical factors, including extreme climates, remote locations, and limited space, can create construction challenges, according to CBP and GSA officials. Extreme climates can disrupt construction activities, such as concrete work, at land border crossings. CBP officials said that at some southern crossings concrete may crack when it dries too quickly due to extreme heat, requiring contractors to pour concrete in the early morning when temperatures are cooler. However, officials said that because this work typically occurs outside of regular business hours, it often comes at a premium and can increase project costs. Along the northern border, contractors may not be able to do concrete work during the winter months because temperatures can be too cold to pour concrete. At Derby Line, because of delays earlier in construction, work extended into an additional winter season, contributing to cost and schedule growth because contractors were slowed or limited by weather, according to GSA officials.

Environmental conditions surrounding construction sites also led to construction challenges and, in turn, cost and schedule growth. The area surrounding the Columbus land border crossing is prone to severe flooding, and major flood events have forced CBP to close the port several times a year, according to GSA officials. Officials also said flooding posed a potential risk of deteriorating port structures. After GSA spent $3.3 million to develop the original design, it spent an additional $7.4 million on a design refresh to incorporate flood protection and update CBP requirements to prepare for construction. In another example, GSA and the contractor discovered unstable soil conditions during the Tornillo-Guadalupe project, which resulted in a two-month delay and a $1.3 million cost increase (about 3 percent of the project budget) to mitigate.

GSA officials told us they may also experience challenges accessing labor, materials, and utilities for projects at remote land border crossings. For example, Alexandria Bay's remote location created logistical challenges for transporting concrete to the site. Because the land border crossing is on an island and only accessible via toll bridge, the contractor determined it was more cost effective to construct a temporary concrete plant onsite. GSA officials also stated the labor market in Alexandria Bay is limited—due in part to its remoteness—and that labor costs were high because the contractor had to temporarily relocate its employees to the area. In another example, officials reported challenges with transporting construction materials to the Tornillo-Guadalupe site due to its remote location, contributing to 2.5 months in schedule growth. Natural features and dense population centers surrounding land border crossings can also create challenges for contractors during construction.
For example, the Alexandria Bay project—which will triple the crossing's footprint when complete—required contractors to blast massive rock formations to create more room for facilities. GSA officials stated the rock removal entailed significant coordination with CBP because CBP had to temporarily halt vehicle processing for safety reasons when GSA's contractor was using dynamite. Officials also told us that snow removal is a challenge at Alexandria Bay because there are limited places to put plowed snow without impeding traffic and interrupting CBP operations.

Corresponding international infrastructure. Inadequate or incomplete infrastructure in neighboring countries can lead to project delays. GSA officials explained that because land border crossings on both sides of the border need to connect, capital infrastructure projects in the United States are largely dependent on the readiness of Mexican or Canadian infrastructure. For example, GSA completed the Tornillo-Guadalupe project in October 2014 but delayed opening cargo processing facilities due to Mexico's delays in completing its new commercial facilities and bridge system required for commercial traffic. As a result, CBP did not begin processing inbound cargo at Tornillo-Guadalupe until March 2016—16 months after it began processing passenger vehicles. Furthermore, after processing 277 trucks in 14 months, CBP suspended commercial inspection operations in May 2017, citing low traffic volumes. CBP officials said that commercial transporters were unwilling to use underdeveloped Mexican infrastructure in the region, leading to low commercial traffic volumes and, in turn, CBP's decision to suspend commercial operations. Similarly, GSA had to delay work for 3 months on the Calexico West project because Mexico was behind schedule on its infrastructure project, according to GSA officials. To address this issue, GSA slowed work in that area and Mexico accelerated its schedule so that GSA and Mexico could complete their sections simultaneously.

Conclusions

CBP is charged with facilitating billions of dollars in trade and travel at the nation's borders, while also preventing terrorists, criminals, and other inadmissible individuals from entering the country. Given that CBP relies on infrastructure at land border crossings to fulfill its mission, maintaining the condition of that infrastructure is critical and can also be challenging, as many land border crossings were built more than 70 years ago. By developing and implementing a plan to ensure CBP executes its FCA program to assess the condition of infrastructure at CBP-owned land border crossings consistent with DHS policy, CBP would be able to maintain more complete and current information on its overall infrastructure needs. Also, given that GSA owns many of the land border crossings out of which CBP operates, sharing and using relevant information with each other—such as their respective facility assessments and repairs at land border crossings—could help both agencies improve the accuracy and completeness of their respective assessments of facility condition. Additionally, while CBP develops five-year plans to prioritize capital projects at land border crossings, establishing time frames for stakeholders who review and approve the plans would better position CBP to identify and address sources of delay and could improve its ability to complete a plan each year and include it in the budget submission to Congress.
Furthermore, by also establishing a methodology for prioritizing its capital projects—including key required procedures and time frames—CBP could better ensure consistency in its approach from year to year.

Recommendations for Executive Action

We are making a total of seven recommendations: five to CBP and two to GSA.

The CBP Commissioner, in conjunction with the DHS Office of the Chief Readiness Support Officer, should develop and implement a plan to ensure that CBP executes its FCA program by conducting FCAs at each CBP-owned land border crossing consistent with DHS Directive 119-02-004. (Recommendation 1)

The CBP Commissioner should share FCA reports with GSA and use facility condition information in GSA's Building Assessment Tool to inform FCAs. (Recommendation 2)

The GSA Administrator should share Building Assessment Tool reports with CBP and use facility condition information in CBP's FCAs to inform its assessments through the Building Assessment Tool. (Recommendation 3)

The GSA Administrator, in conjunction with CBP, should share with CBP information on GSA maintenance and repair work at GSA-owned land border crossings at the level of detail necessary to inform CBP's data in TRIRIGA. (Recommendation 4)

The CBP Commissioner should use information on maintenance and repair work conducted by GSA at GSA-owned land border crossings to update facility condition information in TRIRIGA on an ongoing basis. (Recommendation 5)

The CBP Commissioner should establish review time frames for stakeholders involved in its Five-Year Capital Investment Plan review and approval process. (Recommendation 6)

The CBP Commissioner should establish and document a methodology for its annual land border crossing capital prioritization process that includes procedures and time frames for each step. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a copy of this report to DHS and GSA for review and comment. DHS and GSA provided comments, which are reproduced in full in appendix III and appendix IV, respectively, and discussed below. DHS also provided technical comments, which we incorporated as appropriate. In their comments, DHS and GSA concurred with our seven recommendations and described actions planned to address them.

With respect to our first recommendation that CBP develop and implement a plan to execute FCAs at CBP-owned land border crossings consistent with DHS Directive 119-02-004, DHS stated that CBP intends to develop a plan for completing FCAs at CBP-owned land border crossings consistent with the Directive. With regard to our second recommendation that CBP share FCA reports with GSA and use GSA's Building Assessment Tool to inform CBP FCAs, DHS stated that CBP plans to provide FCA data to GSA. DHS also stated it has already begun receiving Building Assessment Tool reports from GSA and will determine how best to use the information to inform CBP FCAs. With respect to our third recommendation that GSA share Building Assessment Tool reports with CBP and use CBP's FCAs to inform its assessments, GSA stated it is developing a plan to share Building Assessment Tool information and use FCA information to inform its assessments.
With regard to our fourth recommendation that GSA share information on its maintenance and repair work at GSA-owned land border crossings at the level of detail necessary to inform CBP's data in TRIRIGA, GSA stated it will develop a plan to share information on GSA maintenance and repair work at the level of detail necessary to inform CBP's data in TRIRIGA. With respect to our fifth recommendation that CBP use information on maintenance and repair work conducted by GSA at land border crossings and update facility condition information in TRIRIGA on an ongoing basis, DHS stated it has already begun receiving data from GSA on corrective maintenance work at land border crossings and that CBP will develop a plan for updating facility condition information in TRIRIGA using the data. With regard to our sixth recommendation that CBP establish time frames for stakeholders involved in its Five-Year Capital Investment Plan review and approval process, DHS stated that CBP will establish a policy that outlines time frames for stakeholders involved in the review and approval process. DHS also concurred with our seventh recommendation that CBP establish and document a methodology for its annual land border crossing capital prioritization process that includes procedures and time frames for each step. Specifically, DHS stated that CBP will document the process and procedures, and provide time frames, for each step in the process.

We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Homeland Security, and the Administrator of the General Services Administration. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-8777 or gamblerr@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of our report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: U.S. Land Border Crossings along the Northern and Southern Borders

U.S. Customs and Border Protection (CBP) operates at 167 land border crossings along the northern and southern borders. Of the 167 land border crossings, CBP owns 40. The General Services Administration (GSA) fully owns 101, partially owns three, and leases 19. The National Park Service owns two and the U.S. Forest Service owns one. One land border crossing is privately owned. Further, CBP and GSA have assessed the condition of 95 of the 167 land border crossings along the northern and southern borders, calculated a facility condition index for each (0-10 percent good, 10-20 percent fair, 20-30 percent poor, and 30-100 percent critical), and identified the total cost of infrastructure deficiencies at each crossing. Table 6 identifies land border crossings by name, state, ownership, year constructed, year last renovated, facility condition index score, and the cost of known infrastructure deficiencies, according to CBP data, and is for informational purposes only.

Appendix II: Land Border Crossing Project Profiles

Overview of Recent GSA Land Border Crossing Capital Projects

To provide an overview of recent land border crossing capital infrastructure projects, we developed a profile for each project that was active during fiscal years 2014 through 2018. These profiles contain background information on each crossing, along with basic travel, trade, and law enforcement data.
Each profile also contains information on how infrastructure constraints affected U.S. Customs and Border Protection (CBP) operations, and how CBP and the General Services Administration (GSA) addressed those constraints through the capital project. Finally, the profiles include an assessment of project cost and schedule performance. We compiled the information in the following project profiles from a variety of federal sources.

We provide background information on each land border crossing in the "At A Glance" section of each profile. Some land ports of entry contain multiple land border crossings. While each project, and its associated project performance data, refers to a single crossing unless otherwise noted, all throughput and trade data in this section are provided at the port level. Law enforcement data are provided at the port level, with the exception of arrests, which are provided at the crossing level. Daily CBP officers assigned to the port refers to the daily average for fiscal year 2017. We obtained condition, staffing, and law enforcement data from CBP's Office of Facilities and Asset Management. Condition information includes the year GSA built each individual crossing and when GSA last modernized it through a major capital project. The number of arrests refers to arrests at land border crossings made by CBP Office of Field Operations officers and does not include Border Patrol apprehensions.

We analyzed data on imports, exports, and trade values from the Department of Transportation's Bureau of Transportation Statistics (BTS) TransBorder Freight Data. These data are collected by CBP, processed and validated by the U.S. Census Bureau, and analyzed by BTS. Value of trade includes the combined totals of imports and exports for 2017. We also analyzed BTS's Crossing/Entry Data to determine throughput for pedestrians, passenger vehicles, and cargo trucks.

We analyzed project cost and schedule performance data from GSA's Electronic Project Management system. These data included project cost and schedule baselines, and updated cost and schedule performance data as of January 2019. For multi-phase projects with only one phase included in our scope, phase costs may not sum to total project costs because certain project costs, such as site acquisition, cannot be attributed to an individual phase. Under schedule performance, original completion date refers to the project's baseline substantial completion date. Revised completion date, if applicable, refers to a project's updated substantial completion date as revised by GSA to address project setbacks or delays. For ongoing projects, expected completion date is the date when GSA officials expect to complete the project. For completed projects, the actual completion date is the date the project reached substantial completion.

We obtained information on crossing infrastructure constraints and project plans through interviews with GSA and CBP officials and from project documents. These officials included GSA headquarters and project management officials, as well as CBP Office of Field Operations field office officials and local CBP officers. "Infrastructure Impacts on CBP Operations" refers to infrastructure constraints that existed prior to GSA's recent capital project, while "Infrastructure Improvement Plans" describes each project's scope and performance.
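As context for the cost figures in the profiles that follow, percent cost growth compares a project's current contract value with its baseline, and the report notes whether growth stayed within a 10 percent contingency allowance. The following is a minimal sketch of that comparison with hypothetical values; the computation is an illustrative assumption about how such figures can be derived, not GSA's documented method.

    # Minimal sketch: percent above cost baseline and a contingency check.
    # The 10 percent allowance mirrors the contingency discussed in this
    # report; the computation itself is an illustrative assumption.

    def cost_growth_pct(baseline: float, current: float) -> float:
        """Percent by which the current contract value exceeds the baseline."""
        return 100 * (current - baseline) / baseline

    def within_contingency(baseline: float, current: float, allowance_pct: float = 10.0) -> bool:
        """True if cost growth stayed within the contingency allowance."""
        return cost_growth_pct(baseline, current) <= allowance_pct

    # Hypothetical project: $50.0 million baseline, $53.1 million current value.
    print(round(cost_growth_pct(50.0, 53.1), 1))  # 6.2 (percent over baseline)
    print(within_contingency(50.0, 53.1))         # True: within the allowance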
To assess the reliability of project performance data from GSA's Electronic Project Management system, we examined the data for obvious errors and discussed the data with GSA project management officials. We determined the data to be sufficiently reliable for our purposes. To assess the reliability of trade data, we reviewed documentation and conducted interviews with officials from the U.S. Census Bureau, the original source of the validated data. Specifically, we analyzed procedures used by the agencies responsible for collecting the statistics, and reliability assessments by those agencies and outside sources. After reviewing data dictionaries and BTS's quality control measures for analyzing the Census data, and conducting data quality checks, we determined that the trade data, originally collected by Census and released by BTS, are sufficiently reliable for providing contextual information about the value of trade. To assess the reliability of BTS crossing/entry data, we reviewed relevant documentation and procedures for analyzing the data, and met with BTS officials to discuss potential limitations. We determined the data to be sufficiently reliable for the purposes of reporting entry data for pedestrians, passenger vehicles, and trucks. Finally, we found that the dates land border crossings were built and last modernized, as provided by CBP's Office of Facilities and Asset Management, may be inconsistently recorded; we provided accurate information in the project profiles.

Built in 1974, Alexandria Bay is the seventh busiest commercial border crossing between the United States and Canada, as of 2017. In 2017, U.S. Customs and Border Protection (CBP) processed about 4,100 passengers, 1,600 passenger vehicles, 4 buses, and 600 trucks per day at Alexandria Bay. The majority of people crossing into the United States through Alexandria Bay in passenger vehicles are tourists traveling from the Ottawa, Kingston, Toronto, and Montreal regions, according to General Services Administration (GSA) project documentation.

In 2017, GSA began phase I of a capital infrastructure project at Alexandria Bay. Prior to the project, the existing crossing lacked capacity to process growing traffic volumes, which led to significant backups. These delays effectively brought cross-border traffic to a standstill, with traffic backups sometimes stretching three miles into Canada. The preprimary area did not provide adequate space for commercial traffic because the bridges connecting the United States and Canada were not designed to support prolonged periods of heavy traffic caused by backups. The commercial inspection facility provided enough space to unload only a single commercial truck at a time, and CBP's commercial office space was housed in mobile trailers. GSA projected that increases in traffic volume and updated CBP security procedures would necessitate an increase in the federal workforce beyond the existing crossing's capacity.

Phase I of this two-phase project will feature a new commercial building and warehouse, new commercial inspection lanes, and a new veterinary services building, among other enhancements. The completed two-phase project will more than double building space and triple the crossing's footprint. Phase I will include five commercial inspection lanes—some of which will be equipped to process both commercial and passenger vehicles.
After phase II, the crossing will feature five more passenger vehicle lanes and five more commercial lanes than the existing facility. An improved traffic pattern throughout the crossing will increase queuing space and allow safe and secure processing of traffic entering from Canada. Total funding for the entire project is $238 million, including $105 million for phase I, and construction began in August 2017. Phase I remains largely on budget and on schedule for completion in January 2020. GSA expects to begin phase II in January 2020 and complete the project in July 2022.

Calexico West, located in downtown Calexico, California, processes pedestrians and passenger vehicles. Inbound commercial and bus traffic are processed at the nearby Calexico East land border crossing, which opened in 1997, after Calexico West ceased commercial operations. Calexico West is the main crossing linking the California Imperial Valley agricultural industry to the Mexican state of Baja California and, according to U.S. Customs and Border Protection (CBP) officials, processes large volumes of farm workers during harvest season.

CBP and General Services Administration (GSA) officials reported that the crossing's facilities were undersized relative to current traffic volumes and obsolete in terms of inspection officer safety and border security. According to GSA, the crossing's layout was also inefficient, resulting in bottlenecks and long lines for passenger vehicles and pedestrians. Passenger vehicle wait times regularly exceeded 1.5 hours during peak travel times, with outbound traffic often extending 1.5 miles into the United States. Facilities in the main building, including agricultural inspection laboratories, storerooms, holding cells, waiting areas, and officer work areas, were inadequate and undersized. CBP faced challenges finding space to install newer inspection equipment and technologies in the existing facilities, according to CBP officials. Finally, the passenger vehicle secondary inspection area was open to public view, enabling individuals to observe CBP inspections.

CBP and GSA officials reported that phase I of this two-phase project reconfigured and expanded the existing crossing to reduce congestion and created five times more building space. Phase I delivered a new main building, 10 of 16 planned inbound vehicle inspection lanes, and five outbound vehicle inspection lanes. It also included a secondary vehicle inspection facility with canine kennels. The new preprimary inspection area is significantly larger, allowing CBP to actively manage traffic and reduce congestion. Further, the larger preprimary inspection area allows CBP officers to safely and effectively patrol this area with canine units, improving the effectiveness of CBP's inspections. GSA completed the $94.6 million phase I construction in September 2018, about 6.4 percent above its cost baseline and six months later than planned. Delays associated with a corresponding infrastructure project in Mexico and CBP-requested modifications contributed to schedule growth. Phase II received partial funding in February 2019—two years after it was scheduled to begin.

Built in 1989, Columbus processes commercial traffic, passenger vehicles, and pedestrians. It is the only 24-hour pedestrian border crossing in New Mexico. Commercial traffic has steadily increased, from about 5,700 trucks in 2007 to over 14,100 trucks in 2017.
Historically, according to a GSA planning study, commercial traffic spiked in August and September during harvest season because produce is one of Columbus's primary imports. Pedestrian traffic is higher during the harvest months, when farm workers cross, and in the winter, when seasonal visitors cross the border.

In 2017, the General Services Administration (GSA) began a capital infrastructure project at Columbus. U.S. Customs and Border Protection (CBP) and GSA officials reported that prior to this project, CBP operated from deteriorating facilities that were reaching the end of their useful lives. The volume of commercial trucks and travelers has increased significantly since the crossing opened and is expected to continue to grow. Over the years, GSA added facilities that, in turn, impeded traffic flow, caused backups, and threatened officer safety. Prior to the project, CBP could inspect two trucks at a time at the cargo loading dock. CBP also lacked the space to completely offload cargo, limiting inspection effectiveness. The site experienced significant flooding during major rain events that limited inspection space and further deteriorated infrastructure, according to officials.

CBP and GSA officials reported that the project involves a complete demolition of existing facilities and more than triples the crossing's footprint with donated land. New facilities include a separate commercial processing facility and an expanded main building with new Non-Intrusive Inspection technologies, a hazardous material inspection area, canine kennel, narcotics vault, and site drainage improvements to address flooding. Processing capacity will expand from one pedestrian lane to four, from two passenger vehicle lanes to three, and from zero commercial lanes to one, and usable commercial dock spaces will increase from two to 12. GSA spent $3.3 million on design from 2007 to 2009. It spent another $7.4 million in 2014 on a redesign that incorporated flood protection and new CBP standards. GSA expects to complete the $87 million project in April 2019, about 3 percent above its cost baseline and two months later than planned due to CBP-requested changes.

Built in 1965, Derby Line I-91 is the busiest land border crossing in Vermont. The crossing processes passenger vehicles, buses, cargo, and pedestrians. There are two border crossings in Derby Line: at I-91 and about a half mile west on Route 5. The I-91 crossing is a large facility located on a major highway, whereas the Route 5 crossing is relatively small, located on the village's Main Street. U.S. Customs and Border Protection (CBP) processed about 3,000 passengers per day in 2017, along with about 1,500 passenger vehicles and 300 trucks.

In 2016, the General Services Administration (GSA) began a capital infrastructure project at the Derby Line I-91 crossing. CBP and GSA officials reported that CBP substantially increased staffing at the crossing over the years, resulting in overcrowded conditions. The administrative building lacked sufficient office and storage space, had limited secure areas to perform interviews and searches, and lacked a secure holding area. Due to insufficient space and outdated IT systems, the crossing could not accommodate newer inspection technologies. The commercial secondary inspection area was too small to completely offload cargo trucks for inspection, and the vehicle lift was inoperative. The facility also lacked sufficient space to inspect buses and luggage.
The crossing had poor lighting and inadequate perimeter security, and lacked measures to prevent travelers from exiting the crossing without authorization. Finally, poorly designed inbound primary inspection lanes made it difficult for commercial trucks to navigate through the crossing, at times resulting in long traffic delays, according to officials.

CBP and GSA officials reported that the capital project will reduce cross-border travel times and improve the traveler experience. The project expanded the crossing's footprint from 0.25 to 23 acres and improved traffic flow around the crossing, while adding measures to prevent travelers from exiting the crossing without authorization. Site improvements included new lighting, fire protection, and storm water management systems, among others. The project included a main building and a commercial secondary inspection facility for CBP to offload and inspect trucks. GSA completed construction in November 2018, about 5 months later than originally planned and 11 percent above its cost baseline. Cost and schedule growth were primarily due to CBP-requested changes and contractor performance.

The Laredo Land Port of Entry is made up of four land border crossings, each with its own bridge. In January 2019, the General Services Administration (GSA) completed a capital project at two of these crossings—the Convent Street Bridge (Laredo 1) and the Lincoln-Juarez Bridge (Laredo 2). Laredo 1 and 2 are located in downtown Laredo and process passenger vehicle and pedestrian traffic. The other two crossings—the Colombia Solidarity Bridge (Laredo 3) and the World Trade Bridge (Laredo 4)—primarily process cargo. The city of Laredo owns and maintains these bridges, while GSA owns and maintains the crossings and all property inside the crossing facilities.

U.S. Customs and Border Protection (CBP) and GSA officials reported that traffic volumes at Laredo 1 and 2 have increased significantly in recent decades. Prior to the capital project, facilities at Laredo 1 did not effectively separate vehicles, bicycles, and pedestrians within the crossing, creating congestion, safety concerns, and pedestrian queues that could extend across the bridge into Mexico. GSA is unable to make extensive alterations to or expand Laredo 1 because it is a U.S. Historic Site and is surrounded by businesses and homes. Laredo 2 was unable to efficiently process current traffic volumes. For example, GSA originally designed Laredo 2 to process up to 10 buses per day. However, in 2017, Laredo 2 processed approximately 110 buses and 2,000 bus passengers each day. To accommodate these volumes, CBP converted Laredo 2's passenger vehicle secondary facility to inspect buses and moved secondary vehicle inspections to a temporary facility.

CBP and GSA officials reported that the capital project focused on improving efficiency, safety, and security while expanding pedestrian capacity at Laredo 1 and bus capacity at Laredo 2. GSA combined improvements at the two crossings into one estimated $96.6 million project ($33 million for Laredo 1 and $63.6 million for Laredo 2) to save on labor and material costs. At Laredo 1, GSA replaced the main building, expanded pedestrian lanes from eight to 14, and reconfigured vehicle lanes to integrate newer inspection technologies. At Laredo 2, GSA enlarged the main building, built a facility to process passenger vehicle and bus passengers, and expanded bus processing capacity from two to eight lanes.
GSA removed a footbridge from the project's scope and scaled back aesthetic finishes to control costs. GSA completed Laredo 1 in April 2018 and Laredo 2 in January 2019, about 3 months later than originally planned and 6 percent above its cost baseline.

Nogales West-Mariposa is one of three land border crossings in Nogales, Arizona, and is one of the busiest land border crossings in the United States. It serves as the southern border's main entry and distribution point for produce entering from Mexico. Nogales West processes about half of the agricultural commodities entering the United States from Mexico and has facilities for pedestrian, passenger vehicle, and commercial traffic. The other crossings in Nogales are the DeConcini (pedestrians and passenger vehicles) and Morley Gate (pedestrians) crossings. In 2010, the General Services Administration (GSA) initiated a $180 million capital infrastructure project. U.S. Customs and Border Protection (CBP) and GSA officials reported that facilities and technologies at the original Nogales West-Mariposa land border crossing were outdated. The crossing's layout was also inefficient, resulting in bottlenecks, congestion, and commercial traffic backups that extended for miles into Mexico. GSA subsequently added new facilities to accommodate bus and pedestrian inspections, but did so in a way that further constrained space, impairing traffic movement within the crossing, according to officials. Wait times of up to 8 hours led to spoilage or reduced shelf life of perishable goods, resulting in financial losses for businesses. The original crossing also lacked adequate space, and CBP repurposed some facilities to accommodate operational needs, including storing evidence in holding areas. CBP and GSA officials reported that the capital project focused on improving operational efficiencies, processing capacity, and the security and safety of officers and the traveling public. The project entailed demolishing all existing structures and replacing them with new facilities, including new inspection areas, a main building, and other support facilities. GSA added 13 acres to the crossing's footprint and expanded processing capacity from three to eight cargo primary lanes, one to five commercial exit lanes, 23 to 56 cargo docks (including six for refrigerated inspection), four to 12 passenger vehicle primary lanes, and eight to 24 passenger vehicle secondary inspection spaces. GSA completed the $180 million project in August 2014, more than 5 months later than originally planned and 5.5 percent above its cost baseline. This was due to CBP-requested changes, design deficiencies, and high site utility costs, among other reasons, according to officials. The project resulted in reduced wait times but led to higher than expected operational and maintenance expenses.

Built in 1932, San Ysidro is the busiest land border crossing in the Western Hemisphere, with 24/7 operations. San Ysidro processes pedestrians, passenger vehicles, and buses. The crossing does not have any commercial facilities for screening cargo. In 2017, U.S. Customs and Border Protection (CBP) processed about 65,000 northbound vehicle passengers and 23,000 northbound pedestrians each day at San Ysidro. The General Services Administration (GSA) began construction on a three-phase, $741 million project in 2011, with plans to complete all three phases by late 2019.
CBP and GSA officials reported that queues and wait times at San Ysidro steadily increased over the years and that existing facilities could no longer accommodate the traffic volume. CBP also reported that outdated infrastructure in the pedestrian primary inspection area created officer safety concerns and that renovations were necessary to provide a safe and secure work environment for CBP staff. For example, CBP officials stated that the design and location of the existing pedestrian primary inspection booths obstructed officers' view of pedestrians as they entered the primary inspection area. CBP and GSA officials reported that to better accommodate traffic growth and CBP's requirements, GSA's capital project is expanding and reconfiguring the crossing. The project entails demolishing existing structures and constructing new primary and secondary passenger vehicle inspection areas, a new main building, and other support structures. The project also includes two pedestrian processing areas—on the east and west sides of the crossing—that connect with transportation centers in Mexico and the United States. Once complete, the crossing will have 34 passenger vehicle lanes with 62 booths, including stacked booths that allow CBP officers to simultaneously inspect two vehicles in most lanes. The crossing will also add a dedicated bus lane and a total of 36 pedestrian primary inspection lanes across its two pedestrian facilities. GSA is building the $741 million project in three stand-alone phases, with expected completion in November 2019.

Tornillo-Guadalupe (also known as the Marcelino Serna land border crossing) opened in 2015. Tornillo-Guadalupe replaced the Fabens land border crossing, which dated back to 1938. U.S. Customs and Border Protection (CBP) currently processes passenger vehicles and pedestrians at Tornillo-Guadalupe. Although Tornillo-Guadalupe has commercial processing facilities, CBP ceased using these facilities in 2017 due to low volumes of commercial traffic. CBP and General Services Administration (GSA) officials reported that the original Fabens land border crossing was unable to process high traffic volumes and that the existing bridge connecting the United States and Mexico was no longer structurally sound enough to support commercial crossings. CBP ceased all commercial operations at Fabens in 2001, limiting CBP to processing pedestrian and passenger vehicle traffic. The number of CBP personnel at the crossing exceeded facility capacity, and the limited space hindered CBP's ability to process, interview, isolate, and detain travelers, according to CBP officials. Further, the existing septic system was not designed for the number of employees at the facility, and the original water system was insufficient. CBP had to haul water on site to operate its facilities and provide bottled water for its employees and the public to drink, according to officials. CBP and GSA officials reported that the recent capital project delivered new passenger vehicle and pedestrian inspection facilities along with a new main building. The project also included a dedicated bus inspection area and a parking lot for seized vehicles. Commercial facilities included a new bridge and commercial building, 10 covered secondary inspection docks, two primary inspection lanes with a canopy, a hazardous materials containment area, an agriculture lab, and seized narcotics storage. The project also added 109 acres of donated farmland to the original crossing's 6-acre footprint.
GSA completed the $73.5 million construction project in October 2014, about 15 months later than planned and 19 percent above its cost baseline. Unstable soil conditions and contractor performance issues contributed to cost and schedule growth, according to GSA. Delays associated with infrastructure in Mexico postponed the start of cargo processing by 16 months. Despite investing in new commercial processing facilities at the crossing, CBP suspended cargo processing in May 2017 after 14 months, citing low traffic volumes due to underdeveloped infrastructure in Mexico.

Appendix III: Comments from the Department of Homeland Security

Appendix IV: Comments from the General Services Administration

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Michael Armes (Assistant Director), Kirk Kiester (Assistant Director), Bruce Crise (Analyst in Charge), Lilia Chaidez, Michele Fejfar, Eric Hauswirth, Susan Hsu, Daniel Kuhn, Jeremy Manion, Mara McMillen, Marc Meyer, and Sasan J. "Jon" Najmi made significant contributions to this report.
Why GAO Did This Study

CBP and GSA own, lease, or manage all of the nation's 167 land border crossings. CBP facilitates trade and travel at these crossings and has identified significant capital investment needs at these facilities. GAO was asked to review land border crossing infrastructure. This report examines (1) the infrastructure constraints CBP faces and the extent to which CBP and GSA have information on infrastructure conditions, (2) the extent to which CBP prioritizes capital projects, and (3) the extent to which recent GSA capital projects met cost, schedule, and scope goals, as well as the challenges CBP and GSA reported. GAO analyzed land border crossing data and documentation, including CBP and GSA facility assessments, CBP capital investment plans for fiscal years 2014 through 2018, and data for GSA capital infrastructure projects active during those years. GAO also interviewed officials from the CBP field offices that oversee all crossings about infrastructure constraints and visited 16 crossings, selected based on high traffic volumes and CBP's prioritization of crossings for infrastructure improvement.

What GAO Found

The Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP) reported infrastructure constraints at land border crossings, including limited inspection capacity, technology challenges, and security limitations. However, CBP does not have complete information on infrastructure conditions at all land border crossings. Specifically, CBP assessed facility conditions at four of the 40 land border crossings it owns from 2016 through 2018. Further, CBP has not developed a plan to ensure it conducts such assessments consistent with DHS policy, which calls for them every 3 years. Developing and implementing a plan to ensure CBP executes its facility condition assessment program would enable CBP to collect more complete and current infrastructure information. In addition, while CBP and the General Services Administration (GSA) both assess facility conditions at 101 GSA-owned land border crossings, they do not consistently share or use each other's information. Doing so could enable CBP and GSA to improve the accuracy and completeness of their respective assessments.

CBP prioritizes land border crossing capital projects in a five-year plan, which by statute is to be submitted with DHS's annual budget request to Congress. In fiscal years 2014 through 2018, CBP submitted two plans on time, submitted two plans more than 100 days after submission of the budget request, and did not submit a plan in one year due to delays in the plan's review and approval process. By establishing time frames for the review process, CBP would be better positioned to identify and address sources of delay in the review process and improve its ability to meet statutory reporting requirements by including its five-year plan with its annual budget submission to Congress.

The 10 completed or ongoing GSA land border crossing capital projects in fiscal years 2014 through 2018 generally experienced schedule growth, ranging from 0 to 59 percent, but stayed within a 10 percent cost contingency allowance. Circumstances contributing to increased project costs or schedule growth included funding lags between project design and construction and CBP-requested changes during construction to meet evolving mission needs, according to GSA and CBP officials.
What GAO Recommends

GAO is making seven recommendations, including that CBP develop a plan to ensure it conducts facility condition assessments consistent with DHS policy; that CBP and GSA share and use each other's information on facility conditions at land border crossings; and that CBP establish review time frames for its capital investment plan.
Background

The Corps is one of the world's largest public engineering, design, and construction management agencies. Located within the Department of Defense, the Corps has both military and civilian responsibilities. Through the civilian Civil Works Program, the Corps plans, constructs, operates, and maintains a wide range of water resources development projects such as navigation and flood risk projects. The Assistant Secretary of the Army for Civil Works, appointed by the President, sets the strategic direction for the program and has principal responsibility for the overall supervision of functions relating to the Army's Civil Works Program. The Chief of Engineers, a military officer, is responsible for execution of the civil works and military missions. At the Corps level, the Civil Works Program is organized into three tiers: headquarters in Washington, D.C.; eight regional divisions; and 38 local district offices (see fig. 1). Corps headquarters primarily develops policies and guidance to implement the agency's responsibilities and plans the direction of the organization. The divisions, which were established generally according to watershed boundaries, primarily coordinate the districts' civil works and military projects and are commanded by military officers. The districts, also commanded by military officers, are to, among other things, plan and implement feasibility studies and the resulting water resources development projects that are approved by the divisions and headquarters.

Major Steps in Corps Water Resources Development Projects

There are several steps in conducting a Corps water resources development project. When a local community perceives a need or experiences a water resources problem that is beyond its ability to solve, it typically contacts the Corps for assistance. These communities and Congress, as well as other entities, play key roles in the process. Figure 2 illustrates the major steps in conducting a Corps water resources development project.

Corps Feasibility Studies

As identified above, one of the major steps in initiating a water resources development project is conducting a feasibility study. Feasibility studies further investigate a water resources problem and make recommendations on whether a project is in the federal interest, and if so, how the problem should be addressed. Generally, the cost of a feasibility study is shared between the Corps and a nonfederal sponsor, such as a local port authority or a state agency. In 2012, the Corps began using a new approach to conducting feasibility studies, referred to as SMART Planning. As part of this approach, Corps officials are to use and document a risk-informed approach to decision-making. Specifically, Corps officials are to consider risks at each point in the feasibility study process and balance the probability and consequences associated with those risks with the time and costs needed to avoid or mitigate risks through, for example, collecting additional data or conducting additional analysis. By doing so, they are to conduct only the additional analysis needed to make a decision at that point in the process. At each step, Corps officials are to use an approach that balances the level of detail, data collection, research, and associated risks with what is necessary to deliver the feasibility study, and they are to justify any additional work as the best course forward.
The Corps' feasibility study process consists of four phases (scoping, alternative evaluation and analysis, feasibility-level analysis, and Chief's report) and a number of key milestones, such as identifying project alternatives for further review (see fig. 3). The complete feasibility study process is to take place within the statutory target time frame of less than 3 years (36 months). The Corps uses SMART Planning to help feasibility studies meet the agency's 3x3x3 rule, under which feasibility studies generally are to be completed in less than 3 years, cost no more than $3 million, and involve three levels of Corps review. Corps policy allows the Corps to spend more money and take more time on an unusually complex feasibility study if the district leading the study requests and receives an exemption from headquarters or the Assistant Secretary of the Army for Civil Works. However, Corps policy indicates that such exemptions are not routine and are to be granted only after careful consideration and review by division and headquarters officials. In addition, WRRDA 2014, as amended, provides that the Secretary of the Army may make an exception by extending the timeline of a study if the Secretary determines that the study is too complex to comply with the 3x3x3 rule. The Secretary is not to extend the timeline for a feasibility study for a period of more than 10 years, and any feasibility study that is not completed before that date shall no longer be authorized. The act also requires the Secretary to provide written notice to the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure each time the Corps grants such an exception.

The feasibility study process includes work the Corps undertakes to satisfy requirements under the National Environmental Policy Act (NEPA) and other environmental statutes. Under NEPA, federal agencies are to evaluate the potential effects of proposed projects on the environment. When the Corps determines that a water resources development project could have significant environmental effects, it must prepare an environmental impact statement (EIS). The Corps issues a draft EIS as part of the overall draft feasibility report for public and stakeholder review and issues a final EIS when it issues its final feasibility report. Feasibility studies that require an EIS typically represent larger and more complex studies than those that do not require an EIS. According to a 2013 Congressional Research Service report, Corps feasibility studies that are larger and more complex tend to require additional funding and time when compared to less complex, smaller studies. While the Corps does not publish information on the length of time it takes to complete feasibility studies, our analysis of publicly available data showed that the median time it took the Corps to complete a feasibility study with an EIS was more than 7 years for those studies completed from 2008 through 2018.

Statutory Provisions for Accelerating Feasibility Studies

WRRDA 2014 contains provisions related to, among other things, accelerating the completion of feasibility studies for which an EIS is prepared. These provisions broadly fall into different general categories, which we grouped as follows:

Coordination and administration. These provisions are generally process oriented. Among other things, they relate to facilitating the process of coordinating and administering feasibility studies by, for example, encouraging the Corps and other agencies to coordinate early in the feasibility study process and resolve issues expeditiously.

Environmental review.
These provisions relate to implementing NEPA and other environmental statutes when conducting feasibility studies. For example, the Corps is to establish a program to measure and report on progress made to improve and expedite the planning and environmental review process.

Public transparency. These provisions generally require the Corps to, among other things, make information publicly available on how it is implementing the acceleration provisions.

The Corps Has Taken Steps to Address Some Feasibility Study Acceleration Provisions but Not Others

The Corps has taken steps to address broad WRRDA 2014 provisions related to facilitating the process of coordinating and administering feasibility studies. For example:

Issuance of a joint coordination guide. In September 2015, as a result of the act and previous ongoing coordination efforts, the Corps, the National Marine Fisheries Service (NMFS), and the Fish and Wildlife Service (FWS) worked together to jointly issue a coordination guide for conducting feasibility studies. The guide discusses the feasibility study process in depth and emphasizes the importance of substantive, early engagement among the three agencies to successfully deliver projects and avoid delays later in the process that may result from lingering disagreements among the agencies.

Issuance of Corps guidance on WRRDA 2014 acceleration provisions. In March 2018, the Corps issued guidance on how officials should implement the WRRDA 2014 acceleration provisions when conducting feasibility studies. This includes guidance on implementing administrative changes such as deadlines for gathering agency or public comments. It also includes guidance on coordination within the agency as well as with other agencies and stakeholders, such as nonfederal sponsors. For example, WRRDA 2014 provides that the Corps is to make certain information available to other agencies as early as practicable in the environmental review process. The Corps' March 2018 guidance indicates that Corps officials are to provide information on the (1) environmental and socioeconomic resources located within the physical area associated with a feasibility study, and (2) general locations of the different alternatives under consideration. While the guidance was not issued until almost 4 years after the enactment of WRRDA 2014, several Corps headquarters and district officials said the Corps disseminated information on how to implement the acceleration provisions to the districts in various ways, such as through webinars and working with teams that had initiated feasibility studies subject to the act's acceleration provisions.

Many Corps headquarters, division, and district officials said that many of the act's coordination and administration provisions are similar to long-standing practices they followed, based on requirements in other laws such as NEPA. For example, according to many Corps headquarters, division, and district officials, the WRRDA 2014 provision to develop a coordinated environmental review process is generally consistent with NEPA and its implementing regulations. According to a Corps headquarters official, the WRRDA 2014 coordination provisions add specificity to the Corps' existing practices by detailing which agencies to involve in coordination efforts and when to involve them.

The Corps also has taken steps to address one of the WRRDA 2014 provisions related to public transparency.
Specifically, the Corps is to annually prepare, and make publicly available, a list of feasibility studies subject to the acceleration provisions that do not have adequate funding to make substantial progress toward completion of the study. Corps headquarters and district officials said that in the past the Corps funded several hundred active feasibility studies at any given time. While this allowed for many feasibility studies to remain active and make some progress, it also made less funding available for individual feasibility studies and slowed the progress of some studies, according to several Corps officials. According to a February 2012 Corps policy memo, agency leadership initiated a process to review all active feasibility studies to determine which were the most viable for congressional funding. The Corps re-scoped or deactivated the remainder of the feasibility studies. Many Corps district and headquarters officials told us this allowed for increased funding for and progress to be made on the feasibility studies that remained active. As a result of the Corps' efforts, headquarters officials said the number of active Corps feasibility studies decreased from 653 in 2012 to 89 at the end of 2018. In addition, they said that because active feasibility studies now have greater levels of funding, the agency has not had to report any active feasibility studies that do not have adequate funding.

However, as of May 2019, the Corps has not addressed other WRRDA 2014 provisions related to public transparency and environmental review. These include the following:

Status and progress database. By June 2015, the Corps was to establish and maintain an electronic database and, in coordination with other federal and state agencies, issue reporting requirements to make publicly available the status and progress regarding compliance with applicable requirements of NEPA and other required approval or action.

Performance measurement. The Corps is to establish a program to measure and report on progress made toward improving and expediting the planning and environmental review process.

Environmental review guidance. The Corps is to (1) prepare, in consultation with the Council on Environmental Quality and other federal agencies with jurisdiction over actions or resources that may be impacted by a project, guidance documents that describe the coordinated environmental review processes the Corps intends to use to implement reforms for planning projects, and (2) issue guidance on the use of programmatic approaches for the environmental review process that carries out specified actions and meets specified requirements.

In other instances, the Corps has taken some initial steps but has not fully addressed certain WRRDA 2014 provisions. Specifically, not later than 180 days after the act's enactment, the Corps was to survey the agency's use of categorical exclusions in projects since 2005, publish a review of that survey, and solicit requests from other federal agencies and project sponsors for new categorical exclusions. By June 2015, the Corps was to propose a new categorical exclusion if it identified a category of activities that merited such action. As of May 2019, the Corps had conducted an internal survey and solicited input through the Federal Register on its procedures for implementing NEPA.
However, Corps headquarters officials said they had not published a review of the survey, made targeted requests to other federal agencies and nonfederal sponsors for new categorical exclusions, or proposed new exclusions as merited. Appendix II contains a more detailed summary of the WRRDA 2014 acceleration provisions, along with information on Corps actions to address each provision.

Corps headquarters officials identified resource constraints as the primary reason for not addressing some public transparency and environmental review provisions. For example, to develop environmental review guidance, Corps headquarters officials told us that they would need to conduct various steps, including drafting guidance, conducting administrative review with other federal agencies, soliciting public comment, and revising the guidance. Headquarters officials also said they were involved in a similar effort with other federal agencies to develop environmental review guidance in a publication called the 2015 Red Book, an effort they characterized as labor intensive. In addition, to establish a database to publicly report on the status of its feasibility studies, Corps headquarters officials said they would need to stand up and maintain a website similar to the Federal Infrastructure Permitting Dashboard for federal infrastructure projects. The Corps is one of many agencies involved in the effort to create and maintain this dashboard, and Corps headquarters officials said the effort was a resource-intensive process. Corps headquarters officials said that while they have not created the database required by WRRDA 2014, relevant information is available through the agency's annual public reports on active and recently completed feasibility studies' milestones and schedules. Corps headquarters officials also said the status of feasibility studies is often available on the Corps districts' websites. However, this information is not easily accessible without knowing which district office is responsible for a given feasibility study.

While Corps officials identified resource constraints as the primary reason for not addressing certain WRRDA 2014 provisions, they did not provide specific estimates of the resources that the Corps would need to address these provisions. In addition, the officials said they do not have a plan that addresses how and when they intend to implement the provisions they have yet to address. We have previously reported on leading practices for sound planning and have found that implementation plans that include resource estimates help ensure organizations achieve their goals and objectives. Such a plan would better position the Corps to address the remaining WRRDA 2014 provisions related to environmental review and public transparency.

The Corps Has Performed Some Review of Its Feasibility Study Acceleration Reforms but Has Not Conducted a Comprehensive Evaluation of Impacts

The Corps monitors feasibility studies and has done some review of its acceleration reforms but has not conducted a comprehensive evaluation of the impacts of these reforms. In terms of monitoring, Corps policy states that division and district leaders are responsible for monitoring feasibility studies within their areas of responsibility.
According to Corps policy, districts are to prepare a quality control plan for each project to ensure compliance with all technical and policy requirements, and divisions are responsible for quality assurance by ensuring that districts plan, design, and deliver quality projects on schedule and within budget. Corps headquarters officials also said they monitor the progress of feasibility studies during management meetings, during which they discuss the cost and status of feasibility studies as well as the quality of those studies; such meetings are largely led by Corps management or by the Corps' Planning Advisory Board, which oversees the quality of feasibility studies.

In addition to monitoring individual feasibility studies, Corps headquarters officials said they have conducted some broader reviews of how the acceleration reforms are progressing. For example, they conducted a trend analysis in October 2018 and again in April 2019 to identify the reasons why some feasibility studies have received exceptions from the timing and cost requirements of the 3x3x3 rule. These analyses, among other things, identified that some studies were too complex to be completed within 3 years or for less than $3 million, according to Corps officials. Furthermore, based on their experiences with various reform efforts, Corps officials said that they have been making real-time enhancements. For example, based on input from the Corps' Planning Advisory Board, Corps leadership has called for the agency to clarify its updated approach to risk management, according to Corps officials. These officials said each component within the Corps that is involved in conducting feasibility studies is to issue internal guidance on its risk management approach.

However, Corps headquarters officials said the Corps has not conducted a comprehensive evaluation of acceleration reforms to determine what impacts the reforms have had and whether any modifications to those reforms are needed. Corps and other agency officials and stakeholders we interviewed differed in their views of the acceleration reforms' impacts on the cost, time frames, and quality of feasibility studies:

Cost and time frames for completing feasibility studies. Many Corps officials said they agreed with the overall goals of reducing costs and increasing the speed with which feasibility studies are carried out. Some Corps headquarters and district officials said SMART Planning and the 3x3x3 rule are changing the Corps' culture around the amount of time and cost a feasibility study should take. However, several Corps district and headquarters officials said some Corps staff are experiencing difficulties with the cultural change represented by SMART Planning and the 3x3x3 rule. For example, a Corps district official said that in the past some Corps navigation economists had one year to complete some modeling analyses for feasibility studies, but they now are to complete such work in 90 days due to the constraints of SMART Planning and the 3x3x3 rule, which has been a difficult adjustment. In addition, many Corps headquarters, division, and district officials raised concerns that the cost limitation of $3 million may not be realistic, given differences in cost across geographic locations or the loss of spending value over time caused by inflation.

Quality of feasibility studies. Several Corps district officials we interviewed said they like the Corps' new policy of involving other agencies earlier and more frequently in the process.
They said they believe this approach has improved coordination with other agencies—by, for example, inviting the other federal agencies to join the Corps in a formal initiation meeting—which can in turn improve the overall quality of a feasibility study. However, some FWS and NMFS officials said they would like to be more involved and have better communication with the Corps than they currently do, such as throughout the feasibility study process rather than just at the beginning of a study and at the end when their formal review is requested. Similarly, several Corps headquarters, district, and division officials have commended the agency's new approach to risk management and stated that they aim to provide partner agencies with the information they need to conduct their work on the feasibility study. However, many Corps, FWS, and NMFS officials and nonfederal sponsors we interviewed said they were concerned that this new approach might result in insufficient information for making decisions, which could affect the quality of feasibility studies. For example, for six of the seven studies that we reviewed, officials from FWS and NMFS said it has become more difficult for them to provide meaningful input on the feasibility study alternatives considered because the Corps provides them with less detailed information than in the past.

Corps officials and other stakeholders we interviewed also expressed concern about possible impacts of the 3x3x3 rule on the quality of feasibility studies. For example, many Corps headquarters, division, and district officials said that because the 3x3x3 rule puts constraints on costs and time frames, if the scope of a feasibility study is not similarly reduced, it can affect the study's quality. In addition, nonfederal sponsors for four of the seven studies we examined expressed concerns with the 3x3x3 rule; three of these four nonfederal sponsors said they believe that the Corps is more focused on meeting the cost and schedule timelines than on the needs or quality of the study.

Senior Corps headquarters officials said they are confident that the cost and duration of feasibility studies have decreased overall as a result of the acceleration reforms but could not provide us with documentation to support this observation. Specifically, officials said in March 2019 that, based on analysis they had recently conducted, most feasibility studies are now being completed within 4 years and at a lower cost than feasibility studies undertaken prior to implementation of the 3x3x3 rule. While these results may not meet the 3x3x3 rule, officials said that these feasibility studies were the first subject to the acceleration reforms and may not be indicative of the likelihood that future feasibility studies will meet the rule. This is, in part, because Corps officials who are working on new feasibility studies have the benefit of the past several years of experience working with the SMART Planning process. Further, Corps officials said that they do not have formal documentation summarizing how the acceleration reforms have affected the quality of their feasibility studies overall, but they monitor individual feasibility studies, as described earlier. According to Corps headquarters officials, the Corps has not conducted a more comprehensive evaluation of the broader impacts of the acceleration reforms because it has only completed a small number of feasibility studies since 2012 under the acceleration reforms, and officials are focused on monitoring their ongoing individual studies.
These officials said they see the value in conducting such an evaluation as they complete more studies but that they have not developed formal plans to do so. Effective program evaluation includes an evaluation plan—that is, a plan that takes into account the questions guiding the evaluation, the constraints faced in studying the program, and the information needs of the intended users. Developing an evaluation plan would help position the Corps to conduct a timely and effective review of the impacts of the acceleration reforms overall.

The Corps Has Not Maintained Complete Milestone Data for Selected Feasibility Studies in Its Central Data System

The Corps has not maintained complete data on the 10 key milestones in its central data system for more than half of the feasibility studies we reviewed. Specifically, for the 19 feasibility studies we reviewed, we found that:

seven studies in the Corps' central data system included complete data for all 10 key milestones, and

12 studies were missing one or more milestones in the data system.

Table 1 provides information on the key milestone data included in the Corps' central data system for the 19 feasibility studies we reviewed. Many Corps headquarters and division officials said that Corps officials vary in their knowledge of the central data system. Many headquarters, division, and district officials we interviewed also acknowledged that, in general, the milestone information entered into the Corps' central data system can be inconsistent across different feasibility studies. Corps headquarters officials said agency policy requires district officials conducting feasibility studies to enter data on 10 key milestones for each study into the agency's central data system. However, while the policy identifies the 10 milestones, it explicitly requires that only two of the 10 milestones be entered into the agency's central data system. Specifically, the policy states that officials are to enter into the Corps' data system the milestones for (1) feasibility study initiation and (2) posting of the plan for peer and stakeholder review. Corps officials said the intent of the policy is for all 10 key milestones to be entered into the central data system but acknowledged that the policy may not be clear. In part to assist district officials in conducting feasibility studies, Corps headquarters officials created a template, which includes information on nine of the 10 key milestones. In addition, a Corps district official said she was unclear on the agency's expectations about which milestones to enter into the central data system. Corps headquarters officials said they contact district officials responsible for feasibility studies to obtain up-to-date information and ensure they understand the progress of each feasibility study. While this may help to ensure the accuracy and completeness of milestone data on feasibility studies, several Corps district officials said the process of responding to such data calls can be time consuming and take them away from their core responsibilities. Without clarifying its policy to help ensure district officials enter data on all key milestones for feasibility studies into its central data system, the Corps will not have complete data to efficiently monitor the progress of feasibility studies.

Conclusions

The Corps has taken steps to address the acceleration provisions in WRRDA 2014, such as those related to coordination.
However, it has not fully addressed provisions related to environmental review or public transparency. Corps officials said they do not have a plan that addresses implementation of remaining provisions or the resources that will be required to implement them. An implementation plan that includes resource estimates would better position the Corps to address the remaining provisions in WRRDA 2014. Further, the Corps monitors the progress of feasibility studies and has conducted some reviews of the individual acceleration reforms. However, the agency has not developed an evaluation plan for its acceleration reforms to better understand the reforms' impacts overall and determine whether any modifications to those reforms are needed. Developing such a plan would enable the Corps to conduct a timely and effective evaluation. Further, without clarifying its policy to ensure district officials enter all key milestone dates for feasibility studies into its central data system, the Corps will continue to lack complete data to efficiently monitor the progress of feasibility studies.

Recommendations for Executive Action

We are making the following three recommendations to the Department of Defense:

The Secretary of the Army should direct the Assistant Secretary of the Army for Civil Works to develop an implementation plan that includes resource estimates to address the remaining WRRDA 2014 acceleration provisions. (Recommendation 1)

The Secretary of the Army should direct the Assistant Secretary of the Army for Civil Works to develop a plan to conduct a comprehensive evaluation of the impacts of the agency's feasibility study acceleration reforms. (Recommendation 2)

The Secretary of the Army should direct the Assistant Secretary of the Army for Civil Works to clarify its policy to help ensure district officials enter data on all key milestones for feasibility studies into its central data system. (Recommendation 3)

Agency Comments

We provided a draft of this report to the Department of Defense for review and comment. In its written comments, reprinted in appendix III, the Department concurred with our recommendations. The Department commented that we should redirect our recommendations to the Assistant Secretary of the Army for Civil Works rather than to the Chief of Engineers and the Commanding General of the U.S. Army Corps of Engineers, which we did. The Department also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Department of the Interior, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or FennellA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report examines the extent to which the U.S. Army Corps of Engineers has (1) addressed the feasibility study acceleration provisions under the Water Resources Reform and Development Act of 2014 (WRRDA 2014), (2) reviewed the impact of its feasibility study acceleration reforms, and (3) maintained complete milestone data for feasibility studies in its central data system.
To conduct our work, we reviewed the first 19 feasibility studies subject to the WRRDA 2014 feasibility study acceleration provisions, among other things. These feasibility studies included those that (1) were initiated after June 10, 2014, the date WRRDA 2014 was enacted, through August 15, 2018, and (2) for which an environmental impact statement (EIS) is prepared. We chose to review studies through August 15, 2018, because after that date the Corps initiated several feasibility studies using funding in a supplemental appropriation the Corps received in February 2018 to conduct work in response to recent large hurricanes, and Corps officials said they planned to use a somewhat different approach to conducting these studies. For each study, we reviewed Corps guidance on the agency's process for planning feasibility studies and other related documentation. We examined information from the Corps on the progress and status of the 19 feasibility studies. We also reviewed information for each feasibility study on the Corps' business line or program, the district or division overseeing the study, and information on which studies had received exceptions from the 3x3x3 rule.

We also conducted a more in-depth review of seven of these 19 feasibility studies. We selected these seven studies because they represent different types of water resources development projects, were at varying stages of completion, and are geographically dispersed. The seven studies, and the Corps districts leading these studies, are:

Coastal Texas Protection and Restoration (Galveston District);

Houston Ship Channel Expansion Channel Improvement Project (Galveston District);

Matagorda Ship Channel (Galveston District);

Gulf Intracoastal Waterway: Brazos River Floodgates and Colorado River Locks Systems (Galveston District);

Mississippi River Ship Channel, Gulf to Baton Rouge, Louisiana General Reevaluation Report (New Orleans District);

Sacramento River, General Reevaluation Report (Sacramento District); and

Port of Long Beach Deep Draft Navigation Improvements (Los Angeles District).

For each of these seven studies, we reviewed project management plans and other project documents, such as draft feasibility studies, if available. From August 2018 through November 2018, we visited the four district offices that led these seven studies: the Corps' Galveston, Los Angeles, New Orleans, and Sacramento district offices. During these visits, we discussed the status and progress of each of these feasibility studies and the Corps' coordination with other federal agencies and nonfederal sponsors, among other things. For each study, we interviewed officials from nonfederal sponsors—such as the state or local government associated with individual studies—and from federal partners—including the Fish and Wildlife Service (FWS) and the National Marine Fisheries Service (NMFS). We selected FWS and NMFS because of the important role they play in reviewing environmental aspects of Corps feasibility studies and their role in the 2015 joint publication on coordination. We also interviewed Corps officials at the three divisions overseeing the districts that conducted the feasibility studies we selected. This included officials from the Corps' South Pacific, Mississippi Valley, and Southwestern divisions. While the seven studies provide illustrative examples, they are not generalizable to all of the Corps' feasibility studies for which an EIS is prepared.
We developed and used four standard sets of semi-structured interview questions for the following groups: (1) the Corps district office officials conducting the seven selected feasibility studies, (2) FWS and NMFS officials working with the Corps on these studies, (3) Corps division officials overseeing each study, and (4) nonfederal sponsors who worked with the Corps on each study. To characterize the views of those we interviewed throughout the report, we defined modifiers to quantify officials' views as follows: "some" refers to responses from two to four Corps officials and/or stakeholders; "several" refers to responses from five to seven Corps officials and/or stakeholders; and "many" refers to responses from eight or more Corps officials and/or stakeholders.

To examine the extent to which the Corps addressed the WRRDA 2014 feasibility study acceleration provisions, we compiled a list of the provisions. We then reviewed the Corps' documentation related to the implementation of these provisions, including agency guidance and policies. We compared this information with the WRRDA 2014 acceleration provisions. To do this, we created categories for the acceleration provisions and grouped the provisions by category.

To examine the extent to which the Corps has reviewed the impact of its acceleration reforms, we reviewed Corps policy, guidance, training, and other documentation on implementation of those reforms. We use the term acceleration reforms to refer to the requirements that new feasibility studies are to be completed in less than 3 years and at a cost of not more than $3 million, the Corps' risk management of feasibility studies through its new SMART Planning process, and the WRRDA 2014 acceleration provisions. We reviewed documentation from the Corps on the feasibility studies that have received exceptions from the 3x3x3 rule. We interviewed Corps headquarters officials to learn what, if any, (1) new policies were in place to help division and district staff implement the reforms; and (2) review or analysis headquarters officials had completed of the impacts of the reforms on the cost, time frames, or quality of feasibility studies. We also interviewed Corps district and division officials who were responsible for the seven studies about how the acceleration reforms were working, as well as FWS and NMFS officials and nonfederal sponsors about their views of the impacts of the new processes on their work on these feasibility studies. We compared this information with program evaluation guidance.

To examine the extent to which the Corps has maintained complete milestone data for feasibility studies in its central data system, we obtained milestone data from the system for the 19 Corps feasibility studies in our review. We analyzed the milestone data to determine which milestone dates were in the system and then worked with Corps headquarters officials to verify that information. We assessed the reliability of these data by reviewing related documentation and interviewing knowledgeable officials, among other things. We determined that the data were sufficiently reliable for the purpose of understanding which districts and divisions conducted feasibility studies and for understanding the types of milestones that were entered into the central data system. However, as discussed in this report, we determined that the milestone data were not sufficiently reliable for other purposes.
We reviewed data for all feasibility studies in our review to determine whether they conformed to Corps expectations on what milestone data should be in the system. We estimated the median time it took the Corps to complete a feasibility study for which an EIS was prepared. To do this, we obtained from the Corps website the names of all feasibility studies completed with a Chief's Report from July 2008 through June 2018 and the dates they were completed. We verified with Corps headquarters officials that the list of studies with a Chief's Report was current for that time frame. For each of these feasibility studies, we then found the associated notice of intent to complete an EIS as published in the Federal Register. While the date the Corps filed a notice of intent to complete an EIS is not the initiation date for the feasibility study, we used it as a proxy since Corps headquarters officials said that, in the past, the notice of intent was filed soon after a study was initiated. We calculated the time between the date the notice of intent was filed and the date of the Chief's Report to arrive at an estimate of the amount of time each feasibility study took to complete. We then calculated the median time it took to complete these feasibility studies.

We conducted this performance audit from April 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform our audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: U.S. Army Corps of Engineers Project Acceleration Statutory Provisions and Corps Actions Related to Each Provision

provides guidance, independently evaluates, and approves the document before taking subsequent action; and ensures the project sponsor complies with all design and mitigation commitments. In addition, any NEPA documents prepared in this way are to be adopted and used by any federal agency when making any determination to the same extent the agency could adopt or use a document prepared by another federal agency under NEPA.

Category name: Coordination and Administration

Designating Jurisdictional Agencies
GAO summary of statutory provision: For all federal, state, and local governments and Indian tribes that may have jurisdiction over a project or that may be required to review some aspect of the feasibility study or make a determination on issuing a permit or other decision, the Corps must: identify these agencies as early as practicable, and invite these agencies to participate or coordinate as early as practicable and set a deadline for response.
Corps actions related to provision: The Corps issued its WRRDA 2014 acceleration guidance in March 2018. In addition, a Corps official indicated that portions of this provision are implemented under the Corps' NEPA procedures, as well as the Principles and Guidelines and Planning Guidance Notebook. Any federal agency invited by the Corps will be designated as a cooperating agency unless that agency follows certain specified steps.
Plan for Coordinating Input and Completing Environmental Review
GAO summary of statutory provision: The Corps, after consultation with and with the concurrence of relevant entities, is to establish a plan for coordinating public and agency participation in, and comment on, the environmental review process for each feasibility study or category of studies. As soon as practicable but not later than 45 days after the close of the public comment period on a draft Environmental Impact Statement (EIS), the Corps, after consultation with and with the concurrence of relevant entities, also is to establish, as a part of the coordination plan, a schedule for completing the environmental review process. In doing so, the Corps is to consider certain factors, provide the schedule to relevant entities, and make it available to the public.
Corps actions related to provision: The Corps issued its WRRDA 2014 acceleration guidance in March 2018. In addition, a Corps official indicated that portions of this provision are implemented under the Corps' NEPA procedures.

The Corps issued its WRRDA 2014 acceleration guidance in March 2018.

not more than 60 days for agency or public comment on a draft EIS, and not more than 30 days for agency and public comment on other environmental review documents.

Issue Identification and Resolution
GAO summary of statutory provision: The Corps, the cooperating agencies, and any participating agencies are required to work cooperatively to identify and resolve issues that could delay completion of the environmental review process or result in the denial of any approval required for the project study under applicable laws.
Corps actions related to provision: The Corps issued its WRRDA 2014 acceleration guidance in March 2018. In addition, a Corps official indicated that portions of this provision are implemented under the Corps' NEPA procedures and Planning Guidance Notebook. Many Corps district officials said they have used various strategies, such as meetings, to resolve issues with other agencies.

The Corps is to make information available to the cooperating and participating agencies as soon as practicable in the environmental review process regarding the environmental and socioeconomic resources located within the project area and the general locations of the alternatives under consideration. Based on information from the Corps, cooperating and participating agencies are to identify as early as practicable any issues of concern regarding the potential environmental or socioeconomic impacts of the project, including any issues that could substantially delay or prevent an agency from granting a permit or other approval that is needed for the project study.

On the request of a participating or cooperating agency or project sponsor, the Corps is to convene an issue resolution meeting with the relevant entities to resolve issues that may (1) delay completion of the environmental review process or (2) result in denial of any approval required for the project study under applicable laws. Such a meeting is to be held not later than 21 days after the Corps receives the request for the meeting, unless the Corps determines there is good cause to extend that deadline. Additionally, the Corps may convene an issue resolution meeting at its discretion, regardless of whether such a meeting is requested. If resolution cannot be achieved within 30 days of an issue resolution meeting and the Corps determines that all information necessary to resolve the issue has been obtained, the Corps is to forward the dispute to the heads of the relevant agencies for resolution.
Corps actions related to provision: The Corps issued its WRRDA 2014 acceleration guidance in March 2018.

The Corps must notify the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure as soon as practicable. The Corps must continue notifications every 60 days thereafter until all decisions have been made by the federal agency. The amount of funds made available to support the office of the head of that federal agency must be reduced by certain specified amounts, subject to certain limitations.

The Corps, NMFS, and FWS jointly issued a coordination guide for conducting feasibility studies in September 2015. The Corps also issued its WRRDA 2014 acceleration guidance in March 2018. In addition, a Corps official indicated that portions of this provision are implemented under the agency's NEPA procedures and Planning Guidance Notebook, as well as the Principles and Guidelines.

Upon request by a state or project sponsor, and to the maximum extent practicable and appropriate, as determined by the agencies, the Corps and other federal agencies with relevant jurisdiction in the environmental review process are to provide technical assistance to the state or project sponsor in carrying out early coordination activities. If requested by a state or project sponsor, the Corps, in consultation with other federal agencies with relevant jurisdiction, may establish memoranda of agreement with certain entities to carry out early coordination activities, subject to certain limitations.

New Information
GAO summary of statutory provision: The Corps is to consider information received after the close of a comment period if the information satisfies the requirements for a supplemental EIS under NEPA regulations.
Corps actions related to provision: The Corps issued its WRRDA 2014 acceleration guidance in March 2018.
Categorical Exclusions

Not later than 180 days after June 10, 2014, the Corps is to conduct an internal survey on its use of categorical exclusions since 2005, publish a review of the survey that includes a description of certain specified information, and solicit requests from other federal agencies and project sponsors for new categorical exclusions. If the Corps identifies a category of activities that merits establishing a new categorical exclusion, the agency is also to propose new categorical exclusions by June 10, 2015.

Corps actions related to provision: As of May 2019, the Corps had conducted an internal survey and solicited public input through the Federal Register on its procedures for implementing NEPA. However, Corps headquarters officials said they had not published a review of the survey, targeted requests for new categorical exclusions to other federal agencies and nonfederal sponsors, or proposed new exclusions as merited.

Performance Measurement

The Corps is to establish a program to measure and report on progress made toward improving and expediting the planning and environmental review process.

Corps actions related to provision: The Corps has not taken action as of May 2019.

Guidance on Coordinated Environmental Review

The Corps, in consultation with the Council on Environmental Quality and other federal agencies with jurisdiction over actions or resources that may be impacted by a project, is to prepare guidance documents that describe the coordinated environmental review processes that the Corps intends to use to implement the reforms for the planning of projects.

Corps actions related to provision: The Corps has not taken action as of May 2019. Corps officials said they have reached out to the Council on Environmental Quality several times and are waiting for feedback on preparing this guidance.

Guidance on Programmatic Approaches to Environmental Review

The Corps is to issue guidance on the use of programmatic approaches to carry out the environmental review process that carries out specified actions and meets specified requirements.

Corps actions related to provision: The Corps has not taken action as of May 2019.

Notes: U.S. Army Corps of Engineers, Implementation Guidance for Section 1005 of the Water Resources Reform and Development Act of 2014 (WRRDA 2014), Project Acceleration (Washington, D.C.: March 2018). National Environmental Policy Act of 1969, Pub. L. No. 91-190, 83 Stat. 852 (1970) (codified as amended at 42 U.S.C. §§ 4321-4347).

Appendix III: Comments from the Department of Defense

Appendix IV: GAO Contact and Staff Acknowledgments

Staff Acknowledgments: In addition to the contact named above, Vondalee R. Hunt (Assistant Director), Candace Carpenter (Analyst in Charge), Matthew Levie, and Rebecca Makar made key contributions to this report. In addition, Michael Armes, Justin Fisher, Gwen Kirby, Patricia Moye, and Kiki Theodoropoulos contributed to the report.
Why GAO Did This Study

Water resources development projects undertaken by the Corps—such as those to reduce the risks from coastal storms—historically have taken years or even decades to complete. To implement these projects, the Corps first conducts a feasibility study, which includes an analysis of the federal interest and the costs, benefits, and environmental impacts of a project; such studies can take several years to complete. WRRDA 2014 requires the Corps to, among other things, conduct activities to accelerate the completion of feasibility studies. The act also includes a provision for GAO to assess acceleration reforms. This report examines the extent to which the Corps has (1) addressed the WRRDA 2014 feasibility study acceleration provisions, (2) reviewed the impact of its feasibility study acceleration reforms, and (3) maintained complete milestone data for its studies. GAO reviewed WRRDA 2014 and Corps documents; reviewed 19 feasibility studies subject to the act's acceleration provisions; analyzed data on key milestones; and interviewed Corps officials and stakeholders.

What GAO Found

The U.S. Army Corps of Engineers has taken steps to address some feasibility study acceleration provisions under the Water Resources Reform and Development Act of 2014 (WRRDA 2014) but not others. For example, to implement a provision related to coordination, the Corps in September 2015 issued guidance emphasizing the importance of early coordination with other federal agencies to avoid delays later in the process. However, the Corps has not taken steps to address other provisions, such as one that calls for the Corps to establish a database to make publicly available information on the status of feasibility studies, citing resource constraints. The Corps does not have a plan to address these other provisions. A plan that includes resource estimates would better position the Corps to address the remaining acceleration provisions. The Corps regularly monitors feasibility studies and has conducted some reviews of its acceleration reforms, such as an analysis that found that some studies were too complex to complete within the agency's timing and cost requirements—i.e., within 3 years and for less than $3 million. However, the Corps has not comprehensively evaluated the reforms' impacts. Corps officials and stakeholders expressed differing views on the reforms' impacts on the costs, time frames, and quality of feasibility studies. For example, many Corps officials GAO interviewed said the reforms' overall goals to reduce studies' cost and time frames were positive, but others raised concerns, such as that the $3 million cost limitation may not be realistic for different geographic areas. Corps officials said they have not conducted a comprehensive impact review in part because they are focused on monitoring ongoing studies. These officials said they see the value in conducting such a review as they complete more studies, but they have not developed a plan to do so. Developing an evaluation plan would help the Corps conduct a timely and effective review. The Corps has not maintained complete milestone data in its central data system for the 19 feasibility studies GAO reviewed (see figure). For example, 12 studies did not include data for one or more milestones. Corps officials said agency policy requires the entry of information on 10 key milestones in the agency's central data system.
However, GAO found that the policy explicitly requires only 2 of the 10 key milestones to be entered into the agency's central data system. Without clarifying its policy to help ensure officials enter data on all milestones in the central data system, the Corps will not have complete data to efficiently monitor the progress of feasibility studies.

What GAO Recommends

GAO is making three recommendations to the Department of Defense to direct the Assistant Secretary of the Army for Civil Works to (1) develop a plan with resource estimates to address the remaining WRRDA 2014 provisions, (2) develop a plan to comprehensively evaluate the impacts of the agency's acceleration reforms, and (3) clarify its policy to help ensure district officials enter data on required milestones for feasibility studies in its central data system. The agency concurred with the recommendations.
Background

The National Defense Strategy is DOD's primary strategy document, providing a foundation for all other strategic guidance in the department. The National Defense Authorization Act for Fiscal Year 2017 required DOD to develop a national defense strategy and update it at least once every 4 years and, during the years without an update, to assess the implementation of the strategy and whether any revision is necessary. The National Defense Strategy replaces the Quadrennial Defense Review, which the Armed Services Committees concluded had become too slow and ineffective to provide relevant strategic direction to the department. For each new strategy, DOD is required to identify, among other things: DOD's strategic priority missions; the force structure, readiness, posture, and capabilities needed to support the strategy; and major investments required by the strategy. A separate provision in the act also established a Commission to assess the 2018 National Defense Strategy. The provision required the Commission to review the assumptions, missions, force posture and structure, and risks associated with the strategy. Congress expressed continued interest in DOD's strategy implementation and assessment in the John S. McCain National Defense Authorization Act for Fiscal Year 2019, which included several provisions related to these matters. The National Defense Strategy falls under the President's National Security Strategy, which outlines the overarching security strategy for the federal government. The National Defense Strategy is above the National Military Strategy, which provides more detailed military direction. Figure 1 provides the hierarchy and description of key U.S. strategic guidance documents. Organizations across DOD play a role in providing analytic support to senior leaders as they make force structure decisions to support the National Defense Strategy. Table 1 provides a summary of the organizations with key roles and responsibilities for providing analytic support to senior leaders making force structure decisions.

DOD Has Established an Approach to Provide Senior Leaders with Analytic Support for Making Force Structure Decisions

DOD established its approach, Support for Strategic Analysis (SSA), in 2002 to provide analytic support to DOD senior leaders as they deliberate strategy and budget matters and to support evaluations of force structure needs across the joint force. SSA is structured to do this by providing a common set of assumptions for various military threats that form the basis for further analysis across the department. DOD guidance states that SSA is intended to provide a common starting point for the exploration of various approaches to address the threats. DOD guidance further states that analyses should provide senior leaders with insights on the relative risks of various operational approaches and force structures. Senior leaders would then have a basis to weigh options, examine tradeoffs across the joint force, and drive any force structure changes necessary to meet the strategy. For more information on the origin of SSA, see the sidebar below.

Origin of Support for Strategic Analysis

DOD officials told us that the department developed what became SSA because then-Secretary of Defense Donald Rumsfeld was frustrated by the lack of objective measures to compare competing force structure proposals.
During the 1990s, each service developed its own analytic process and assumptions for assessing force structure needs to develop requirements for budget submissions. Each service's analytic process tended to favor its preferred force structure and operational approach. DOD officials stated that the lack of a common analytic starting point for all of the services also meant that senior leaders had difficulty getting beyond debates about the services' respective assumptions during discussions on force structure priorities. As a result, the Secretary of Defense had no objective basis by which to decide whether, for example, a Navy proposal to buy more ships or an Air Force proposal to buy more fighter aircraft was the best way for the department to use its limited resources to support strategic priorities.

SSA is led by OUSD (Policy), the Joint Staff, and CAPE—collectively referred to as the Tri-Chairs. DOD guidance assigns each Tri-Chair responsibility for creating one of three increasingly detailed products for a variety of military threats that, taken together, comprise the common starting point for additional analysis of that threat. The resultant SSA product library is then available to the services and other DOD organizations for further analysis. DOD guidance notes that the threats SSA products address are examples of the types of threats U.S. joint forces are expected to be able to address with acceptable risk. However, the guidance states that the forces described in the products are not intended to constitute DOD's force structure requirements. Instead, analysis using these products is intended to help senior leaders establish force structure requirements that balance risk across a range of threats, within fiscal constraints. Table 2 identifies the three SSA products that are intended to form the common starting point for analysis for a given plausible threat, along with the lead Tri-Chair for each product type. According to DOD guidance, the military services are to support the Tri-Chairs in developing the SSA products and, according to DOD officials, are the primary users of these products. The guidance requires that the services use SSA products as common starting points for studies evaluating their force structure needs for implementing the defense strategy and supporting their budget development, among other things. Although the starting points are common across the department, each service uses its own analytic process to evaluate its specific force structure needs for implementing the strategy and supporting its budget development (see app. I for further details on each service's analytic process). The services may examine any plausible threat in the SSA library that they believe may help them understand their force structure needs. However, the 2018 National Defense Strategy identifies several key threats and the principal priorities for the department that the services must prioritize when developing their force structures. Specifically, the unclassified summary of the strategy calls for the department to increase and sustain investments towards the long-term strategic competitions with China and Russia, and to concurrently sustain its efforts to deter and counter rogue regimes such as North Korea and Iran, defeat terrorist threats to the United States, and consolidate gains in Iraq and Afghanistan with a more resource-sustainable approach.
Further, budget guidance—in particular the Defense Planning Guidance—directs each service on which threats it must focus as part of its budget development process. Figure 2 provides a generalized overview of how the SSA process was designed to operate.

DOD's Analytic Approach Has Not Provided Senior Leaders with Needed Support for Major Force Structure Decisions and Alternative Approaches Are Incomplete

SSA has not provided senior leaders with the analytic support they need to evaluate and make fully informed decisions regarding the force structure needed to implement the National Defense Strategy. DOD has recognized this and attempted to reform SSA for several years, including exploring alternative options for providing senior leaders with better decision-making support. However, DOD has not fully developed these approaches and it is unclear whether they will provide the analytic support needed.

Support for Strategic Analysis Has Not Provided Senior Leaders with Needed Analytic Support Due to Three Interrelated Challenges

To date, SSA has not provided the analytic support senior leaders need to evaluate and determine the force structure required to implement the defense strategy. DOD senior leaders have documented concerns with SSA in relevant guidance. For example, DOD's 2016 Defense Analytic Guidance stated explicitly that there were cracks in the department's analytic foundation, many of which originate within SSA. Further, CAPE and the Joint Staff had disengaged from the SSA process by this time but, as of September 2018, the services were still using SSA products for their force structure analyses and budget development. Based on our analysis, we believe that SSA has not yielded the analytic support that it was intended to provide owing to three interrelated and persistent challenges: (1) cumbersome and inflexible products, (2) limited analysis that tends not to deviate from the services' programmed force structures and has not tested key assumptions, and (3) an absence of joint analysis evaluating competing force structure options and cross-service tradeoffs.

SSA Products Are Cumbersome and Inflexible

DOD has not kept the SSA products complete and up to date because they are cumbersome and inflexible. DOD guidance states that SSA products are to be common starting points for analyses, including key threats identified in strategic guidance. DOD guidance also states that SSA products should retain consistency with DOD strategy and current intelligence and should incorporate operational approaches effective at mitigating future threats. Credible independent analysis of an issue requires a detailed, well-understood, up-to-date common basis for that analysis. As of September 2018, DOD's library of products was incomplete and outdated. Specifically, the Detailed View was not available for any of the threats, and Joint Staff officials told us they stopped producing joint CONOPS through SSA in 2015. Moreover, the Joint Staff retired all of the existing SSA CONOPS in March 2018 because they were outdated and/or not aligned with the 2018 National Defense Strategy—though they were still available for the department to access. Service officials also told us that many of the approved Defense Planning Scenarios and CONOPS for the key threats identified in the 2018 National Defense Strategy do not reflect up-to-date military objectives and adversary capabilities.
Additionally, the 2018 National Defense Strategy outlines a new force posture and employment model that could have major implications for future CONOPS. However, DOD is still developing these concepts and, as such, they are not yet reflected in any SSA products. Specific details on the status of key SSA products were omitted because the information is classified. One of the key reasons DOD did not keep the products complete and up to date was that developing and approving highly detailed and complex SSA products was cumbersome, taking a significant level of effort and time. Tri-Chair officials told us that developing the CONOPS and Detailed View, in particular, was difficult because there was a desire to gain consensus with all of the stakeholders and because the services wanted these products to have high fidelity detail in order to run their campaign models. For example, CAPE and Joint Staff officials told us that it took between 1 and 2 years to build and approve the Detailed View for one threat scenario. The officials added that the level of detail included made the product inflexible and difficult to vary. CAPE and Joint Staff officials agreed that this product became far too detailed and time-consuming and used a substantial amount of the department’s analytic capacity. As a result, the officials told us that CAPE abandoned building additional Detailed Views in 2012. The lack of agreed-upon details about the forces required has had other effects. For example, OUSD (Policy) and Joint Staff officials told us that the services still wanted the comprehensive information that the Detailed View was supposed to provide for use in their campaign models. Without CAPE producing Detailed Views, the officials noted that some of the detailed information migrated into the higher level CONOPS, making developing and analyzing that product more difficult and time-consuming as well. However, all four military services told us that they need and continue to use the SSA products—specifically, the Defense Planning Scenarios and CONOPS—to support program and budget formulation. Service officials also told us they have adapted CONOPS, as individual services or with other services, to better reflect the operational environment (e.g., updating intelligence estimates on adversary capabilities). However, CAPE and OUSD (Policy) officials told us that this results in the services’ analyses no longer being common and comparable across the department. The John S. McCain National Defense Authorization Act for Fiscal Year 2019 reiterates that OUSD (Policy) must, in coordination with the other Tri-Chairs, develop planning scenarios by which to assess joint force capabilities, among other things. Until the Tri-Chairs determine the analytic products needed and the level of detail that is sufficient to serve as a common starting point but also flexible enough to allow for variation of analysis, and ensure these products are updated, the military services will likely continue to generate budget requests based on analysis that is not comparable. As DOD’s 2016 Defense Analytic Guidance noted about the fiscal year 2017 budget review, the lack of a common basis for their analysis hampers the department’s ability to understand the relationship between future warfighting risks identified in analysis and the services’ programmatic decisions. 
SSA Analysis Does Not Significantly Deviate from the Services' Programmed Force Structures or Test Key Assumptions

Although DOD's guidance stated that SSA will facilitate a broad range of analysis exploring innovative force structure approaches for mitigating future threats identified in the strategy, SSA has not done so. Innovative force structure approaches could include, for example, alternative CONOPS and deviations from programmed forces. The 2018 National Defense Strategy stated that DOD's operational approach largely dates from the immediate post-Cold War era when U.S. military advantage was unchallenged and the threats were rogue regimes, which is no longer the case. OUSD (Policy) officials told us that SSA CONOPS also reflect this outdated approach that depends on overwhelming force for success, which is unrealistic against advanced adversaries. Similarly, DOD's 2016 Defense Analytic Guidance called for SSA to emphasize analyzing and assessing risk against key threats rather than on defending predetermined force levels or capabilities. Rather, the 2018 strategy stated that the department must relentlessly pursue innovative solutions and devise insurmountable dilemmas for future adversaries and that incrementalism or evolutionary progress is inadequate. However, Tri-Chair and service officials told us the services have been reluctant to conduct or share these types of boundary-pushing analyses through SSA for fear that they will jeopardize their forces or limit their options. Tri-Chair officials also told us that the services have leveraged their participation in developing SSA products to ensure their favored major force structure elements are included in the common starting point. Joint Staff officials noted that they were able to do this because SSA did not constrain what force structure the services could use for their analysis. That is, if the force structure was programmed, they could use it because the goal was to overwhelm the adversary. However, by not significantly deviating from the starting points, the services were able to ensure that their analytic outcomes support the need for the already-programmed force. Additionally, several questionable assumptions underpin the analysis. Sensitivity analysis examines the effects that changes to key assumptions have on the analytic outcome and is helpful for understanding risk. It can therefore provide insight to decision makers into how risk levels would change if conditions did not match the assumptions. However, Tri-Chair officials told us that the services, using SSA products as a starting point, generally have not conducted sensitivity analyses on key operational assumptions or on factors that may not be static (or at least have some uncertainty) and, if varied, may raise or lower the risk of completing assigned tasks or missions. According to these officials, as well as our past work, certain questionable assumptions have not been analyzed through sensitivity analysis as part of SSA. For example, all four services tend to assume that their readiness for a conflict will be high, consistent with the level directed in guidance. However, we reported in 2018 that at the individual service level, the military services continue to report readiness challenges and readiness rebuilding is anticipated to take 4 years or more. Specific details of service-specific assumptions that are problematic were omitted because the information is classified.
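The passage above describes sensitivity analysis in general terms; a small worked example may make the idea concrete. The sketch below is purely illustrative—the risk function, parameter names, and values are invented for this illustration and do not represent any actual DOD, service, or SSA campaign model. It simply varies one assumption at a time (for example, the readiness level the services tend to assume will be high) and shows how the computed risk outcome shifts.

```python
# Minimal sketch of a sensitivity analysis on key operational assumptions.
# The risk model, parameter values, and thresholds below are hypothetical
# illustrations, not any actual DOD or service campaign model.

def mission_risk(readiness, warning_days, attrition_rate):
    """Toy risk score (0 = low risk, 1 = mission failure likely).

    Lower readiness, shorter strategic warning, and higher attrition
    each raise the computed risk.
    """
    risk = ((1.0 - readiness) * 0.5
            + (1.0 / max(warning_days, 1)) * 5.0
            + attrition_rate * 2.0)
    return min(risk, 1.0)

# Baseline assumptions, as a planning scenario might fix them.
baseline = {"readiness": 0.90, "warning_days": 30, "attrition_rate": 0.05}

# Vary one assumption at a time and observe the effect on the outcome.
excursions = {
    "readiness": [0.90, 0.80, 0.70, 0.60],       # e.g., readiness below the directed level
    "warning_days": [30, 21, 14, 7],             # e.g., less strategic warning
    "attrition_rate": [0.05, 0.10, 0.15, 0.20],  # e.g., a more capable adversary
}

for param, values in excursions.items():
    print(f"\nSensitivity of mission risk to '{param}':")
    for v in values:
        case = dict(baseline, **{param: v})  # baseline with one parameter varied
        print(f"  {param} = {v:>5}: risk = {mission_risk(**case):.2f}")
```

Output of this kind is what gives decision makers visibility into how risk would change if, for instance, readiness rebuilding took longer than assumed—the insight the report says SSA analyses generally have not provided.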
The services have been reluctant to independently examine a broad range of innovative force structure options and conduct sensitivity analysis on key operational assumptions through SSA because, according to service officials, competing priorities mean they believe they can generally effect only marginal changes in their budgets from year to year, and they have limited analytic capacity. Service officials noted how the majority of their service's budget each year is constrained by must-pay bills, including personnel costs, supporting existing force structure, established contracts, sustaining the industrial base, and statutory mandates. As such, unless directed to by senior leaders, service officials told us that they typically do not use their limited analytic resources to conduct sensitivity analysis or explore alternative approaches. The sensitivity analyses they have been directed to conduct have generally been focused on smaller force structure changes, but have provided useful insights. For example, the Air Force conducted an analysis for its fiscal year 2019 budget request of how risk would be affected with various F-35 buy-rates and investments in munitions and base defense. The Air Force found that it could reduce risk by keeping its F-35 buy-rate steady instead of increasing it and could use the resulting savings to bolster its munitions stocks. DOD stated in its 2016 Defense Analytic Guidance that SSA is not adequately exploring innovative approaches to meet future challenges, and called for OUSD (Policy) to identify key operational assumptions for the services to use to conduct sensitivity analyses. However, the direction provided by the department has thus far been limited and has generally not provided specific guidance requiring the services to explore a range of innovative force structure approaches or identified key assumptions on which the services must conduct sensitivity analyses. For example, the three Defense Planning Scenarios updated in 2018 for the purposes of analysis in support of the fiscal years 2020 and 2021 budget requests included a number of parameters for further analytic exploration. However, the guidance encourages, but does not require, the services to conduct these analyses. As previously discussed, officials said the services are reluctant to conduct or share this analysis and are unlikely to do so without specific direction. As a result, SSA analysis largely reflects the services' programmed force structures and has not driven any significant changes to force structure or resource allocation within DOD and lacks credibility with senior leaders, as documented in DOD guidance. Until DOD provides specific guidance requiring the services to explore a range of innovative force structure approaches relevant to the threats identified in the 2018 National Defense Strategy, including identifying key assumptions for sensitivity analyses, DOD senior leaders may not have full visibility into the risks in the joint force's ability to execute the missions set out in the National Defense Strategy.

DOD Lacks Joint Analytic Capabilities to Assess Force Structure

A key stated goal of SSA was to create a common analytic foundation so that the services' force structures could be evaluated as a joint force—as it would fight. However, SSA has not resulted in this type of joint analysis. Specifically, DOD guidance states that SSA is intended to facilitate the comparison and evaluation of competing force structure options and cross-service tradeoffs.
DOD guidance also states that assessments of the aggregate capacity of the joint force can provide an analytic foundation to identify risk and understand tradeoffs across competing demands for the force. According to the services, SSA products provide a valuable resource and are critical to informing programmatic decisions. However, DOD’s 2016 Defense Analytic Guidance noted that there was a dearth of joint analysis at the operational and strategic levels; the department lacks a body or process to conduct or review joint force analysis; and the department’s SSA efforts were focused on developing, versus analyzing, the common starting points. Accordingly, it reiterated the need for SSA to free up time and resources to conduct joint analysis and review competing analyses. Tri-Chair officials told us that DOD currently compares and makes decisions on force structure options primarily through the budget process; however, such budget reviews are typically limited to specific areas of interest. The officials added that program and budget review is not the best place to evaluate joint force structure tradeoffs because the kinds of issues examined in the budget process are more limited in scope and generally do not include comprehensive cross-service comparisons. Lacking joint analytic capability to assess force structure needs could be problematic as the department moves forward to implement the 2018 National Defense Strategy. The John S. McCain National Defense Authorization Act for Fiscal Year 2019 directed OUSD (Policy), in coordination with the other Tri-Chairs, to conduct assessments of the capabilities of the joint force to achieve required objectives. However, Tri-Chair officials also told us that, as of 2018, there was not a mechanism in place for DOD to routinely assess joint force needs and force structure tradeoffs across the military services. As previously discussed, in 2016 this was identified as an issue, and limited progress has been made since then to ensure adequate joint analysis to support senior leader decision-making. Further, OUSD (Policy) officials told us that SSA has not been responsive to senior leaders because it has not provided timely and comprehensive answers to important questions that only joint analysis can provide, such as the extent to which the joint force can successfully meet a campaign’s overall objectives (e.g., win the war) or the extent to which cross-service tradeoffs would affect a specific campaign. As a result, force structure decisions in the department based on SSA have remained largely relegated to marginal changes through program and budget review, according to DOD. The department’s gap in a joint analytic capability is particularly problematic in light of the National Defense Strategy’s call for urgent change at a significant scale and recent proposals by the services to greatly expand their force structure—including the Navy’s plan to grow the fleet by as much as 25 percent and the Air Force’s plan to grow squadrons by 24 percent. Based on our discussions with officials and our analysis, there are a number of different options the department has for conducting such joint analyses, including establishing a separate body with these capabilities or specifying the organizational responsibilities and processes for conducting these comparisons and analyses. 
Until the department has an approach for conducting joint analyses or comparing competing analyses, DOD senior leaders will not have a robust joint analytic foundation to rely on to evaluate competing force structure options and cross-service tradeoffs.

DOD Is Exploring Options for Revising Its Analytic Approach for Making Force Structure Decisions, but These Efforts Are Incomplete

The department has recognized that SSA has shortcomings and made repeated efforts to address them, including specific intervention and supplemental guidance promulgated in 2014 and 2016. However, Tri-Chair officials told us that these prior efforts fell short, and the department's struggles with SSA led to two of the three Tri-Chairs disengaging from the process—CAPE in 2012 and the Joint Staff in 2015. The Tri-Chairs agree that DOD continues to need a process and products that are current, more responsive to senior leader needs, and able to provide insights on alternative approaches and force structures that span the joint force. In addition, Joint Staff officials noted that SSA was too focused on force sizing, which is not consistent with the 2018 National Defense Strategy's focus on innovation, modernization, and readiness. In order to address this, the Joint Staff is pursuing an alternative approach to SSA that would largely eliminate a separate formal analytic process. Instead, the Joint Staff believes that the Tri-Chairs and the services can address senior leader needs more efficiently by continuing to execute their existing statutory roles and responsibilities within their own individual organizations in lieu of SSA. Since 2016, the Joint Staff has reinvigorated its own analytic capability to support the Chairman of the Joint Chiefs of Staff and other senior DOD leaders, according to Joint Staff officials. Although officials from other DOD organizations have supported the Joint Staff's reinvigoration of its analytic support, they told us that this approach is focused on the Chairman's responsibility rather than on wider departmental needs and does not address key shortfalls in providing analytic support to senior leaders, including the need for a common, flexible starting point. Further, the Joint Staff's alternative approach would rely on CAPE's analysis in the budget process as the culminating point for final DOD force structure decisions. CAPE officials told us that the program review can assist DOD leadership in optimizing relatively limited changes to DOD's force structure by evaluating service budget submissions and identifying alternatives for consideration. However, budget cycle time constraints mean that little analysis occurs within program review and, as a result, program review relies on the foundational analysis SSA was intended to provide. As such, CAPE's annual program review is inadequate for comprehensively examining needs and making major tradeoffs across the joint force, according to the officials. Finally, the department originally created SSA as a separate analytic process to address a shortfall not addressed by key DOD entities pursuing their statutory responsibilities. The Tri-Chairs have also undertaken an effort to identify an alternative approach to SSA. Specifically, shortly after the new strategy was released in 2018, CAPE initiated a Tri-Chair "blank slate" review of DOD's analytic process in order to thoroughly review—without preconceived solutions—how to best provide analytic support to senior leaders.
According to Tri-Chair officials, this effort is in the early stages of development and has not yet identified solutions to the challenges that hampered SSA or documented any aspects of a new approach. While the department's recognition of the challenges confronting SSA is promising, the two efforts underway to identify alternatives to SSA are not complete, and it is unclear to what degree these efforts will address the challenges that have been long-standing with SSA. Addressing these challenges is critical to being able to provide needed information for senior leaders to make decisions on how best to implement and execute the National Defense Strategy.

Conclusions

The 2018 National Defense Strategy calls for the department to make difficult choices to prioritize what is most important to field a lethal, resilient, and rapidly adapting joint force needed to address the growing threats to U.S. security. It also emphasizes that this environment demands analysis that accepts uncertainty and complexity and can drive innovation among rapidly changing threats. To prepare the joint force for the threats identified in the strategy, the department's leadership needs to be supported by timely and comprehensive analyses. However, SSA—DOD's current approach for providing such analytic support—has not provided the timely and comprehensive analyses that senior leaders need to make informed decisions about the joint force structure needed to implement the National Defense Strategy. Senior leaders have documented in relevant DOD guidance that there are cracks in the department's analytic foundation, many of which originate with SSA. This is due in part to highly detailed and complex products that are difficult to produce and lack flexibility to analyze, insufficient guidance to overcome the interests of the services to protect their force structure equities, and the lack of a joint analytic capability. Congress, in the John S. McCain National Defense Authorization Act for Fiscal Year 2019, required OUSD (Policy), in coordination with the other Tri-Chairs, to develop joint force objectives and conduct assessments of the joint force's capability to meet those objectives. The department has demonstrated a desire to fix SSA's deficiencies but has thus far been unable to overcome these challenges. Without determining the analytic products needed and updating them, issuing specific guidance requiring alternatives and key assumptions to be fully analyzed, and developing an approach for conducting joint analysis, DOD may not be providing its leaders with the analytic support they need to prioritize force structure investments that would best manage risk and address the threats outlined in the National Defense Strategy.

Recommendations for Executive Action

We are making three recommendations to DOD as it reevaluates its analytic approach. The Secretary of Defense should ensure that OUSD (Policy), the Joint Staff, and CAPE—in consultation with the services—determine the analytic products needed and the level of detail that is sufficient to serve as a common starting point but flexible to allow for variation of analysis to support senior leader decisions, and update these products to reflect current strategy and intelligence estimates, as well as the anticipated operational approaches needed to address future threats.
(Recommendation 1)

The Secretary of Defense should ensure that OUSD (Policy) provide specific guidance requiring the services to explore a range of innovative force structure approaches relevant to the key threats identified in the National Defense Strategy, including identifying key assumptions on which the services must conduct sensitivity analyses. (Recommendation 2)

The Secretary of Defense should establish an approach for comparing competing analyses and conducting joint analyses for force structure to support senior leaders as they seek to implement the National Defense Strategy. This could include establishing a separate body with these capabilities and/or specifying the organizational responsibilities and processes for conducting these comparisons and analyses. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of the classified version of this report for review and comment to DOD. That draft contained the same recommendations as this unclassified version. In its written comments (reproduced in app. II), DOD concurred with our three recommendations and noted that the department has begun to address the recommendations with its new Defense Planning and Analysis Community initiative. We also received technical comments from DOD, which we incorporated as appropriate. DOD provided comments on its concurrence with the three recommendations. In its comments on the first recommendation, DOD suggested that we revise the recommendation to include that the Tri-Chairs consult with the services as they implement the recommendation. Throughout our report, we identified the important role the services play in providing analytic support to senior leaders, including supporting the development and use of the analytic products that provide the foundation of analysis in the department. As such, we agree with DOD's proposed revision and have incorporated it to further clarify the services' important role. In its comments on the second and third recommendations, DOD advised that we replace the term "force structure" with "force planning" to ensure that different audiences understand that we are referring to force sizing, shaping, capability, and concept development. DOD correctly stated that we were using the term "force structure" in a broad sense. However, the term force planning is not interchangeable with force structure because force planning is the act of analyzing and determining force structure needs. In order to provide further clarification, we added a note in the body of the report stating that when we refer to force structure analysis, it includes the force planning elements identified by DOD (i.e., force sizing, shaping, capability, and concept development). The department also provided some general comments on our report. Specifically, DOD noted that it has reservations about some of the report's content because at times it seems to reflect statements based on particular organizational perspectives. DOD therefore requested that we acknowledge that Support for Strategic Analysis (SSA) suffered from poor implementation rather than being fundamentally unsound. However, DOD also stated that our report outlined that SSA failed due to overall suboptimal management and unwieldy stakeholder execution, and that the resulting failure to present analysis in a timely and responsive fashion impeded the flow of quality information to senior leaders.
We believe that the three interrelated challenges we identified in our report adequately reflect that SSA faced significant challenges in being implemented as intended. Further, we identified that there is a broad range of views within the department on what the challenges have been and how to best address them. We continue to believe that it is important that these views be presented in the report and have attributed them as appropriate. DOD also commented that we reference a desire within the department to gain "consensus" amongst SSA stakeholders, but thought that "coordinated" was a more appropriate word than consensus, since consensus was not required to produce SSA products. In the report, we did not state that consensus was required, but noted that DOD officials told us that the desire for consensus amongst SSA stakeholders was a contributing factor in making SSA products cumbersome and inflexible. Further, DOD's 2016 Defense Analytic Guidance similarly identifies the "degree of consensus" as an area requiring SSA process reform. DOD's final comment noted that the military services used SSA products and routinely conducted sensitivity analysis for their internal use. We recognize in the report that the services conduct a variety of analyses, including some sensitivity analyses. However, we also identify important assumptions that remain untested. As we reported, service officials told us that they have limited analytic capacity and so tend not to do sensitivity analyses on topics unless specifically directed to do so. Further, we noted that the services have been reluctant to conduct or share boundary-pushing analyses through SSA for fear that they will jeopardize their forces or limit their options. As a result of this and the other challenges we identified in this report, the quality of SSA products and analysis and the information provided to senior leaders to inform decision-making has been limited. As DOD moves forward with implementing our recommendations, it will be important that it take the necessary steps to ensure that any future analytic processes thoroughly examine and test key assumptions and look across the joint force. Doing so would help ensure any new process can overcome the constraints that limited the effectiveness of SSA. We are sending copies of this report to congressional committees; the Acting Secretary of Defense; the Acting Under Secretary of Defense for Personnel and Readiness; the Under Secretary of Defense for Policy; the Chairman of the Joint Chiefs of Staff; the Director, Cost Assessment and Program Evaluation; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in app. III.

Appendix I: Military Services' Analytic Processes for Assessing Force Structure Needs

Each military service has its own process for determining its force structure requirements using national strategies, defense planning guidance, and Support for Strategic Analysis (SSA) products. Below is a description of each service's process as of September 2018.

Army.
The process the Army uses for identifying its force structure needs has two phases: (1) "Capability Demand Analysis," where the Army uses SSA-approved Defense Planning Scenarios to determine how large a force is needed to support the National Defense Strategy and with what mix of units, and (2) "Resourcing and Approval," where senior Army leaders assess each capability within the Army to determine where reductions and growth need to occur given available resources. The Secretary of the Army approves changes to force structure through the end of the Future Years Defense Program in a decision memorandum, and these decisions are documented in an Army Structure Memorandum.

Navy. The process the Navy uses for identifying its force structure needs begins with the identification of the Navy's steady-state, peacetime operations requirements. The Navy then conducts campaign and warfighting risk analyses to determine the force's ability to fight and win SSA-approved Defense Planning Scenarios. Specifically, the Navy tests each force element against the most stressing Defense Planning Scenario, which provides the Navy with its battle force warfighting—to include surge—requirements. These warfighting requirements are compared with steady-state requirements, and the more stressing of the two forms the basis of the Force Structure Assessment, which establishes the long-term force structure goals of the Navy's 30-year shipbuilding plan and aviation plan, and informs the programming and budget processes, among other things.

Air Force. The Air Force has a largely decentralized process for identifying its force structure needs that is part of the Air Force's annual budget development process. The Air Force manages its activities and budgets primarily across 12 Core Functions—the broad capabilities the Air Force provides to the combatant commanders. Much of the force structure analysis that informs budget decisions is also conducted at the Core Function level. The Air Force also conducts occasional leadership-directed studies on future capability needs in certain mission areas (e.g., air superiority needs beyond 2030) as well as a unified risk analysis of its entire force structure that is intended to inform senior leader budget decisions. The Air Force is currently revising its approach to better integrate its capability development and analysis earlier in the process.

Marine Corps. The Marine Corps conducts service-level reviews of its force structure at the discretion of the Marine Corps Commandant. A Force Structure Review is typically directed as a result of major service-level issues, such as end strength or capability changes. Marine Corps Force 2025 is the most recent comprehensive assessment of the Marine Corps' force structure and organization. This was a three-phased effort that relied on one Defense Planning Scenario to develop alternative force structures and evaluate them against a near-peer adversary. The Commandant directed this review to emphasize growing information warfare capabilities. The Marine Corps also conducts Force Optimization Reviews, which are biennial reviews designed to optimize the current and planned future force, taking into consideration new and emerging requirements. Table 3 shows some of the comparable elements of the individual service force structure development processes.
Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

Staff Acknowledgments: In addition to the contact named above, Patricia Lentini, Assistant Director; Nicolaas Cornelisse; Martin De Alteriis; Carly Gerbig; Mae Jones; Amie Lesser; Shahrzad Nikoo; Carol Petersen; and Alex Winograd made key contributions to this report.
Why GAO Did This Study

DOD's 2018 National Defense Strategy continues the department's shift toward focusing on the challenges posed by major powers—China and Russia. The strategy concludes that DOD must pursue urgent change at a significant scale and starkly warns that failure to properly implement the strategy will rapidly result in a force that is irrelevant to the threats it will face. To implement the change DOD envisions, senior leaders must have quality information. Senate Report 115-125 includes a provision for GAO to review DOD's analytic approach for informing force structure decisions to implement the National Defense Strategy. This report assesses, among other things, whether DOD's analytic approach has provided senior leaders with the support needed. GAO reviewed DOD guidance, assessed whether DOD was meeting the objectives identified in its guidance, and interviewed agency officials. This is an unclassified version of a classified report issued in February 2019. Information that DOD deemed classified has been omitted.

What GAO Found

The Department of Defense's (DOD) analytic approach has not provided senior leaders with the support they need to evaluate and determine the force structure necessary to implement the National Defense Strategy. DOD's analytic approach—Support for Strategic Analysis (SSA)—is used by the services to evaluate their force structure needs and develop their budgets. However, GAO found that SSA has been hindered by three interrelated challenges:

Products are cumbersome and inflexible. Although DOD guidance states that SSA products are to be common starting points for analysis on plausible threats, including threats identified in strategic guidance, DOD has not kept the products complete and up to date in part because they were highly detailed and complex and therefore cumbersome to develop and analyze.

Analysis does not significantly deviate from services' programmed force structures or test key assumptions. Although DOD's guidance states that SSA should facilitate a broad range of analysis exploring innovative approaches to mitigate threats identified in the strategy, the services generally have not conducted this type of analysis because guidance has not specifically required the services to do so.

DOD lacks joint analytic capabilities to assess force structure. Although DOD guidance states that SSA is intended to facilitate the comparison and evaluation of competing force structure options and cross-service tradeoffs, the department has not conducted this type of analysis because it lacks a body or process to do so.

DOD efforts to revise its analytic approach are in the early stages and have not yet identified solutions to these challenges. Moreover, DOD has attempted reforms in the past without success. Without a functioning analytic process that addresses the above challenges, senior leaders do not have the analytic support they need to prioritize force structure investments that would best manage risk and address the threats outlined in the National Defense Strategy.

What GAO Recommends

GAO recommends that DOD (1) determine the analytic products needed and update them, (2) provide specific guidance requiring the services to explore a range of alternative approaches and force structures, and (3) establish an approach for conducting joint force structure analysis across the department. DOD concurred with the recommendations and noted the department has begun addressing them.
Background

Most communities in the nation experience some kind of flooding, which may occur after substantial spring rains, heavy thunderstorms, winter snow thaws, or heavy storms over a large body of water. Flood risk management includes the appropriate use of structures such as levees and floodwalls, as well as nonstructural measures such as land acquisition and structure relocation, to reduce the risk of loss of life, reduce long-term economic damage to the public and private sectors, and improve the natural environment. Flood risk management is one of the Corps' three primary missions. For fiscal years 2015 through 2017, the Corps requested more than $3 billion for 71 construction projects that fell within its three missions, of which the largest amount—$1.33 billion—was for 33 construction projects in the flood risk management mission.

Corps of Engineers Organization

Located within the Department of Defense, the Corps has both military and civilian responsibilities. Through the Civil Works Program, the Corps plans, constructs, operates, and maintains a wide range of water resources development projects such as navigation and flood risk projects. The Assistant Secretary of the Army for Civil Works, appointed by the President and confirmed by the Senate, sets the strategic direction for the program and has principal responsibility for the overall supervision of functions relating to the Army's Civil Works Program. The Chief of Engineers, a military officer, is responsible for execution of the civil works and military missions. The Civil Works Program is organized into three tiers: headquarters in Washington, D.C.; eight regional divisions; and 38 local district offices. (See fig. 2.)

Corps Water Resources Development Projects and Nonfederal Sponsors

The Corps develops water resource projects, including flood risk management projects, in conjunction with nonfederal sponsors such as state and local governments. According to Corps guidance, the planning process for these projects begins with the nonfederal sponsor identifying a problem and approaching the Corps to help develop a solution. Upon congressional authorization for a study and appropriations to fund it, the Corps and the nonfederal sponsor establish an agreement to conduct a feasibility study for a potential project. The Corps initiates a feasibility study by forming a project team comprised of Corps engineers, economists, planners, and possibly other specialists such as nonfederal consultants to conduct the study. The planning process the Corps uses to carry out feasibility studies is described later in our report. Nonfederal sponsors are to participate in the planning process, as well as remain involved through project design, construction, and post-project operations and maintenance. For example, for projects in which the Corps constructs infrastructure such as a flood wall, the nonfederal sponsor is to assume responsibility for monitoring and maintenance costs associated with the flood wall after its construction.

Corps Water Resources Development Planning Guidance

The U.S. Water Resources Council's Principles and Guidelines outlines the principles and procedures the Corps is to follow for planning water resources development projects, including those with flood risk management objectives. The Principles and Guidelines states that the federal objective of water resources development projects is to contribute to national economic development while protecting the nation's environment.
The Corps implements the planning process outlined in the Principles and Guidelines by conducting feasibility studies for proposed water resources development projects. The Corps' Planning Guidance provides detailed guidance on how to implement the general process outlined in the Principles and Guidelines for planning water resource projects. The Corps' National Economic Development manuals provide supplemental guidance for the economic analysis of different types of projects—including flood risk management—and how to evaluate the benefits and costs associated with each type of project. To identify the beneficial and adverse effects of each alternative plan considered for a project, the Corps uses four categories of analysis established in the Principles and Guidelines: (1) National Economic Development, (2) Environmental Quality, (3) Regional Economic Development, and (4) Other Social Effects, as shown in table 1. The Corps' Planning Guidance states that feasibility studies may evaluate the effects of alternative plans using the four categories of analysis, but the evaluations under two categories—National Economic Development and Environmental Quality—must be presented in each feasibility study. According to the Corps' Planning Guidance, the National Economic Development category requires an economic analysis of each plan's potential economic benefits and costs in monetary terms, while the Environmental Quality category evaluates each plan's potential nonmonetary effects such as effects on habitat quality and quantity. The Planning Guidance states that using these categories of analysis provides a basis for determining which alternative plans should be eliminated from consideration, modified, or selected for further analysis.

The Corps' Multi-step Planning Process Identified and Evaluated Benefits, Costs, and Effects of Proposed Flood Risk Management Project Alternatives

The Corps Identified and Evaluated the Economic, Environmental, and Other Effects of Proposed Alternatives Using a Multi-step Feasibility Study Process

The Corps followed the six-step planning process for water resources development projects outlined in its Planning Guidance to identify and evaluate the beneficial and adverse effects of alternative plans for flood risk management projects and select a recommended plan for the eight feasibility studies we reviewed. In the initial three steps of the planning process, the Corps (1) identified the objectives and other parameters of the project; (2) inventoried and forecasted water and related land resources conditions within the planning area; and (3) formulated alternative plans for further consideration. In the final three steps of the planning process, the Corps (1) evaluated and analyzed each alternative plan for its economic, environmental, and other effects, (2) compared the alternative plans to each other, and (3) selected a recommended plan. Corps officials told us that this six-step process is the basic template for planning water resources development projects across all Corps mission areas. (See fig. 3.) For each of the eight studies we reviewed, the Corps followed this template and addressed each of the six steps in planning the proposed flood risk management project, as we describe below.

Step 1: Identify

Each study identified objectives, problems, opportunities, and constraints for the project.
According to the Corps’ Planning Guidance, identification of problems and opportunities is the foundation for scoping the planning process and should begin as soon as practicable after the decision to initiate a feasibility study. Planning objectives describe the desired results of the process by solving the problems and taking advantage of the opportunities identified. Constraints are restrictions that limit the planning process and are unique to each study. Such constraints can be, for example, limitations imposed by policy or law. All of the studies we reviewed had the objective of reducing or managing flood risk and damages in response to problems such as historic river or stream flooding in the planning area. The studies identified opportunities, such as improving the community’s understanding of flood risk and resiliency from flood events. The studies also identified constraints, such as the need for the plan to incorporate extensive transportation infrastructure within some of the planning areas. Step 2: Inventory The studies inventoried historic and existing water and related land resource conditions and forecasted future conditions within the planning area relevant to the identified problems and opportunities from step one. According to the Corps’ Planning Guidance, the Corps is to use quantitative and qualitative descriptions of critical resources in the planning area to define existing and future without-project conditions— that is, the conditions if no project is constructed. The defined without- project conditions provide the basis from which the Corps formulates alternative plans and assesses impacts. The studies we reviewed inventoried the existing conditions for the planning area. This inventory included geology, groundwater, surface water, hydrology, water quality, biological resources, cultural resources, land use, recreation, air quality, climate change, transportation, public health and safety, public services, utilities, socioeconomics, and environmental justice. The Corps used these existing conditions to forecast the future without-project conditions, such as increasing flood risk for residential and industrial development, culturally significant communities, or specific infrastructure such as a regional wastewater facility. Step 3: Formulate The studies formulated alternative plans for the project, including a range of structural and nonstructural measures and strategies. According to the Corps’ Planning Guidance, an alternative plan consists of a system of management measures, that is, structural and/or nonstructural measures, strategies, or programs formulated to meet the project objectives subject to the planning constraints. The Corps is to identify a range of alternative plans at the beginning of the planning process, screen the plans, and refine them in subsequent iterations throughout the planning process. The Planning Guidance also states that as the Corps develops the alternative plans, it must consider the criteria of completeness, efficiency, effectiveness, and acceptability. In the eight studies we reviewed, the Corps followed an iterative approach to identify measures and form alternative plans. 
For example, the studies generally identified an initial array of structural and nonstructural measures for conceptual screening, followed by the grouping of viable measures into alternative plans for screening under the criteria, resulting in an array of plan alternatives for more detailed analysis of the beneficial and adverse effects (monetary and nonmonetary) of each. According to Corps officials, flood risk management studies must consider a minimum of two plans—no action and an alternative—and one of the plans considered must be nonstructural. All eight studies we reviewed adhered to this requirement and considered a variety of alternative plans for each proposed flood risk management project. Step 4: Evaluate The studies evaluated each alternative plan—including its beneficial and adverse effects—through a comparison of the with-project and without-project conditions. According to the Corps' Planning Guidance, evaluation consists of (1) forecasting the most likely with-project (e.g., with the alternative plan constructed) condition expected under each alternative plan; (2) comparing each with-project condition to the without-project condition and documenting the differences between the two; (3) characterizing the beneficial and adverse effects; and (4) identifying the plans that will be further considered in the planning process. The studies we reviewed used the categories established in Corps guidance—the National Economic Development and Regional Economic Development categories for monetary benefits and costs and the Environmental Quality and Other Social Effects categories for nonmonetary (quantitative and qualitative) effects—to evaluate and display the beneficial and adverse effects of plan alternatives. The categories and specific types of monetary benefits and costs and nonmonetary effects that the Corps evaluated varied for each study depending on the planning area conditions and the measures and strategies included in the alternative plans. In the studies we reviewed, the economic analyses of monetary effects generally resulted in an estimated net dollar value of benefits (benefits minus costs) expected with each alternative in place, while the analysis of nonmonetary effects generally resulted in a Corps judgment about the net qualitative effect or net quantitative effect (e.g., net units of habitat created) for each alternative. Step 5: Compare The studies compared the alternative plans based on the economic analysis of benefits and costs and on the evaluations of environmental and other effects. According to the Corps' Planning Guidance, the alternative plans (including the no-action plan) are to be compared with each other, with emphasis on the outputs and beneficial and adverse effects that will have the most influence in the decision-making process. Such a comparison is to include monetary and nonmonetary benefits and costs and identify and document trade-offs to support the final recommendation. In the studies we reviewed, the Corps compared project effects in a variety of ways, for example, in a series of narratives describing the beneficial and adverse effects of alternative plans, or a grid for side-by-side comparison of selected effects for plan alternatives. In some studies, this comparison included an incremental process in which the Corps considered incorporating additional measures or approaches into an alternative to further optimize the trade-off between beneficial and adverse effects.
The result of this step was a final group of plans that the Corps considered for recommendation. Step 6: Select The Corps recommended a plan based on the comparison of the alternative plans. According to the Corps' Planning Guidance, the Corps should recommend a single alternative plan that must be shown to be preferable to taking no action (if no action is not recommended) or implementing any of the other alternatives considered during the planning process. In the studies we reviewed, the recommended plan and the rationale for its selection were identified in the analyses and underwent internal technical review at the district, division, and headquarters levels. The Chief of Engineers signed and submitted the proposed plan for the project—known as the Chief's Report—to the Office of the Assistant Secretary for review, and the Secretary submitted the report to Congress for authorization. The Corps Used Economic Analyses in Its Feasibility Studies to Evaluate Project-Specific Benefits and Costs and Used Additional Analyses to Evaluate Other Effects All eight of the studies we reviewed included step 4 of the Corps' six-step planning process: an economic analysis of the benefits and costs of each proposed project as well as an Environmental Quality analysis, as called for in the Corps' Planning Guidance. The other two types of analyses—Regional Economic Development and Other Social Effects—are not required, but six of the studies included them. The Principles and Guidelines provide the Corps with general flexibility to choose which benefit and cost categories to include in these analyses. The Corps' Planning Guidance states the federal government's and project's objectives guide the planning process, which includes benefit and cost category selection. The monetary benefits most commonly included in the economic analyses of the Corps feasibility studies we reviewed were reduced damages and emergency costs avoided, as shown in table 2. The Corps included reduced damage benefits in each of the eight studies we reviewed. Reduced damages result from actions such as performing physical modifications to property designed to reduce the frequency of flood damage, relocating structures, or installing flood warning and preparedness systems. For example, a feasibility study for a proposed project in the New York District outlined a plan to modify channels that line the Mamaroneck and Sheldrake Rivers with the goal of reducing risks to life and property within the Village of Mamaroneck. The Corps also included emergency costs avoided as benefits in four of the eight studies we reviewed. Emergency costs include expenses resulting from a flood that otherwise would not be incurred. For example, some of the emergency costs avoided for this proposed project in the New York District included the costs of evacuation, reoccupation, flood fighting, and increased operations, police, fire, and military patrol. Depending on the potential effects of the plan alternatives considered, some studies included monetary benefits from recreation, reduced maintenance costs, flood insurance administrative savings, or reduced transportation disruptions in their economic analyses, but these were not commonly considered in the studies we reviewed. The Corps considered a variety of monetized costs in its economic analyses for feasibility studies we reviewed, as shown in table 3.
Among the costs most commonly included in each of the eight studies were those for construction; operation, maintenance, repair, replacement, and rehabilitation (OMRR&R); and real estate. Specifically: Construction costs. These are the direct costs of installing project measures. For example, the Honolulu District study included the costs of constructing six in-stream debris and detention basins above a watershed, floodwalls along a canal, an earthen levee, and two pump stations. OMRR&R costs. These represent the current monetary value of materials, equipment, services, and facilities needed to operate the project and make repairs, rehabilitations, and replacements necessary to maintain project measures in sound operating condition during the period of analysis. For example, the Wilmington District study included OMRR&R costs for conducting visual inspections of the levee, mowing twice a year, and conducting video inspections of pipes and culverts every 5 years. Real estate costs. These include activities such as buying out residential structures and demolishing them. For example, the San Francisco District study included real estate costs to acquire approximately 900 acres of city-owned land for ecosystem restoration and levee, road, and temporary work easements. Depending on the potential effects of the plan alternatives considered, some of the studies we reviewed included environmental costs; relocations; planning, engineering, and design; and the costs for cultural resource preservation, recreation, and flood warning systems. In addition to the required economic analysis of benefits and costs, the Corps included other analyses to evaluate monetary and nonmonetary project effects in the flood risk management feasibility studies we reviewed. These included the Environmental Quality, Regional Economic Development, and Other Social Effects analyses. All the studies we reviewed included the Environmental Quality analysis; six studies included the Regional Economic Development or Other Social Effects analyses, as shown in table 4. Corps officials said the additional analyses were included in studies because the analyses were needed to determine the best project design, help make planning decisions, or respond to local sponsors' preferences. Examples of some additional analyses conducted in different districts include the following: Regional Economic Development effects. In the Sacramento District study, the Corps considered ways reduced flooding could increase local business revenue and short-term construction employment but reduce employment because of loss of or damage to businesses, among other effects. The Corps also considered how its expenditures for various services and products during the project were expected to generate additional economic activity, such as through additional jobs, income, and sales. In this case, the Corps estimated the project might add 18,930 jobs in the region. According to a 2011 Corps handbook, considering Regional Economic Development effects can provide a better understanding of the overall impact to the region. Doing so also examines the potential impacts mainly to the localized or regional economic area, instead of the nation as a whole. Other Social Effects. In the Wilmington District study, the Corps considered security of life, health, and safety; preservation of historic significance; and the impacts to cultural resources.
According to a 2009 Corps handbook, considering the Other Social Effects analysis has great potential value for better ensuring that water resources solutions address a broad array of issues and concerns that better meet stakeholder needs and expectations. The Corps’ Evaluations Used Economic Analyses to Identify Project Alternatives with Greatest Net Benefits but Relied on Other Analyses for Some Recommendations In most of the studies we reviewed, the Corps recommended the alternative plan with the greatest net economic benefits based on the results of its economic analyses. In some cases, however, the Corps relied on other analyses to address different project objectives or the preferences of the local nonfederal sponsors. The Corps’ Planning Guidance directs that the project alternative with the greatest net economic benefit, consistent with protecting the nation’s environment, be selected for recommendation unless an exception is granted. The Assistant Secretary of the Army for Civil Works has the authority to grant exceptions if federal, state, local, or international concerns exist. The Planning Guidance states that projects may deviate from the alternative plan with the maximum net benefits if requested by the nonfederal sponsor and approved by the Assistant Secretary of the Army for Civil Works. Such plan alternatives are referred to by the Corps as the locally preferred plan, with the nonfederal sponsor responsible for any project costs in excess of the costs of the plan with the highest net benefits. The Corps conducted economic analyses in each of the eight studies we reviewed, resulting in a wide range of monetary benefits and costs for the recommended project plan alternatives. Table 5 shows the monetized benefit and cost information that helped the Corps select recommended plans in the eight studies. The annualized project benefits ranged from approximately $500,000 to $210.6 million, and annualized project costs ranged from about $1 million to $65 million, resulting in annual net benefit estimates ranging from approximately -$500,000 to $146 million. For five of the eight studies we reviewed, the Corps primarily used the results of the economic analysis of benefits and costs to recommend a plan with the greatest net benefits from among the alternatives, in accordance with the Planning Guidance. These five studies were with the New York, Honolulu, Sacramento, Nashville, and Kansas City Corps districts. Three of the eight studies we reviewed relied on other analyses as allowed under the Planning Guidance to address different project objectives or the preferences of the local nonfederal sponsors. Corps officials said they relied on other analyses when needed to determine the best project design, help make decisions, or respond to local nonfederal sponsors’ preferences. Specifically: Chicago District. The Chicago District recommended a project based on two separate analyses. Specifically, the project team recommended an alternative plan based on an economic analysis for the flood risk management objective and separate analyses for an ecosystem restoration objective. A Corps document stated that by doing so, the proposed project would help both manage flood risks and restore ecosystems in the watershed. In addition, the study said the recommended plan attempts to maximize the net benefits and find balance between both objectives. Wilmington District. 
The Wilmington District study indicated that the Corps recommended the locally preferred alternative plan, after receiving approval to do so, instead of the alternative plan with the greatest net benefits at the request of the nonfederal sponsor. The locally preferred alternative plan was recommended so it could incorporate consideration of potential other social effects, such as life and safety risk, and regional economic development, such as employment created during and after construction. By doing so, the study indicated Corps officials responded to local priorities and the recommendations provided by the President's Council on the Future of Princeville, North Carolina. According to the study, the Corps considered impacts to community cohesion, cultural and historical values, local per capita and household incomes in comparison to national averages, and other factors not captured in an economic analysis. San Francisco District. The San Francisco District study indicated that the Corps based its alternative plan recommendation on a combination of multiple objectives and local preference. The recommended alternative plan's objectives included reducing the risk of tidal floods as well as restoring the ecosystem to tidal marsh habitat. The Corps selected the recommended alternative plan because the nonfederal sponsor wanted to provide additional transitional habitat and greater flood risk management for Federal Emergency Management Agency accreditation over the 50-year study period. Specifically, the local preference was to build the levee about 3 feet higher than the plan with the greatest net benefits—thereby potentially reducing public health and safety risks associated with flooding more than the alternative plan with the greatest net benefit. Selected Corps Economic Analyses Were Generally Consistent with Best Practices, Although Some Practices Were Not Fully Used The economic analyses for the eight studies we reviewed generally met three of the five key methodological elements and partly met two key elements—analysis of effects and transparency. Our Assessment Methodology for Economic Analysis (Assessment Methodology) identifies five key methodological elements of the baseline structure of an economic analysis. For the analysis of effects element, the Corps has either taken steps to address certain best practices or indicated the agency is limited in adopting other practices due to statutory requirements. For the transparency element, Corps officials acknowledged that transparency could be improved through its review process. The Economic Analyses in All Eight Studies Generally Met Three Key Methodological Elements Objective and Scope According to our Assessment Methodology, an economic analysis should state the action examined and the justification for the action. In addition, the objective of the analysis should be stated; the scope of the analysis should be designed to address the objective; and the analysis period should be long enough to encompass the important economic effects of the proposed action. We found that all eight analyses generally met this key element. For example, all eight economic analyses indicated that the actions examined included the evaluation of flood risk management improvements for resolving flooding problems. In addition, the analyses provided specific planning objectives, such as to reduce flood risks in the relevant watershed over the 50-year analysis period and to improve the quality of life for local neighborhoods.
Furthermore, all eight analyses used a 50-year analysis period to analyze benefits and costs—a period that should be long enough to encompass important economic effects, though several studies assumed that economic conditions would remain the same over that time period. For example, the analysis for the Honolulu District's flood risk management study assumed that the inventory of homes and businesses in the flood plain would not change over the 50-year analysis period. According to the analysis, the project area includes sites that are underutilized or not fully developed, but uncertainty about how development might proceed made it difficult to project what changes might occur. The study acknowledged that changes in the business and residential makeup of the watershed over the 50-year period would occur but that the exact nature of these changes could not be projected with any degree of certainty. In addition, two of the eight studies involved multipurpose projects and specified additional economic-related objectives for ecosystem restoration. For example, the analysis for the San Francisco District's feasibility study indicated that it was designed to evaluate and compare the economic justification and cost effectiveness of various measures to reduce flood risk and provide ecosystem restoration in South San Francisco Bay. Similarly, the Chicago District's study indicated that in developing an ecosystem restoration plan, undeveloped lands throughout the watershed were evaluated to determine whether cost-effective aquatic ecosystem restoration at that site was possible and what measures would provide the lowest incremental cost per unit of habitat output. Alternative Identification and Description Our Assessment Methodology recommends that an analysis used to examine economic effects should identify and compare alternatives. In addition, the analysis should consider a range of relevant alternatives and should justify that the economic conditions specified under each alternative considered represent the best assessment of conditions under that alternative. We found that all eight economic analyses generally met this key element. For example, all eight economic analyses examined the economic effect of the proposed flood control actions by comparing a range of alternatives, including various structures such as levees or bridge modifications, as well as nonstructural measures such as floodplain management activities or acquisition of land and removal of people from the flood plain. Moreover, the economic analyses in the studies generally described and justified the economic conditions that would be expected under each alternative. For the two studies that also evaluated ecosystem restoration alternatives, the studies considered alternatives for restoring ecosystems. Documentation Our Assessment Methodology recommends that the economic analysis be clearly written, include a plain language summary, and provide clearly labeled tables that describe the data used and the results. Also, the analysis should document that it complies with a robust quality assurance process. We found that all eight economic analyses generally met this key element. For example, all eight economic analyses were generally clearly written and included tables that generally described data and results. In addition, seven of the feasibility studies included a plain language summary.
Six of the studies indicated that the analyses complied with a robust quality assurance process, in which the analyses were reviewed at the Corps district and by technical and policy experts in headquarters. Corps guidance indicates that the quality assurance process for feasibility studies involves reviews for technical quality and policy compliance, among other considerations, at the Corps district and in headquarters. Further, three studies indicated that an independent external peer review had been conducted. While one study completed in the Nashville District did not indicate whether the study complied with a quality assurance process, district officials told us a thorough review was conducted that included multiple district quality control reviews, agency technical review and headquarters policy reviews, and an independent external peer review. In addition, a study completed in the Chicago District did not indicate that it had undergone an independent external peer review. The Economic Analyses in All Eight Studies Partly Met Two Key Methodological Elements for the Analysis of Effects and Transparency Analysis of Effects Our Assessment Methodology recommends that an economic analysis quantify the important costs and benefits and monetize these quantitative effects using the concept of opportunity cost—the maximum worth of a good or input among possible alternatives. The criterion of net present value, or related outcome measures, should be applied to compare these effects across alternatives. In addition, the analysis should control for inflation and use economically justified discount rates. Where important costs and benefits cannot be quantified, the analysis should show how they affect the comparison of alternatives. We identified areas in which the studies did not fully align with certain best practices for various reasons, such as the Corps’ concerns about the reliability of available methods and statutory requirements regarding the use of discount rates. These best practices included: Quantifying and monetizing important benefits and costs. The economic analyses in all eight studies quantified and monetized important benefits and costs associated with each alternative, such as property damage reductions and construction costs. The Corps’ Planning Guidance indicates that studies should consider analyzing loss of life in the Other Social Effects category, in either monetary, quantitative, or qualitative terms. Project alternatives that reduce the risk of flooding or that relocate people from the flood plain may lower the risk that individuals living or working in a flood plain will drown or become injured during flood events. However, the analyses in the eight studies we reviewed generally did not quantify and monetize the effect of project alternatives on loss of life. One of the studies we reviewed quantified these effects, but only for the recommended plan. Specifically, the Sacramento District’s flood risk management study found that the recommended plan, which involved the improvement of an existing levee system, could reduce fatalities during flood events by about 67 percent. Of the other seven studies that we reviewed, six analyses included a qualitative discussion of the effects of alternatives on loss of life, and one analysis did not include an assessment of these effects. 
A recent National Academy of Sciences study on coastal storm flooding indicated that the practice of quantifying and valuing reductions in loss of life is widespread in the federal government, allowing these risk reductions to be included in the economic analysis. In July 2017, after the eight studies we reviewed were completed, the Corps issued revised guidance requiring flood risk management studies to include a quantitative assessment of loss of life for each alternative when it is a significant factor. Corps officials said they had not attempted to monetize loss of life because of concerns about the reliability of available valuation methods but are monitoring other agencies' efforts to value these effects and following economic research in the area. Using net present value criterion. Analyses for seven studies we reviewed compared the flood risk management alternatives and identified the alternative expected to maximize net benefits on a comparable, present-value basis (that is, on an "annualized" basis). However, one economic analysis did not clearly indicate whether the costs associated with the flood risk management alternatives were annualized and therefore comparable to the annualized benefits. Controlling for inflation and use of economically justified discount rates. Although the economic analyses in all eight Corps studies we reviewed controlled for inflation by expressing benefits and costs in "real" terms, the discount rates that the studies used to convert future benefits and costs to present values were in nominal terms. In general, real and nominal values are not combined in the same analysis. Specifically, discounting real benefits and costs with a nominal discount rate understates present values when holding all else the same. Corps officials said that they are aware of this inconsistency, but they have no latitude to use a real discount rate because the Water Resources Development Act of 1974 requires the Corps to use nominal discount rates. Corps officials acknowledged areas in which the eight Corps studies we reviewed partly met the Analysis of Effects key methodological element. However, as noted, the Corps has taken some steps to address one best practice. Specifically, the Corps' recently revised guidance, which requires quantification of loss of life effects when significant, should allow the Corps to provide decision makers and stakeholders with more precise information about the relative magnitude of these effects in future economic analyses. In terms of the best practice regarding economically justified discount rates, the Corps has not taken steps because it is required to use the statutorily specified nominal discount rates.
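To illustrate the direction and rough size of the error described above, the following minimal sketch discounts a constant real (inflation-adjusted) benefit stream first with a real rate and then with the corresponding nominal rate. All figures (the $1 million annual benefit, the 2 percent inflation rate, and the 3 percent real discount rate) are hypothetical assumptions for illustration, not values drawn from the Corps studies or from statute.

```python
# Hypothetical illustration: discounting a constant *real* (inflation-adjusted)
# benefit stream with a real versus a nominal discount rate. All figures are
# assumptions for illustration, not values from the Corps studies.

REAL_RATE = 0.03        # assumed real discount rate
INFLATION = 0.02        # assumed annual inflation
# Fisher relationship: (1 + nominal) = (1 + real) * (1 + inflation)
NOMINAL_RATE = (1 + REAL_RATE) * (1 + INFLATION) - 1

ANNUAL_BENEFIT = 1_000_000  # constant benefit in real (today's) dollars
YEARS = 50                  # analysis period used in the studies

def present_value(cash_flow, rate, years):
    """Sum the discounted value of a constant annual cash flow."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

pv_real = present_value(ANNUAL_BENEFIT, REAL_RATE, YEARS)
pv_nominal = present_value(ANNUAL_BENEFIT, NOMINAL_RATE, YEARS)

print(f"PV with real rate:    ${pv_real:,.0f}")
print(f"PV with nominal rate: ${pv_nominal:,.0f}")
print(f"Understatement:       {1 - pv_nominal / pv_real:.1%}")
```

In this illustration, using the nominal rate to discount the real benefit stream understates the present value by roughly 30 percent; a consistent analysis would pair real values with a real rate or nominal values with a nominal rate. Transparency Our Assessment Methodology recommends that analyses be transparent with respect to their analytical choices, assumptions, and data used. The methodology further recommends (1) evaluating how plausible adjustments to each choice and assumption may impact the estimates of the cost-and-benefit effects and results of the comparison of alternatives and (2) clearly explaining the implications of the key limitations in the data and models used. Where feasible, to ensure transparency, the analysis is to adequately quantify how the statistical variability of the key data elements underlying the estimates of the economic analysis impacts these estimates and the results of the comparison of alternatives. We found that the studies we reviewed did not fully use some best practices related to transparency.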
Specifically: Being transparent with respect to analytical choices, assumptions, and data used. The economic analyses in the eight studies described and justified several, but not all, of the analytical choices, assumptions, and data. For example, to approximate the amount of damages to structures at different flood depths, the Wilmington District's feasibility study relied on standardized "depth-damage curves" developed by the Corps' New Orleans District. Corps guidance indicates that standardized curves can be used in the absence of regionally developed data. According to the study, data for structures in the study area were unavailable, and flooding characteristics were similar in the two areas, with both study areas covering urbanized and rural areas representing a mix of residential, commercial, and industrial development with similar types of construction. However, other data and assumptions used by the studies in our review were not fully described or justified. For example, in presenting its results for an initial screening of several flood risk management alternatives, the Sacramento District's economic analysis relied on cost estimates from several different sources, including prior studies and private consultants. The analysis, however, did not explain how the estimates were developed or justify why the estimates were sufficiently reliable for evaluating alternatives. Clearly explaining the implications of key limitations in the data and models used. With one exception, the economic analyses we reviewed generally did not discuss the implications of key limitations in the models used in the studies. Specifically, the economic analysis for the Sacramento District's study indicated that the Corps' Hydrologic Engineering Center-Flood Damage Analysis computer program can overstate damage reduction benefits because of an inability to account for the reduced floodplain occupancy and reduced value of damageable property following a flood event. According to the analysis, by not taking into account the potential for reduced floodplain occupancy, the estimated damage reduction benefits may be overstated, particularly in areas that experience more frequent or severe flooding. To account for this limitation, the Sacramento District's study reduced the overall value of properties in the floodplain, lowering the average annual benefits for the recommended alternative by about 29 percent. All the other studies used the same program to estimate damage reduction benefits but did not indicate whether this limitation would affect the estimated benefits of the alternatives evaluated in those studies. In accordance with best practices, the Corps' Planning Guidance indicates that studies should provide adequate supporting documentation to allow reviewers to understand the models and assumptions used to estimate benefits and costs. Corps officials stated that a project team's analysis may not document every step it took because these are understood among team members, although they may not be apparent to others. Quantifying the statistical variability underlying the results of the comparison of alternatives. Although the economic analyses for the eight studies analyzed the effects of uncertainty associated with several key inputs in the economic analysis, the studies generally did not report the key estimates (for example, benefits, costs, and net benefits) on a probabilistic basis.
For example, the Chicago District's flood risk management study presented damage reduction benefits for each alternative in terms of their expected values as well as the probability that the benefit estimate would exceed a particular value. However, estimates for costs and net benefits were presented as point estimates, which may imply a greater sense of precision than is warranted. In accordance with best practices, the Corps' Planning Guidance requires economic analyses to report net benefits and benefit-to-cost ratios both as expected (mean) values and on a probabilistic basis for each alternative; also, for each alternative, the analyses are to present the probability that net benefits are positive and that the benefit-to-cost ratio is at or above one. Corps officials said the analyses generally did not follow this guidance because it may not have been useful in helping to select a project alternative. Nonetheless, Corps guidance states that information about the probability distributions can help decision making by local sponsors, stakeholders, and federal officials by helping to increase their understanding of the uncertainty inherent in each alternative. In addition, the economic analysis for only one Corps study included a sensitivity analysis on the discount rate, which is used to convert benefits and costs of the alternatives to present values. Generally, when benefits or costs are separated in time from each other, the difference in timing should be accounted for by discounting benefits and costs. In addition, the specific discount rate may affect the comparison of alternatives. Corps officials told us that they are required to use the statutorily designated discount rate, and their guidance does not require a sensitivity analysis using an alternative discount rate. The officials added that the Office of Management and Budget requires the Corps to compute the benefit-to-cost ratios for recommended plans using a 7 percent discount rate, for budgeting purposes. The results, though, are not reported in the studies, and the 7 percent rate is not applied in the assessment of the net benefits of the alternatives, according to these officials. Corps officials stated that in general there is a high level of transparency within the project team and with the nonfederal sponsor, but they acknowledged that transparency may not always exist for those outside the team. For example, a project team's analysis may not document every step it took or assumption it made because these are understood among team members, although they may not be apparent to others. As a result, Corps officials acknowledged that some inconsistency exists in the transparency of the analyses across feasibility studies. Corps officials told us that teams rely on the Corps' internal process for reviewing all planning products to help ensure the quality of its feasibility studies and analyses. The officials stated that to improve transparency, the Corps could strengthen its internal review process, for example, by adding steps so that all of the important decisions and assumptions made in the analyses are consistently and clearly described. By conducting future economic analyses for potential flood risk management projects so they are more consistent with best practices for transparency, the Corps can better ensure that decision makers and stakeholders are clearly and fully informed about potential economic effects associated with such projects.
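To illustrate the kind of probabilistic reporting the Planning Guidance calls for, the following minimal sketch uses a simple Monte Carlo simulation to report expected net benefits alongside the probability that net benefits are positive and that the benefit-to-cost ratio is at or above one. The benefit and cost distributions are invented for illustration; an actual study would derive them from its own hydrologic, damage, and cost uncertainty analyses.

```python
# Hypothetical sketch of reporting net benefits probabilistically, as the
# Corps' Planning Guidance calls for. The distributions below are invented
# for illustration; an actual study would derive them from its own
# hydrologic, damage, and cost uncertainty analyses.
import random

random.seed(1)
N_DRAWS = 100_000

positive_net = 0
bcr_at_least_one = 0
net_benefits = []

for _ in range(N_DRAWS):
    # Assumed annualized benefit and cost distributions (millions of dollars).
    benefits = random.lognormvariate(2.3, 0.35)  # skewed, mean near $10.6M
    costs = random.normalvariate(8.0, 1.0)       # roughly $8M plus or minus $1M
    net = benefits - costs
    net_benefits.append(net)
    positive_net += net > 0
    bcr_at_least_one += (benefits / costs) >= 1.0

mean_net = sum(net_benefits) / N_DRAWS
print(f"Expected (mean) net benefits: ${mean_net:,.1f}M per year")
print(f"P(net benefits > 0):          {positive_net / N_DRAWS:.0%}")
print(f"P(benefit-cost ratio >= 1):   {bcr_at_least_one / N_DRAWS:.0%}")
```

Reporting such probabilities alongside the point estimates would give decision makers a clearer sense of how likely each alternative is to remain economically justified under uncertainty.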
Conclusions The economic analyses included in Corps feasibility studies provide important information about the potential economic effects of flood risk management projects. While the economic analyses the Corps conducted for the eight studies we reviewed were generally consistent with several best practices, the Corps did not fully employ best practices pertaining to transparency. Because the information in the economic analyses can be complex and technical, following best practices for transparency helps ensure that the methods used to develop estimates and conclusions are clearly and fully presented. By conducting future economic analyses for potential flood risk management projects so they are more consistent with transparency best practices, the Corps can better ensure that decision makers and stakeholders are clearly and fully informed about the potential economic effects associated with flood risk management projects. Recommendation for Executive Action We are making the following recommendation to the Department of Defense: The Assistant Secretary of the Army for Civil Works should direct the Chief of Engineers and the Commanding General of the U.S. Army Corps of Engineers to strengthen the Corps’ internal review process for feasibility studies by including steps to ensure consistency with best practices for transparency, such as verifying that all of the important assumptions and limitations in models and their implications for the economic analysis are consistently, clearly, and fully described. (Recommendation 1) Agency Comments We provided a draft of this report to the Department of Defense for its review and comment. In its written comments, reproduced in appendix I, the Department concurred with our recommendation. The Department further stated that guidance related to ensuring transparency in feasibility studies and reviews already exists, but acknowledged that it can be strengthened and enforced more consistently by specifically identifying transparency as a review criterion. For example, they stated that the Corps plans to establish systematic guidance for meeting the transparency objective in preparing reports, assure transparency through the agency’s quality assurance process, and assess the degree of transparency as part of agency technical review and quality control assessment. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Assistant Secretary of the Army for Civil Works, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the U.S. Army Corps of Engineers Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Vondalee R. Hunt (Assistant Director), Brad C. Dobbins (Analyst in Charge), Tim Carr, David Dornisch, Juan Garay, Tim Guinane, Gwen Kirby, Keesha Luebke, Jeanette Soares, Sara Sullivan, and Kiki Theodoropoulos made key contributions to this report.
Why GAO Did This Study The Corps, among other things, constructs flood risk management projects to reduce flood damage in threatened communities nationwide in collaboration with nonfederal sponsors. The Corps prepares feasibility studies to inform decision makers whether a proposed project warrants federal investment. In the studies, the Corps formulates and evaluates alternative plans for achieving the project's objectives and assesses whether the benefits of constructing it outweigh its costs. GAO was asked to review the methodology the Corps used in feasibility studies. This report examines, for 2015 through 2017, (1) the Corps' process for identifying and evaluating the benefits, costs, and effects of project alternatives; (2) the analyses the Corps used to recommend projects; and (3) the extent to which the Corps' economic analyses of benefits and costs are consistent with best practices. GAO reviewed Corps guidance; examined planning documents and economic analyses in flood risk studies that the Corps had most recently completed from 2015 through 2017 from eight districts; and compared the Corps' economic analyses with best practices in GAO's Assessment Methodology. What GAO Found In the eight flood risk management feasibility studies GAO reviewed (see figure), the U.S. Army Corps of Engineers (Corps) followed a six-step planning process consistent with its guidance to, among other things, identify and evaluate the beneficial and adverse effects of alternative plans for proposed projects. In doing so, the Corps used economic analyses to evaluate project-specific categories of potential monetary benefits and costs of alternative plans, such as flood damage reduction benefits and project construction costs. The studies also used separate analyses to evaluate other effects, such as on wildlife habitat and the health and safety of communities. In the eight studies GAO reviewed, the Corps typically recommended the alternative plan with the greatest net benefit, but also relied on other analyses in certain cases, as allowed under Corps guidance. Corps officials said they relied on other analyses to determine the best project design, help make decisions, or respond to local sponsors' preferences. For example, in one study, the Corps recommended a plan that provided a levee 3 feet higher than the plan with the greatest net benefits, in response to the nonfederal sponsor's request. The Corps' economic analyses in the eight studies were generally consistent with best practices, but did not fully adhere to practices for transparency. For example, most analyses did not discuss the implications of key limitations in the models and data used. Corps officials acknowledged that transparency could be improved through their review process. By having future analyses align with transparency best practices, the Corps can better inform decision makers about potential economic effects of flood risk projects. What GAO Recommends GAO recommends that the Corps strengthen its feasibility study review process by including steps to ensure consistency with transparency best practices. The agency concurred with the recommendation.
Background Transnational criminal organizations and terrorist organizations use a variety of money laundering schemes to disguise the origin and destination of their illicit proceeds and integrate their assets in legitimate financial entities. According to the U.S. government's 2018 National Strategy for Combating Terrorist and Other Illicit Financing, the criminal activities in the United States that generate the largest share of illicit proceeds for laundering are fraud, drug trafficking, human smuggling, human trafficking, organized crime, and government corruption. The Financial Action Task Force (FATF) has identified three primary methods of money laundering: the laundering of money through the financial system, the physical movement of money (such as through cash couriers), and trade-based money laundering (TBML). FATF has defined TBML as "the process of disguising the proceeds of crime and moving value through the use of trade transactions in an attempt to legitimize their illicit origins." The volume of international trade is significant and has grown over time. According to the World Trade Organization, in 2018, there was $19.67 trillion in international merchandise trade and $5.63 trillion in international services trade. Although international trade offers many economic opportunities for the United States and other countries around the world, the number and complexity of international trade transactions present a number of risks and vulnerabilities that make them susceptible to abuse by criminal and terrorist organizations. For example, the large volume of international trade complicates detection of individual illicit transactions. In the United States alone, on a typical day in fiscal year 2019, almost 79,000 containers and $7.3 billion worth of goods entered the country through ports of entry, according to U.S. Customs and Border Protection (CBP). Similarly, different studies have noted that the increasingly complex nature of international trade—with the movement of goods and services around the world and the use of various financing and payment structures—makes detecting suspicious transactions difficult. The Use of TBML to Launder Funds and Transfer Value TBML schemes can involve misrepresenting the price, quantity, or type of goods or services in trade transactions, but other types of TBML schemes, such as the Black Market Peso Exchange, do not need to rely on this type of misrepresentation. In misrepresentation schemes, the parties involved in the trade transaction may under- or over-invoice goods or services; issue multiple invoices for the same goods or services; provide more or less goods or services than the declared amount, including in some cases providing no goods or services; or falsely describe the types of goods or services provided. Through these types of misrepresentation, value can be transferred from one party to another and the illicit origins of criminal proceeds obscured. In a hypothetical TBML scheme involving the misrepresentation of the price of goods, a criminal organization in Country A needs to launder the proceeds from its criminal activity and move these proceeds to Country B. To accomplish this, the criminal organization will use the illicit proceeds to purchase 100,000 cell phones worth $100 each. The criminal organization will then make arrangements to export the 100,000 cell phones to a co-conspirator in Country B. However, the criminal organization in Country A will fraudulently invoice the cell phones at $10 each rather than $100 each.
Thus, the co-conspirator in Country B pays a total of $1 million for the cell phones, rather than their true value of $10 million. The co-conspirator then sells the cell phones at their true market value of $10 million in Country B, resulting in the criminal organization having successfully transferred $9 million in value from Country A to Country B through TBML. Figure 1 illustrates how such a price misrepresentation scheme works. Similarly, the criminal organization can transfer value through misrepresentation of the quantity or type of goods being exported. For example, the criminal organization can invoice its co-conspirator for 50,000 cell phones, but actually ship 100,000 phones, or it can claim that it is shipping different, lower-value items such as USB flash drives. Under a hypothetical Black Market Peso Exchange scheme, a criminal organization operating in Country A, which uses dollars, will take the dollar proceeds of its criminal activities to a currency broker's representative that has access to currency reserves from Country B (pesos). At the same time, in Country B, an import company will contact the currency broker seeking dollars to pay for goods that it wishes to import from Country A. The currency broker uses the dollars provided by the criminal organization to pay exporters in Country A on behalf of the importer in Country B. The importer receives and sells the goods in Country B and pays the currency broker in pesos. The currency broker then pays the criminal organization in Country B in pesos, completing the transfer of its proceeds. Thus, the criminal organization has successfully shifted the value of its proceeds from Country A to Country B without having to physically move money, or transfer funds through the banking system, from Country A to Country B. Figure 2 shows such a Black Market Peso Exchange scheme involving the United States and Colombia. TBML differs from other crimes, such as trade or customs fraud, that may occur in connection with trade and the movement of goods, according to Treasury officials. Organizations and individuals involved in TBML exploit vulnerabilities in international trade to move value across international borders in an attempt to disguise the origin, nature, or source of illicit proceeds, which may derive from a variety of predicate crimes. According to Treasury officials, while offenses like smuggling and fraud may resemble TBML, they differ in purpose. For example, smugglers attempt to evade detection or the payment of customs fees, duties, or taxes while moving legitimate, illicit, or restricted goods across borders. Similarly, in frauds involving the (purported) purchase or sale of goods, one of the parties to the transaction seeks to deceive another for financial gain. In TBML, the scheme may be accomplished using fraudulent documents, such as false invoices, but this is not a necessary part of the scheme, nor does it alone represent TBML. In TBML schemes that involve misrepresenting the price, quantity, or type of goods, both the buyer and seller normally understand that the goods shipped or funds paid may differ from what is stated in the supporting documents.
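The arithmetic underlying a misinvoicing scheme such as the cell phone example above can be stated compactly: the value transferred is the gap between the true market value of the shipment and its invoiced value. The short sketch below restates that calculation; the quantities and prices mirror the hypothetical example, and the function name is ours, not a term used by FATF or Treasury.

```python
# Arithmetic of the hypothetical under-invoicing example above: value moves
# from Country A to Country B because the goods are invoiced below their
# true market value. The function name and figures are illustrative only.

def value_transferred(quantity, true_unit_price, invoiced_unit_price):
    """Value shifted to the importer when goods are invoiced below market."""
    true_value = quantity * true_unit_price
    invoiced_value = quantity * invoiced_unit_price
    return true_value - invoiced_value

# 100,000 cell phones worth $100 each, fraudulently invoiced at $10 each.
shifted = value_transferred(100_000, 100, 10)
print(f"Co-conspirator pays: ${100_000 * 10:,}")
print(f"Goods resold for:    ${100_000 * 100:,}")
print(f"Value transferred:   ${shifted:,}")  # $9,000,000, matching the example
```

A Black Market Peso Exchange scheme, by contrast, moves value without any misinvoicing, through the offsetting broker payments described above. Legal and Regulatory Framework for Combating TBML Within the United States, a number of laws and regulations are used to combat TBML. The Bank Secrecy Act, which was passed in 1970, and implementing anti-money laundering (AML) regulations provide the legal and regulatory framework for preventing, detecting, and deterring money laundering in the United States.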
The Bank Secrecy Act regulations generally require banks and other financial institutions, such as money services businesses, securities broker-dealers, and certain types of insurance companies, among others, to, for example, collect and retain various records of customer transactions, verify customers' identities at the time of account opening, maintain AML programs, and report suspicious transactions or cash transactions over a certain amount. In addition, the Trade Facilitation and Trade Enforcement Act of 2015, signed into law in 2016, addressed trade facilitation and trade enforcement issues such as import safety, the protection of intellectual property, and the prevention of the evasion of duties, among other things. Further, individuals can be prosecuted under U.S. law, such as section 1956 of title 18 of the United States Code, for money laundering, including TBML schemes. For example, under section 1956, defendants can be prosecuted for money laundering activities, including those involving falsely classifying goods or entering goods by means of false statements. U.S. Agencies Involved in Efforts to Combat TBML Internationally Within the U.S. government, a number of agencies play a role in working with international partners to combat money laundering more broadly, as well as TBML specifically. These include the Department of Homeland Security (DHS), the Department of Justice (DOJ), the Department of State (State), and the Department of the Treasury (Treasury) and their component agencies and offices. DHS: Within DHS, U.S. Immigration and Customs Enforcement's (ICE) Homeland Security Investigations (HSI) investigates financial crimes and money laundering cases, including those involving TBML. HSI has established a Trade Transparency Unit (TTU) that seeks to identify global TBML trends, provide investigation support to HSI and other law enforcement efforts, and conduct ongoing analysis of trade data provided through partnerships with TTUs that it has helped establish in other countries. CBP is responsible for enforcing U.S. trade laws, facilitating compliant trade, collecting revenue, and protecting the U.S. economy and consumers from harmful imports and unfair trade practices. As part of its mission, CBP conducts targeting of high-risk shipments that may involve trade violations, including violations linked to TBML schemes. DOJ: The Drug Enforcement Administration (DEA) and the Federal Bureau of Investigation both conduct investigations of criminal organizations that may use TBML to launder their illicit proceeds. In addition, the DOJ Criminal Division's Money Laundering and Asset Recovery Section and U.S. Attorney's Offices throughout the country prosecute cases involving money laundering crimes, including TBML schemes. State: State's Bureau of International Narcotics and Law Enforcement Affairs (INL) leads State's AML technical assistance efforts with international partners. In this role, INL works in global and regional forums to promote the implementation of international AML standards. INL also funds AML assistance programs in countries around the world. Finally, INL publishes the annual International Narcotics Control Strategy Report, which includes an analysis of countries identified as "major money laundering countries." In addition to INL, State's Bureau of Economic and Business Affairs and Bureau of Counterterrorism also play a role in State's AML and countering the financing of terrorism (CFT) efforts. Treasury: Treasury's Financial Crimes Enforcement Network (FinCEN) collects, analyzes, and disseminates the financial intelligence information it collects pursuant to the Bank Secrecy Act to support efforts to combat financial crime, including money laundering.
FinCEN is responsible for administering the Bank Secrecy Act and coordinating with federal and state regulatory agencies on AML/CFT efforts. Additionally, FinCEN serves as the Financial Intelligence Unit (FIU) of the United States, which entails gathering and analyzing Suspicious Activity Reports (SAR) and other financial information relevant to money laundering, terrorist financing, and other financial crimes, as well as disseminating the results of this analysis to law enforcement and other competent authorities. A number of other Treasury agencies and offices also play a role in efforts to combat money laundering, including TBML. For example, Treasury’s Office of Technical Assistance (OTA) provides assistance to partner countries to help strengthen their efforts to combat economic crimes. Treasury’s Office of Terrorist Financing and Financial Crimes is the policy coordination office for illicit finance and develops and implements U.S. government strategies to combat all forms of illicit finance domestically and internationally. Internal Revenue Service Criminal Investigation investigates tax crimes and other financial crimes, including those associated with TBML schemes. It has lead authority for investigating criminal violations of the Bank Secrecy Act. International Bodies Involved in Efforts to Combat TBML Internationally, the U.S. government participates in a number of bodies that address issues related to TBML, including the Egmont Group, FATF, UNODC, and the WCO. The Egmont Group: The Egmont Group, formed in 1995, is composed of FIUs from 164 jurisdictions. The organization seeks to foster information exchange among its members to support efforts to combat money laundering and terrorist financing. In addition, the Egmont Group provides training and technical assistance to its member FIUs. FinCEN represents the United States at the Egmont Group. The Egmont Group’s Secretariat is located in Canada. FATF: FATF is an intergovernmental body, formed in 1989, that sets internationally recognized standards for developing AML/CFT regimes and assesses the ability of member jurisdictions to meet these standards. In addition, FATF works to identify specific money laundering methods and promotes international cooperation in disrupting and dismantling those money laundering schemes. FATF’s membership includes 37 jurisdictions and two regional organizations—the European Commission and the Gulf Cooperation Council. Treasury’s Office of Terrorist Financing and Financial Crimes heads the United States delegation to FATF. The FATF Secretariat is located in Paris, France. UNODC: UNODC is an agency within the United Nations, formed in 1997, that works to combat illicit drugs and other international crime in more than 150 countries throughout the world. As part of its mandate, UNODC carries out the Global Program against Money Laundering, Proceeds of Crime and the Financing of Terrorism. Through this program, UNODC seeks to strengthen the ability of United Nations member states to implement measures against money laundering and the financing of terrorism and to assist them in detecting, seizing, and confiscating illicit proceeds. State is the lead agency representing the United States at UNODC. UNODC is headquartered in Vienna, Austria and has field offices in 20 countries, as well as liaison offices in New York and Brussels, Belgium. 
WCO: The WCO, established in 1952, is an intergovernmental body whose mission is to enhance the effectiveness and efficiency of customs administrations around the world and to help them in their dual role of facilitating international trade while also promoting security. WCO's membership includes customs agencies from 183 countries. CBP is the lead agency representing the United States at WCO. The WCO's Secretariat is located in Brussels, Belgium.

Criminal and Terrorist Organizations Use a Variety of TBML Schemes, but Specific Estimates of TBML's Extent Are Unavailable

A Variety of Criminal and Terrorist Organizations Use TBML to Disguise the Origins of Their Illicit Proceeds and to Fund Their Operations

Different types of criminal and terrorist organizations use TBML to disguise the origins of their illicit proceeds and fund their operations. In some cases, these organizations may manage the TBML schemes directly, and in other cases, they may enlist the services of professional money launderers.

Drug trafficking organizations. Drug trafficking organizations throughout Latin America, including in Colombia and Mexico, have used TBML schemes for decades to launder the proceeds from illegal drug sales. These organizations make billions of dollars from the sale of illegal drugs in the United States and elsewhere. Although much of this revenue remains with the ultimate sellers of the illegal drugs in the United States, significant amounts of illicit proceeds are sent back to drug trafficking organizations in supplier countries, including through TBML schemes. For example, in a 2017 reporting cable on Colombia's cocaine economy, State noted that U.S. law enforcement agencies and independent economists have estimated that somewhere between $5 billion and $10 billion in cocaine proceeds are laundered back to Colombia each year, frequently using TBML schemes. U.S. government reporting, including Treasury's 2020 National Strategy for Combating Terrorist and Other Illicit Financing and DEA's 2019 National Drug Threat Assessment, and various U.S. officials noted that a key TBML trend in recent years is the increasing involvement of Chinese criminal organizations in TBML globally, including in the United States. Chinese money laundering networks are working increasingly with Mexican drug cartels to assist the cartels in laundering drug proceeds. In addition, U.S. government reporting, including the 2018 National Money Laundering Risk Assessment, and U.S. officials noted that Chinese criminal gangs are using TBML schemes to repatriate proceeds from the sale of synthetic opioids in the United States and around the globe.

Other criminal organizations. In addition to drug trafficking, criminal organizations have used TBML schemes to launder proceeds from a range of other crimes, including illegal mining, human trafficking, and the sale of counterfeit goods. For example, criminal organizations in Colombia have used TBML to disguise the origins of illegally mined gold, in exchange for funds, according to U.S. Embassy Bogotá and Colombian government officials we interviewed.

Corrupt government officials. In certain countries, senior government officials and government entities have used TBML schemes to disguise profits derived from corrupt practices, according to U.S. government reporting. For example, FinCEN has reported that senior government officials in Venezuela have used TBML as part of schemes to steal money from the Venezuelan government's food distribution program.
Terrorist organizations. Terrorist organizations, including Hezbollah and the Revolutionary Armed Forces of Colombia (known by its Spanish acronym FARC), have also used TBML schemes to launder funds. For example, a number of U.S. officials and knowledgeable sources have noted that Hezbollah operates a number of TBML schemes in the Tri-Border Area in South America, where Argentina, Brazil, and Paraguay meet, which help to fund the terrorist organization's activities around the world.

Criminal and Terrorist Organizations Use a Range of TBML Schemes Involving Many Different Goods and Services

Criminal and terrorist organizations use a range of TBML schemes with varying levels of complexity. In many instances, these organizations combine TBML techniques with other forms of money laundering, such as bulk cash smuggling and the laundering of funds through the banking system. The U.S. government, foreign governments, and international bodies have identified a number of different examples of the types of TBML schemes that occur. For example:

In one case described in Treasury's 2018 National Money Laundering Risk Assessment and ICE press releases, HSI led an investigation, known as Operation Fashion Police, which targeted businesses in the Los Angeles Fashion District that were suspected of being involved in Black Market Peso Exchange schemes to launder the proceeds of illegal drug sales on behalf of international drug cartels. As a result of the investigation, two owners of a textile company pled guilty to using the business to receive bulk cash that they knew or believed to be the proceeds of narcotics trafficking and part of a Black Market Peso Exchange scheme. The two individuals received approximately $370,000 in cash delivered on four separate occasions as payment for goods shipped to Mexico, Guatemala, and other countries in Latin America. Operation Fashion Police, along with several related investigations, also resulted in the seizure of tens of millions of dollars in bulk cash stashed at warehouses in the Los Angeles area.

In one case identified by Treasury, DOJ indicted seven co-conspirators for participating in an international TBML scheme. The individuals are alleged to have used family-owned import-export businesses in Long Island and Miami to launder millions of dollars in illegal drug proceeds. As part of the scheme, the defendants are alleged to have taken in bulk cash deliveries from drug dealers in the United States and disguised the transfer of money to South America and elsewhere through the actual and purported purchase and export of mobile phones.

In another case, according to U.S. government information provided to FATF, Colombian drug cartel representatives in the United States deposited proceeds from illegal drug sales into the U.S. financial system. The cartel then used these funds to buy gold from Colombia, which it imported into the United States. The cartel representatives in the United States then melted down the gold and recast and enameled it to disguise it as low-value items such as nuts and bolts. The cartel then exported the disguised gold back to Colombia, where it was melted down once again and the process was repeated. Through this scheme, the cartel was able to use the same gold to justify multiple payments to its representatives in Colombia, thus transferring proceeds from its U.S. operations.

In Australia, according to U.S.
Embassy Canberra officials, Chinese criminal organizations give Australian dollars from drug sales to individual Chinese nationals, known as Daigou shoppers, who pose as retail shoppers and use the funds to purchase various items in Australia on behalf of buyers in China who want to purchase higher quality foreign goods. The Daigou shoppers then ship the items to the buyer or deliver them by hand. The buyers in China then pay the Chinese criminal organizations, in Chinese yuan, for the items. Through this TBML scheme, the criminal organizations are able to move their proceeds to China without going through the financial system.

Finally, in Benin, Lebanese financial institutions linked to Hezbollah were involved in schemes that used TBML to launder funds and move criminal proceeds through West Africa and back to Lebanon, according to State reporting in its 2015 International Narcotics Control Strategy Report. The criminals using these schemes wired funds from Lebanon to the United States to buy used cars, which were then shipped to Benin and sold throughout West Africa. The criminals then combined the profits from the sale of these cars with the proceeds from drug sales in Europe and subsequently sent the funds back to Lebanon via bulk cash smuggling and deposited the funds into the Lebanese financial system.

According to information from different U.S. agencies, international bodies, and partner countries, criminal and terrorist organizations use a wide variety of goods in TBML schemes, but HSI analysis has found the most common items are precious metals, automobiles, clothes and textiles, and electronics (see fig. 3). As of 2018, HSI reported that approximately 70 percent of its TBML-related casework involved these four types of goods. However, criminal and terrorist organizations use any number of different goods in TBML schemes. For example, U.K. government officials told us about a scheme involving the misrepresentation of dental equipment as books in a series of exports from the United States to the United Kingdom.

In addition to international trade in goods, available evidence indicates that TBML schemes, at times, involve international trade in services. According to HSI, under some TBML schemes, shell companies are created that issue invoices for consulting or other professional services, which are used to justify the international movement of funds as payment for the invoiced services. U.S. agencies and other sources have noted the potential for TBML schemes involving services such as consulting, accounting, and web design, among others.

Various U.S. agencies, international bodies, and knowledgeable sources have identified a number of "red flags" that may indicate TBML schemes. For example, table 1 includes a list of nine red flag indicators that HSI has identified related to TBML schemes. A simplified illustration of how indicators of this kind might be applied to transaction records appears below.
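To make the idea of red flag screening concrete, the following is a minimal sketch of how a rule-based screen over trade transaction records might work. The indicators, field names, and thresholds are illustrative assumptions drawn from commonly cited TBML warning signs; they are not HSI's actual nine indicators or any agency's production logic.

```python
# Illustrative sketch only: a rule-based screen that applies hypothetical
# TBML red-flag indicators to trade transaction records. The indicators,
# field names, and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TradeTransaction:
    goods: str              # declared description of the goods
    declared_value: float   # invoice value, in U.S. dollars
    reference_value: float  # independent market-value estimate, if any
    payer: str              # party making the payment
    buyer: str              # party receiving the goods
    payment_method: str     # e.g., "wire", "bulk cash", "letter of credit"

def red_flags(txn: TradeTransaction) -> list[str]:
    """Return the illustrative red-flag indicators that a transaction trips."""
    flags = []
    # An invoice value far from an independent reference value may suggest
    # over- or under-invoicing; the 50 percent band here is arbitrary.
    if txn.reference_value > 0:
        ratio = txn.declared_value / txn.reference_value
        if ratio > 1.5 or ratio < 0.5:
            flags.append("declared value inconsistent with reference value")
    # Payment made by a third party unrelated to the buyer of record.
    if txn.payer != txn.buyer:
        flags.append("payment by an apparently unrelated third party")
    # Bulk cash payment for trade goods.
    if txn.payment_method == "bulk cash":
        flags.append("bulk cash payment for trade goods")
    return flags

# Example: an under-invoiced shipment paid for in cash by a third party.
txn = TradeTransaction("used automobiles", 40_000, 100_000,
                       payer="Shell Co A", buyer="Importer B",
                       payment_method="bulk cash")
print(red_flags(txn))  # trips all three illustrative indicators
```

In practice, a screen of this kind would only prioritize transactions for analyst review; as discussed later in this report, many trade records lack the reference data needed to evaluate such indicators reliably.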
Many Countries around the World Face TBML Risks

U.S. agencies have identified a number of countries around the world as being at risk for money laundering more generally and TBML specifically. For example, State's annual International Narcotics Control Strategy Report (INCSR) identifies "major money laundering countries," as required by the Foreign Assistance Act. Over the last 5 years, the INCSR has identified, on average, almost 80 countries as being major money laundering countries. In addition, State has identified countries that face TBML-specific risks in the country reports included within the INCSR each year. For example, in our review of the 2019 INCSR, we found that State had cited TBML risks in 26 countries or territories in a number of different regions of the world. Previously, HSI conducted an analysis of TBML-related SARs filed by financial institutions with FinCEN in fiscal year 2012. Of the 474 TBML-related SARs that financial institutions filed during this period, HSI found that 93 different countries or territories were referenced, with the five most frequently mentioned being Nigeria, Hong Kong, Mexico, Venezuela, and Panama. More recently, in 2019, HSI identified Mexico, China, Colombia, the United Arab Emirates, Ecuador, Peru, Venezuela, and the United Kingdom as its key countries of TBML concern.

In addition to identifying different countries that are vulnerable to money laundering, the U.S. government and FATF, among others, have identified free trade zones as particular areas of risk for TBML. In a 2010 report on money laundering vulnerabilities in free trade zones, FATF identified approximately 3,000 free trade zones located in 135 countries and noted they had systemic weaknesses making them susceptible to money laundering and terrorist financing. These weaknesses included less stringent AML/CFT reporting requirements, relaxed oversight by responsible government authorities, and weak procedures for inspecting goods, among other things. Similarly, the 2019 INCSR notes that the 114 free trade zones in Colombia are vulnerable to TBML due to inadequate regulation, supervision, and transparency.

Specific Estimates of the Amount of TBML Globally Are Unavailable, but Evidence Suggests It Is Likely Substantial and Has Increased in Recent Years

Available evidence from the U.S. government, international bodies, and knowledgeable sources suggests that the amount of TBML occurring globally is substantial and has increased in recent years. State has reported that the amount of money laundered through TBML schemes may potentially be up to hundreds of billions of dollars globally, every year. Some U.S. officials and knowledgeable sources believe that, based upon available evidence, TBML is likely one of the largest forms of money laundering. In addition, various U.S. government reports and officials, as well as knowledgeable sources, have stated that as countries have strengthened their controls to combat other forms of money laundering, there are indications that criminal and terrorist organizations have increased their use of TBML to launder their funds. For example, FinCEN has reported that since the Mexican government increased restrictions on U.S. dollar cash deposits at Mexican financial institutions in 2010, Mexican drug cartels appear to have increasingly turned to TBML as an alternative means of repatriating profits from U.S. drug sales. Similarly, in Australia, as controls on large cash deposits at ATMs have increased since 2017, criminals have increased their use of TBML to hide their profits, according to U.S. officials at Embassy Canberra. In addition, the 2020 National Strategy for Combating Terrorist and Other Illicit Financing notes that there has been a steady decrease in seizures related to bulk cash smuggling from 2012 through 2018 and states that this decrease could indicate that criminal organizations are increasingly turning to other means to move illicit money, including TBML. Although various observers believe the magnitude of TBML is large, specific estimates of the amount of TBML occurring around the world are unavailable.
A number of academic studies have sought to quantify various aspects of illicit financial flows and money laundering. Although the results of such studies can shed light on the potential volume of TBML, none of those we identified in our literature review sought to develop estimates of TBML specifically, and the studies we reviewed all had certain methodological limitations. Based upon our review of relevant literature, we found that academic studies seeking to quantify potential illicit financial flows do not establish the exact extent of TBML. These studies capture activities that are generally broader than TBML, such as tax avoidance, trade price manipulation, or trade misinvoicing, which demonstrates the difficulty in estimating the exact magnitude of TBML activity. For example, one academic researcher analyzed U.S. Census Bureau trade data over time to estimate money moved in and out of the United States through trade price manipulation, which involves prices falling outside of an expected range. The stated objectives of trade price manipulation in this study include not only TBML, but also income tax avoidance or evasion, among other things. Therefore, measurement of trade price manipulation is generally broader than that of TBML. For 2018 alone, this researcher estimated that trade price manipulation accounted for approximately $278 billion moved out of and $435 billion moved into the United States. A simplified sketch of this type of price-filter analysis appears below.
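To illustrate the general approach behind such studies, the following is a minimal sketch that flags trade records whose unit prices fall outside an expected range derived from comparable transactions. The data are invented, and the interquartile-range filter and cutoff are assumptions for illustration; published studies use far richer data and more sophisticated filters.

```python
# Minimal sketch of a price-filter analysis over hypothetical trade data.
# Records whose unit price falls outside an expected range (here, 1.5
# interquartile ranges beyond the quartiles of comparable transactions)
# are flagged as possible over- or under-pricing.
import statistics

# Hypothetical unit prices (dollars per unit) for one commodity code.
unit_prices = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 3.1, 10.3, 42.0, 10.1]

def expected_range(prices: list[float]) -> tuple[float, float]:
    """Compute lower and upper bounds from the interquartile range (IQR)."""
    q1, _, q3 = statistics.quantiles(prices, n=4)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

low, high = expected_range(unit_prices)
flagged = [p for p in unit_prices if p < low or p > high]
print(f"expected range: {low:.2f} to {high:.2f}")
print(f"flagged unit prices: {flagged}")  # the 3.1 and 42.0 outliers
```

A flagged price is only an anomaly, not proof of wrongdoing, which is one reason estimates built this way capture activity broader than TBML.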
Global Financial Integrity, a nonprofit organization dedicated to studying the cross-border flow of illegal money, has analyzed International Monetary Fund and United Nations data to develop an estimate of potential trade misinvoicing between developing and advanced economies. In a 2019 report, it calculated the illicit financial flows to and from 148 developing countries from 2006 to 2015. For 2015, it estimated that potential trade misinvoicing to and from these 148 developing countries was between $0.9 trillion and $1.7 trillion. Global Financial Integrity defines trade misinvoicing as a method for moving money illicitly across borders that involves the deliberate falsification of the value, volume, or type of commodity in an international commercial transaction of goods or services by at least one party to the transaction. Therefore, measurement of trade misinvoicing is also generally broader than that of TBML. Appendix II provides additional details on our literature review and efforts to quantify illicit financial flows, including TBML.

Certain international bodies, such as UNODC, and other organizations have produced estimates of the amount of criminal proceeds and the volume of money laundering more broadly. For example, in 2011, UNODC conducted a meta-analysis of the results of various studies and estimated that in 2009 the amount of funds available for laundering, including TBML, was likely around 2.7 percent of global gross domestic product, or $1.6 trillion. However, the report's authors noted that the studies reviewed in the meta-analysis contained a range of methodological issues and information gaps.

FinCEN data on SARs related to TBML can also provide an indication of the potential volume of TBML activity that financial institutions have detected. In 2010, FinCEN issued an advisory on TBML that found that financial institutions had filed over 17,000 SARs related to potential TBML between January 2004 and May 2009, involving over $276 billion worth of transactions. In addition, we analyzed FinCEN data from more recent years, using a different methodology, and found financial institutions had filed 7,044 SARs related to TBML from 2014 to 2018, including 1,673 in 2018. FinCEN officials noted that the number of TBML-related SARs is a small portion of the total of 9.6 million SARs it received over this period. However, FinCEN officials also acknowledged that financial institutions may not have enough information on many trade transactions to determine whether there is suspicious activity and whether that suspicious activity is potentially related to TBML schemes. In addition, FinCEN officials noted that suspicious activity related to TBML schemes could be reported under different categories.

Officials and Studies Recommended Various Practices that Countries Could Adopt to Detect and Combat TBML

Officials and reporting from relevant international bodies and selected partner countries, as well as knowledgeable sources, have recommended that governments consider a number of different practices to strengthen their efforts to detect and combat TBML. After reviewing and analyzing these sources, we identified and grouped these recommended practices into the following five categories: (1) partnerships between governments and the private sector, (2) training in detecting and combating TBML, (3) sharing information through interagency collaboration, (4) international cooperation through information and knowledge sharing, and (5) further research on challenges, such as potential impediments to combating TBML. In addition, we identified examples of steps the United States and other countries have taken in line with these practices. Officials and knowledgeable sources also noted some potential difficulties in implementing some of the recommended practices that have been identified.

Partnerships between Governments and the Private Sector

Reporting from relevant international bodies and certain partner countries, as well as knowledgeable sources, have proposed that governments develop partnerships with the private sector to combine and collectively analyze information needed to identify potential TBML schemes and trends. Through these partnerships, representatives from the private and public sector could meet on a regular basis to share information on suspicious activity that may warrant further investigation. For example, FATF's guidance paper Best Practices on Trade Based Money Laundering stated that governments should consider conducting periodic joint meetings with the private sector to discuss emerging TBML trends. Governments can also provide feedback to private sector entities on what information is helpful as they conduct investigative work. FATF standards on information sharing state that anti-money laundering authorities should provide feedback to financial institutions to assist them with complying with AML requirements in the countries in which they are operating. For example:

U.S. example: In 2017, FinCEN publicly launched the "FinCEN Exchange" to enhance information sharing between FinCEN, law enforcement agencies, and financial institutions. FinCEN invites financial institutions to voluntarily participate. As of December 2018, FinCEN had convened more than a dozen briefings with law enforcement agencies across the country, involving more than 40 financial institutions. According to FinCEN officials, through the FinCEN Exchange, the U.S. government and the private sector are able to exchange information on priority illicit finance threats, including TBML.
For example, according to Treasury officials, FinCEN convened a FinCEN Exchange focused on TBML in San Antonio, Texas, in April 2018. According to Treasury's 2018 National Strategy for Combating Terrorist and Other Illicit Financing, the information provided by financial institutions through the FinCEN Exchange briefings has assisted FinCEN in targeting TBML networks.

Other country example: In 2015, the United Kingdom established the Joint Money Laundering Intelligence Task Force as a collaborative mechanism between the U.K. government and the private sector to share and collectively analyze information on money laundering and economic crime threats. The task force brings together a range of private and public sector organizations, including law enforcement agencies and financial institutions. According to U.K. officials, TBML is one of the four priority areas of the task force. The task force has established six expert working groups led by representatives of the financial sector, including a TBML expert working group. Among other things, the TBML expert working group offers expert witness statements on TBML to support criminal prosecutions.

In addition to sharing information with and providing feedback to financial institutions, several knowledgeable sources and reports from international bodies stated that these partnerships should also include a broad range of private sector entities involved in international trade. Several knowledgeable sources have highlighted the need for other private sector entities involved in international trade, such as shipping companies, freight forwarders, and customs brokers, to play a role in working with governments to identify TBML activities. One knowledgeable source noted that broader partnerships are important because banks and other financial institutions have a limited ability to detect indicators of potential TBML in a majority of trade transactions. For example, according to the Wolfsberg Group, 80 percent of international trade is conducted through open-account trade. With open-account trade, the transaction is not financed by a bank. Banks are generally not involved beyond processing the buyer's payment to the seller and do not typically receive supporting documentation related to the transaction. Financial institutions therefore have limited visibility into open-account transactions and limited ability to identify suspicious activity.

Several knowledgeable sources and reports from certain partner countries also acknowledged that challenges exist to creating partnerships with the private sector. They emphasized that for these partnerships to be successful, governments should ensure all participants trust that any information they share will be handled appropriately. For example, one knowledgeable source noted that countries could develop standards for information sharing between banks, while providing assurances about data security, privacy, and confidential commercial information. In addition, several knowledgeable sources and reports from partner countries stated that countries should address challenges related to privacy laws that prohibit banks from sharing client information or barriers restricting government agencies from sharing intelligence information with private sector partners.
Training for Government Agencies and Private Sector Entities Involved in Detecting and Combating TBML

Relevant international bodies, including FATF, and knowledgeable sources stated that, given the complexity of TBML and the difficulty of detecting it, governments could consider providing additional training to relevant government officials on techniques to detect and counter the threat. Such training would be provided to government agencies, such as customs and tax collection agencies, and tailored to meet the specific requirements and needs of different government authorities. Several knowledgeable sources and reports from international bodies noted that governments should also conduct events and other outreach activities to educate private sector entities. Some stated that such events and outreach activities could help increase the capacity of personnel at banks and other financial institutions to identify the characteristics, emerging trends, and new methods of TBML. According to FATF's guidance paper on TBML, governments could organize conferences on the topic or develop materials to help inform staff of various private sector organizations who monitor suspicious financial activity and potential TBML risks. For example:

U.S. example: In 2018, FinCEN organized a conference on TBML for several U.S. agencies involved in combating TBML, including HSI, CBP, and Internal Revenue Service Criminal Investigation, in addition to government officials from partner countries and non-government participants. The conference provided presentations on a range of issues related to TBML, such as the vulnerabilities in the gold industry that make it susceptible to TBML and the evolution of the Black Market Peso Exchange. In 2019, FinCEN organized an additional conference focused on TBML and bulk cash smuggling.

Other country example: The Mexican government is working with State/INL to develop anti-money laundering experts and to build an AML task force. INL also created a training program to certify compliance officers, state auditors, prosecutors, analysts, and regulators in Mexico City on TBML.

Several U.S. embassy officials noted that some partner countries needed to account for additional factors when creating TBML-specific training. They stated that before receiving TBML training, some partner countries needed to build more basic foundational skills. For example, U.S. embassy officials in Colombia stated that their priority is to provide Colombian prosecutors with more basic training on prosecutorial skills, such as presenting oral arguments, before offering advanced training, such as how to build a TBML case.

Sharing Information through Interagency Collaboration

Several knowledgeable sources, partner country officials, and international body reports we reviewed recommended that governments share information and data through domestic interagency collaboration to combat TBML. According to United Kingdom officials and an international body report, sharing trade data and relevant financial information, such as SARs, through an interagency approach is critical because TBML and its predicate crimes often cut across multiple agencies and their authorities and responsibilities. Agencies also bring different skill sets to investigations, such as expertise on customs enforcement, financial crimes, and trade data analysis.
To foster interagency collaboration, several knowledgeable sources stated that governments could consider creating multi-agency task forces or mechanisms to address the challenges posed by TBML. For example:

U.S. example: The El Dorado Task Force is an interagency investigative body that consists of 55 law enforcement agencies in New York and New Jersey, including federal agents, state and local police investigators, intelligence analysts, and federal prosecutors. The task force contains 12 groups, including one focused specifically on TBML. Officials from the El Dorado Task Force stated that as an interagency task force, it is able to use the respective expertise of various agencies and analyze multiple sources of information, such as international trade and Bank Secrecy Act data, in its investigative work.

Other country example: The United Kingdom created the National Economic Crime Centre, which involves officials from multiple agencies, including law enforcement and regulatory bodies. The National Economic Crime Centre's mission is to strengthen and prioritize the U.K. government's coordination efforts by combining operational capabilities, data, and intelligence to target economic crime. To target specific crimes, the National Economic Crime Centre has created working groups, including one on TBML, to further cooperation and build expertise.

Several U.S. embassy officials and host country officials stated that some countries may be hesitant to share information with all of the agencies involved in combating TBML. These officials noted that issues such as corruption and lack of trust between agencies might limit the willingness and ability of countries to share information. For example, several Colombian government officials stated that corruption in their government limits the number of counterparts from other agencies that they can trust to collaborate with on combating TBML.

International Cooperation through Information and Knowledge Sharing

Several officials from certain partner countries, knowledgeable sources, and reports we reviewed stated that trade partners could share trade data and relevant financial information with each other through bilateral or multilateral partnerships. Officials and international body reports also emphasized how important it is for countries to see both sides of trade transactions in order to detect anomalies that might reveal TBML activities. FATF reports noted governments could work together to create a secure system or mechanism that countries could use to exchange trade data and financial information. According to the Asia/Pacific Group on Money Laundering's APG Typology Report on Trade Based Money Laundering, governments could coordinate international capacity building efforts with partner country counterparts, such as sharing strategies on combating TBML and emerging trends related to TBML. For example:

U.S. example: As part of its TTU program, HSI has established a formalized bilateral mechanism with a number of partner countries, particularly in the Western Hemisphere, to exchange and conduct ongoing analysis of trade data to facilitate the detection of suspicious TBML-related activities. By sharing these data, HSI and each of its partner TTUs are able to see import and export data for goods moving between the United States and the partner country.
Other country example: The Paraguayan government has taken initial steps to coordinate with several countries in the region, including Chile, Uruguay, and Argentina, to try to increase the sharing of trade information. According to a U.S. embassy official in Paraguay, the Paraguayan government also participates in a regional security mechanism with Brazil, Argentina, and the United States to address broader regional security threats, including money laundering activities. Figure 4 shows photos from Ciudad del Este, Paraguay, on Paraguay's border with Brazil and Argentina, a region that has been identified by U.S. and Paraguayan officials as a key hub of TBML activity.

U.S. officials and knowledgeable sources, however, noted several challenges to international cooperation related to technology and data uniformity. For example, officials from HSI stated that while international cooperation is critical to combat TBML, changes in government administration and technological limitations affect the continuity of and the commitment to information sharing with foreign partners. In addition, U.S. officials and reports we reviewed stated that countries could consider enhancing and creating more uniformity in their data collection efforts so that they could use the data more effectively to combat TBML. For example, U.S. embassy officials and knowledgeable sources stated that countries need a common format or trade transaction identifier to allow countries to match import and export data more easily. HSI and partner country officials noted that, without a common identifier, they have faced difficulties connecting the import and export sides of trade transactions as they have sought to analyze trade data to identify potential cases of TBML (a simplified illustration of such matching appears below). In addition, while some U.S. officials and knowledgeable sources see arrangements for sharing trade data between multiple countries as a possible means of improving detection of TBML-related activities, U.S. officials said that a lack of trust among countries complicates such efforts. U.S. officials and officials from countries we visited noted that countries might be reluctant to share their trade data more widely through multilateral mechanisms due to perceived risks the sharing of such important information might have on their commercial competitiveness. These officials noted the difficulty in creating a multilateral TTU because of these limitations.
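The following minimal sketch illustrates, under assumed data, why a shared identifier matters: with a common key, each export record can be joined directly to its corresponding import record and the declared values compared. The record layout, identifier format, and the 20 percent discrepancy threshold are hypothetical; this is not a depiction of DARTTS or of any country's actual trade data.

```python
# Illustrative sketch: joining export and import records on a shared
# transaction identifier and flagging value discrepancies. All data,
# field names, and thresholds are hypothetical assumptions.

# Exporting country's records, keyed by an assumed shared identifier.
exports = {
    "TX-0001": {"goods": "textiles", "declared_value": 50_000},
    "TX-0002": {"goods": "electronics", "declared_value": 120_000},
}

# Importing country's records for the same shipments.
imports = {
    "TX-0001": {"goods": "textiles", "declared_value": 310_000},
    "TX-0002": {"goods": "electronics", "declared_value": 118_000},
}

def flag_discrepancies(threshold: float = 0.20) -> list[str]:
    """Flag transactions whose declared values differ by more than threshold."""
    findings = []
    for txn_id, exp in exports.items():
        imp = imports.get(txn_id)
        if imp is None:
            findings.append(f"{txn_id}: no matching import record")
            continue
        gap = abs(exp["declared_value"] - imp["declared_value"])
        if gap / exp["declared_value"] > threshold:
            findings.append(f"{txn_id}: export declared ${exp['declared_value']:,}, "
                            f"import declared ${imp['declared_value']:,}")
    return findings

for finding in flag_discrepancies():
    print(finding)  # TX-0001 shows a large gap of the kind that may indicate misinvoicing
```

Without a common identifier, the join above would instead require matching on dates, parties, and goods descriptions, which is considerably less reliable and helps explain the difficulties HSI and partner officials described.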
Conducting Further Research on Challenges to Combating TBML

Multiple knowledgeable sources, as well as reports from international bodies, stated that governments could conduct further research on challenges that reduce their ability to combat TBML effectively, including potential impediments. According to the Asia/Pacific Group on Money Laundering's report on TBML, developing a comprehensive strategy would help governments address key challenges to combating TBML while also facilitating legitimate trade. In addition, one partner country report highlighted the need for an ongoing assessment of TBML to address challenges as the threat continues to evolve. For example:

U.S. example: In 2015 and 2018, Treasury produced the National Money Laundering Risk Assessment, identifying the money laundering threats and risks, including TBML, that confront the United States. The assessments also identify the challenges U.S. agencies face in combating money laundering. For example, the 2018 assessment found that merchants sometimes knowingly accept illicit payments in exchange for trade goods without reporting the transactions and that individuals can abuse their professional position at financial institutions by ignoring suspicious transactions.

Other country example: In 2017, the Government of Singapore worked with private sector entities to identify and assess key issues that Singapore faced related to money laundering. As a result of that study, in 2018, the government produced the Best Practices for Countering Trade-Based Money Laundering report. The study found, for example, that banks should periodically conduct a risk assessment of risk factors related to TBML and test TBML red flags for effectiveness.

Several knowledgeable sources stated that international bodies could examine any challenges and provide additional guidance to member countries on combating TBML. According to Treasury officials, FATF is currently examining operational challenges related to TBML to provide additional guidance to member countries on combating it. These officials indicated that this new study should provide an updated definition of TBML to better distinguish money laundering activity from other criminal activity. Additionally, an official from Treasury's Office of Terrorism and Financial Intelligence said the best practices in FATF's 2008 report were still relevant and that FATF has produced other reports related to TBML since then, such as its 2010 report on money laundering vulnerabilities in free trade zones. In that report, FATF noted a number of challenges related to combating TBML in these zones. For example, it reported that relaxed oversight and lack of data collection in free trade zones make them vulnerable to these schemes.

Knowledgeable sources and reports from international bodies and a partner country also recommended further research about other impediments that challenge the ability of governments to combat TBML. For example, reports from international bodies and a partner country highlighted the ease with which shell companies can be established in many jurisdictions and the lack of transparency regarding the beneficial owners of such shell companies. According to FATF and various U.S. officials, criminal organizations can use shell companies to funnel illicit money through accounts that obscure the source of the funds. FATF recommends in its international standards that countries take measures to ensure relevant authorities have timely access to information on the ownership and control of legal persons.

U.S. Agencies Have Taken Steps to Partner with Countries and International Bodies to Combat TBML, but Opportunities Exist to Enhance the TTU Program

U.S. Agencies Provide a Range of Support to Partner Countries Related to Combating TBML

DHS, DOJ, State, and Treasury provide a variety of support to partner countries to assist in combating TBML, including establishing information-sharing methods, funding training and technical assistance, and providing ongoing law enforcement cooperation.

Establishing Information-Sharing Methods

The U.S. government's primary partnership effort focused specifically on combating TBML is HSI's TTU program. Under the program, HSI has set up TTUs in 17 partner countries. HSI established the first TTU with Colombia in 2005 and the most recent one with New Zealand in 2019.
HSI’s goal with the TTU program is to exchange trade data with its partner TTUs to allow agencies in each country to work together to better identify anomalies in trade data that may indicate TBML. For example, through the analysis of shared trade data, HSI and a partner TTU may be able to determine if there is a discrepancy between the reported value of goods when they leave the United States and the reported value of the goods when they arrive in the partner country (and vice versa). There are four key steps that HSI and a partner country undertake in establishing a TTU, according to HSI officials: As a precondition for setting up a TTU, a country must have a Customs Mutual Assistance Agreement or similar information sharing agreement in place with the United States. HSI then negotiates a memorandum of understanding (MOU) with the relevant counterpart agency setting out the details of the partnership. Once the partner country signs the MOU, HSI provides the partner TTU access to its specialized system for analyzing trade data—the Data Analysis and Research for Trade Transparency System (DARTTS). HSI also provides training to the partner TTU on the system’s use. Table 2 shows the partner countries participating in the TTU program and how often HSI and each country share data. In addition to the TTU program, U.S. agencies have established other methods for sharing information with partners overseas that support efforts to combat money laundering, including TBML. For example, U.S. officials at Embassy Canberra reported that HSI had set up a pilot program in which the U.S. government shares its Reports of International Transportation of Currency or Monetary Instruments with the Australian Border Force. By comparing the U.S. information with what the Australian Border Force collects, the Australian Border Force has been able to identify and apprehend a number of bulk cash smugglers, according to Embassy Canberra officials. U.S. agencies have also worked to organize a number of ongoing or ad hoc forums for sharing information related to transnational crime, including money laundering and other economic crime. For example, DOJ’s Office of Overseas Prosecutorial Development, Assistance and Training has organized, with State support, two sessions of the Transnational Criminal Organizations Working Group, which brings together officials from the United States, Colombia, and Mexico to participate in specialized training and to develop joint strategies and best practices for combating transnational criminal organizations that threaten the three countries. According to an Office of Overseas Prosecutorial Development, Assistance and Training official at Embassy Bogotá, combating money laundering, including TBML, was a focus of the group’s most recent session in June 2019. Funding Training and Technical Assistance State and Treasury’s OTA have funded a range of foreign assistance programs in partner countries that provide training and technical assistance related to combating money laundering and economic crimes. State allocated approximately $90 million in fiscal years 2014 through 2018 to programs to counter financial crimes and money laundering throughout the world. 
According to State, this funding supported a range of programs, including programs to assist countries in drafting legislation and regulations; training bank regulators and examiners, financial investigators, prosecutors, and judges; and strengthening the ability of FIUs in partner countries to receive, analyze, and disseminate suspicious activity reports, among other things. Although State has not funded any programming that focused exclusively on TBML during this period, it reported that it allocated approximately $5 million in fiscal years 2014 through 2018 for programs that included a substantial amount of information on the investigation, enforcement, or prosecution of TBML. For example, according to State, it has funded a series of projects to reform Peru's criminal justice system that, among other things, helped strengthen the country's ability to fight TBML. More recently, in fiscal year 2019, State noted that it has allocated approximately $5 million to the WCO for a project focused specifically on TBML. According to State, through this program, WCO will build the capacity of customs agencies to detect and deter smuggling and misreporting used to facilitate TBML.

Treasury's OTA allocated approximately $20 million in fiscal years 2014 through 2018 for projects to counter economic crimes throughout the world. Through these projects, OTA funds advisors: either a resident advisor who remains in the host country for several years, or intermittent advisors who travel to the host country for short-term assignments. According to Treasury, these projects support the implementation of AML/CFT legal and regulatory regimes, as well as host government institutions that are able to combat economic crimes. Although OTA has not funded any projects focused specifically on TBML, it stated that OTA advisors routinely discuss with their country partners the different methods that criminals use to launder money, including TBML. According to OTA, its assistance has addressed TBML to varying degrees in a number of projects. For example, OTA helped Peru's tax and customs authorities to develop training for the Peruvian National Police Money Laundering Unit on how to best use customs databases to identify potential leads in TBML cases.

Providing Law Enforcement Cooperation

Law enforcement agencies, including DEA, HSI, the Federal Bureau of Investigation, and Internal Revenue Service Criminal Investigation, have also posted personnel overseas who collaborate with law enforcement officials from the host country to work on cases related to TBML. For example, according to HSI data, the agency has opened TBML investigations supported by its personnel at embassies in a number of countries, including Colombia, Mexico, the United Kingdom, the Netherlands, the Dominican Republic, Singapore, and Spain. U.S. law enforcement personnel have also set up U.S.-supported vetted units in partner countries. For example, DEA has established Sensitive Investigative Units in a number of countries, such as Colombia and Paraguay. DEA partners with these units to investigate and disrupt various aspects of drug trafficking organizations' operations, including money laundering activities.

HSI Has Shared and Analyzed Data with Partner TTUs, but the TTU Program Faces Various Challenges that Limit Results

Over time, HSI's work with partner TTUs has helped in the successful disruption of certain TBML schemes.
For example, HSI reported that the Panamanian TTU provided analysis to support an investigation that successfully disrupted an illicit tobacco smuggling ring involving several Panamanian companies. The investigation led to four arrests and the seizure of over $10 million in cigarettes. In another case, HSI reported that HSI and the Peruvian TTU worked together to support an investigation that disrupted a TBML scheme involving the import of illegally mined gold into the United States from Peru.

While HSI and other U.S. government officials have stated that the TTUs in some countries have played an important role in certain investigations, the TTU program has faced challenges that have limited its results in disrupting TBML schemes, including the following:

Insufficient resources or support for the partner TTUs. In recent years, the U.S. government has not provided any funding directly to partner TTUs to support their activities, according to HSI officials. These officials noted that while HSI does not obligate funds to directly support partner TTUs, the agency will fund the travel expenses for its personnel to travel to a foreign country to provide training to a partner TTU. Previously, State had provided a limited amount of funding to certain partner TTUs, including for training and the purchase of computer software, according to State officials. However, State officials reported that State has not provided any funding for partner TTUs since fiscal year 2013 because insufficient evidence of the program's effectiveness and various programming obstacles have led the department to prioritize funding for other anti-money laundering and crime prevention programs over the TTU program. For example, State officials noted that limited support from some U.S. embassies and a lack of HSI staff posted at them negatively affected the TTU program at times. However, State officials noted that they are generally supportive of the TTU concept and would consider providing further funding for the program if HSI can demonstrate program results. HSI officials noted that they have not sought State funding for the TTU program in recent years but would be interested in discussing State's expectations regarding program results and pursuing State funding going forward. U.S. and partner country officials also noted that host governments have not always dedicated the necessary personnel and information technology resources to ensure the effective operation of the TTUs. For example, HSI officials stated that a lack of funding for partner TTUs has contributed to technology gaps between U.S. and partner country systems.

Slow expansion of the program and limited geographic range. Although HSI has established the goal of expanding the TTU program, expansion has slowed over the last few years, and the program operates mainly in Latin America, despite the range of countries around the world that face risks related to TBML. HSI officials stated they have had discussions with several additional countries about establishing TTUs but have not yet been able to finalize agreements with a number of these countries, resulting in only two new TTUs being set up over the last 3 years.

Delays in launching partner TTUs and lapses in their operation. The TTU program has experienced delays in launching TTUs after HSI and the partner governments have signed the MOUs. For example, HSI officials at Embassy Canberra noted that HSI signed the MOU with Australia to establish its TTU in 2012, but it did not become fully operational until 2017.
According to HSI officials, this delay was due to significant coordination challenges within the Australian government. Several TTUs have also experienced lapses in their operations. For example, the TTU in Argentina launched in 2006, but the two countries halted information sharing between 2011 and 2015. According to HSI officials, this halt in information sharing was because of U.S. concerns with corruption in the Argentinian government at that time.

Differences in objectives between HSI and partner TTUs. HSI officials noted that one limitation in the TTU program is that partner TTUs frequently focus on revenue collection issues and place less priority on disrupting TBML schemes than HSI does. For example, partner TTUs may seek to identify instances of customs fraud, which can reduce duties collected by customs agencies on imported goods, but they may not pursue the investigation further to disrupt the criminal organizations involved in the scheme.

Limited authorities and lack of interagency coordination in TTU partner countries. Partner TTUs generally operate within their countries' customs agencies, which frequently do not have their own law enforcement authorities, according to HSI and other U.S. officials. As a result, they must coordinate with law enforcement partners within their countries to be effective. However, HSI officials noted that such coordination does not always take place. For example, HSI officials in Mexico stated that the Mexico TTU has had limited effectiveness because of a lack of sufficient cooperation between Mexican customs and law enforcement officials. Similarly, in Brazil, HSI officials noted that information sharing with that country's TTU has been delayed because the TTU lacks ready access to trade data and must purchase it from a different Brazilian government agency.

Data sharing and connectivity. HSI and partner government officials have also noted issues with uploading partner trade data into DARTTS and ensuring these data are in a compatible format. For example, an HSI official in the United Kingdom described a delay of several months in uploading data from the United Kingdom into DARTTS because of data formatting issues. In addition, U.S. officials at Embassy Canberra noted that the Australian TTU has frequently experienced connectivity problems with DARTTS that have challenged the TTU's ability to upload its data to the system. In addition, HSI and partner TTU officials noted that there are certain limitations in DARTTS, including difficulties in working with cross-border data, that reduce its effectiveness as a tool for HSI and partner TTUs to use in identifying potential cases of TBML. DHS noted that details on these limitations are sensitive, and we did not include the specifics in this report.

HSI Has Not Taken Key Management Steps Related to the TTU Program

Although the TTU program has faced a number of challenges, HSI has not taken key management steps that could help guide its efforts, including developing a strategy and a performance monitoring framework. Because the TTU program involves partnerships between HSI and foreign governments, HSI has varying levels of ability to address these challenges through independent action. However, by developing a strategy and a performance monitoring framework, HSI could assess how best to plan for and address these challenges in order to maximize the program's effectiveness. HSI officials stated that they have not produced any planning or strategy documents specifically for the TTU program.
HSI has produced a strategic plan for fiscal years 2016 through 2020 that references the TTU program. For example, the strategy notes that HSI plans to "continue to provide operational, analytical, technical, and targeting support on trade-based money laundering and illicit funding investigations being conducted by HSI field offices and partner TTUs." However, the strategy includes only limited references to the TTU program's operations. According to HSI officials, for the TTU program specifically, they conduct only informal, periodic planning, such as identifying countries that they would like to prioritize for inclusion in the TTU program.

DHS Directive 101-01 establishes requirements for planning, budgeting, programming, and executing for the department and its component agencies. Among other things, the directive requires agency heads, including the Director of ICE, to establish planning processes and methods to oversee program management and risk management activities for the programs and operations under their purview. HSI officials noted that in addition to the HSI strategic plan, they have used some documents, such as FATF's 2008 report on best practices for combating TBML, to guide the TTU program, but have not prioritized the development of a strategy for the TTU program because of resource constraints. Without such a strategy, however, HSI lacks an important tool to guide its operations, including how best to work with its partner TTUs to identify potential cases of TBML, prioritize potential cases for further investigation, and successfully conduct these investigations. In addition, without a strategy, HSI cannot effectively plan how to grow the TTU program, where appropriate, and establish TTUs in additional priority countries. Although developing a strategy would require an investment of resources, a strategy would help ensure HSI is using its limited resources effectively to achieve the TTU program's goals over the long term.

According to HSI officials, the HSI TTU tracks some information on the results of domestic investigations, including the number of TTU-related cases initiated and arrests made, but it does not have a performance monitoring framework, with specified metrics, that allows it to track the results of its work with partner TTUs. HSI officials also stated they have not conducted any evaluations of the factors that increase or decrease the TTUs' effectiveness. As part of its requirements on planning, programming, budgeting, and execution, DHS Directive 101-01 states that, among other things, the objective of the execution phase is to account for cost and performance to determine if value has been delivered to stakeholders. The directive also notes that annual analysis and reporting of financial expenditures and performance measure results are key deliverables during the execution phase. HSI officials acknowledged that a performance monitoring framework would be beneficial, but they have prioritized other operational issues because of limited resources. In addition, they noted that designing a performance monitoring framework that would allow HSI to measure and evaluate the results achieved through its work with partner TTUs would be challenging because, among other things, enforcement efforts of partner TTUs are not within their control and they do not have access to all partner country information.
According to HSI officials, they instead rely on measures such as the number of trade records uploaded into DARTTS and the number of foreign users of DARTTS, among other things. However, without a performance monitoring framework for the TTU program, HSI lacks important information on what successes the program has achieved and how to replicate them with other partner TTUs. In addition, HSI lacks key information on areas where the program is not achieving its intended results and what adjustments to make in response. As with the development of a strategy, working to establish a performance monitoring framework would entail an investment of resources, but once completed it could help HSI in assessing how to maximize the impact of its resource investments in the TTU program. In addition, the performance monitoring framework could help demonstrate results to other stakeholders, such as State, that may wish to consider providing support to the TTUs in partner countries.

U.S. Agencies Have Worked With International Bodies to Develop International Anti-Money Laundering Standards, Share Information, and Strengthen Countries' Ability to Combat TBML

The U.S. government has worked with FATF, the Egmont Group, UNODC, and the WCO to combat TBML. Among other things, the U.S. government has worked with these international bodies to develop anti-money laundering standards, share information regarding TBML methods and specific cases, and provide training and technical assistance to strengthen the ability of countries to combat TBML.

FATF

As a member of FATF, the U.S. government has supported the organization's efforts to develop internationally recognized standards for combating money laundering, terrorist financing, and the financing of the proliferation of weapons of mass destruction. FATF's standards, updated in 2019, include 40 recommendations. According to FATF, it designed these recommendations to set out the critical measures that countries should establish to:

identify the risks, and develop policies and domestic coordination;
pursue money laundering, terrorist financing, and the financing of proliferation;
apply preventive measures for the financial sector and other designated sectors;
establish powers and responsibilities for the competent authorities (such as investigative, law enforcement, and supervisory authorities) and other institutional measures;
enhance the transparency and availability of beneficial ownership information of legal persons and arrangements; and
facilitate international cooperation.

To date, FATF's standards do not include any specific reference to TBML. However, Treasury officials from the U.S. government's delegation to FATF stated that the standards are designed to provide a robust framework to help competent authorities prevent, detect, and mitigate the misuse of global trade and combat all forms of money laundering, including TBML. For example, the officials noted that FATF's third recommendation identifies the need for countries to criminalize money laundering, which would include TBML activity. The U.S. government also works with FATF to conduct mutual evaluations of member countries. FATF designed these evaluations, which are periodic peer reviews of each country, to provide a detailed assessment of a country's technical compliance with the FATF standards and the effectiveness of its AML/CFT systems. These evaluations may at times highlight issues related to TBML in countries.
For example, FATF’s 2014 mutual evaluation of Spain found a significant number of cases involving TBML, particularly those associated with value added tax or other tax fraud schemes. The U.S. government has also supported FATF’s development of several reports on TBML, including a 2006 report on types of TBML schemes and a 2008 report on best practices for detecting TBML. More recently, FATF published various other reports addressing issues relevant to combating TBML, including the 2010 Money Laundering Vulnerabilities of Free Trade Zones, the 2015 Money Laundering/Terrorist Financing Risks and Vulnerabilities Associated with Gold, and the 2018 Professional Money Laundering. These reports provide a range of guidance to countries on how to detect and combat TBML. The Egmont Group FinCEN has worked with its fellow FIUs in the Egmont Group to exchange tactical, operational, and strategic information to assist in efforts to combat money laundering, including TBML. As part of its work with Egmont Group partners, FinCEN shares information on particular cases in response to requests from fellow FIUs, proactively shares relevant information with other FIUs, and requests information from FIUs. According to FinCEN officials, Egmont Group membership is critical to information sharing in support of FinCEN analysis and U.S. law enforcement cases because it provides assurances that members have the appropriate policies and procedures in place to respond to and protect sensitive information. FinCEN and its FIU counterparts follow the Egmont Group’s Principles for Information Exchange Between Financial Intelligence Units, in addition to the law of each jurisdiction, to foster cooperation while sharing information securely. Generally, Egmont Group members use a dedicated computer system that the organization has developed, the Egmont Secure Web, to share information securely. FinCEN officials stated that they respond to about 1,000 information requests a year from other Egmont Group members. For example, at the request of a foreign FIU, FinCEN conducted research on an import/export company suspected of involvement in TBML, summarizing relevant SARs and identifying other relevant information on the subjects. FinCEN’s assessment determined the potential use of a TBML scheme and use of shell companies to obfuscate the flow of funds. FinCEN has also supported the Egmont Group’s efforts to provide training to member FIUs on issues related to money laundering and terrorism financing. For example, FinCEN has helped develop and deliver Egmont Group-sponsored training to FIU analysts on how to understand complex financial data. However, Treasury officials stated that the Egmont Group has not provided any TBML-specific training. Although the Egmont Group has not sponsored TBML-specific training for FIUs, FinCEN officials noted that FinCEN has hosted officials from several partner FIUs at the TBML conferences it held in 2018 and 2019 and has provided its own TBML- related training to partner FIUs. For example, in October 2019, FinCEN provided TBML-related training to Mexico’s FIU. Finally, FinCEN has supported the Egmont Group’s development of relevant guidance documents. For example, the Egmont Group developed, in partnership with FATF, a 2013 report called Money Laundering and Terrorist Financing through Trade in Diamonds. 
According to the report, the two bodies decided to undertake the research because (1) they had never conducted in-depth research on the diamond trade and associated money laundering and terrorist financing risks and (2) a number of participants in the bodies had noted indications that the diamond trade was being exploited for money laundering and terrorist financing purposes. More recently, in July 2018, the Egmont Group produced an additional report with FATF, Concealment of Beneficial Ownership, which also discussed certain TBML schemes.

UNODC

The U.S. government also partners with UNODC in its work to combat illicit drugs and international crime, including TBML. Among other things, State has provided funding to UNODC's Global Program against Money Laundering, Proceeds of Crime and the Financing of Terrorism. Through the program, UNODC has provided training and technical assistance to a range of member states throughout the world. For example, as part of the program, UNODC places AML experts in countries for up to a year to serve as mentors. These mentors provide a range of support, such as helping countries establish functioning FIUs. UNODC also conducts shorter-term workshops and training sessions, such as mock trial training for law enforcement officers, prosecutors, and judges to enhance their ability to investigate and prosecute money laundering cases. In addition, according to UNODC, under the program it has developed model legislation that United Nations members can use in setting AML/CFT legal regimes in their countries that are consistent with FATF standards.

The U.S. government has also supported certain UNODC programs that have specifically addressed issues related to TBML. According to a UNODC official in Colombia, UNODC has worked with State INL and HSI to provide training for governments in the region to increase expertise on TBML. The official said that UNODC is prioritizing TBML-specific trainings, particularly to build TBML knowledge among new prosecutors. In addition, UNODC headquarters officials noted that State INL has supported the development of a program on TBML that UNODC is planning in the Caribbean.

WCO

The U.S. government also works with the WCO to develop and strengthen the role of customs administrations in tackling TBML. Among other things, CBP has supported WCO's efforts to develop enforcement tools, guidance and best practices, and training for member countries. For example, CBP has supported the WCO's development of its Cargo Targeting System. The system, which is available to all WCO members, is designed to assist customs agencies in conducting automated risk assessments of import, export, and transshipment cargo in order to identify high-risk shipments that warrant further investigation. With WCO support, several customs agencies also developed the "Compendium of Customs Operational Practices for Enforcement and Seizures," a tool that provides practical examples for improving enforcement and seizure practices. With CBP support, WCO has produced a number of guidance and best practices documents that can support efforts to combat TBML. For example, in a 2018 report, the WCO described a number of best practices that customs administrations could consider for combating illicit financial flows via trade misinvoicing. In addition, in 2019, the WCO and the Egmont Group developed a Customs-FIU Cooperation Handbook that provides their members guidance and best practices for enhancing global collaboration efforts between customs agencies and FIUs.
Finally, the WCO has provided training for its member countries to deter illicit activities and combat TBML. For example, through the WCO, HSI special agents with AML and TBML expertise have conducted workshops to assist WCO member countries in their operational efforts. The WCO also organized a joint workshop with the Organization for Economic Co-operation and Development in 2019 that was designed to raise awareness among customs agencies, FIUs, and law enforcement agencies about TBML related to gems and precious metals. In 2019, the WCO also agreed to launch a two-year counter-TBML effort entitled "Project TENTACLE," according to CBP officials. The project will include the delivery of TBML workshops to WCO members through 2021, as well as five operational customs activities that follow each workshop. This project will focus on the Asia/Pacific, Africa, and South America regions. State INL has provided funding for Project TENTACLE, in coordination with experts from ICE and CBP. WCO officials noted the lack of training that many customs administrations have on TBML and the need for regularized training on the subject.

Conclusions

TBML poses significant national security risks to the United States. Criminal and terrorist organizations use TBML schemes to disguise the origins of billions of dollars in funds generated by their illicit activities. Given the national security threat that TBML poses, it is crucial that the U.S. government develop an effective response to combat it. Because TBML is international in nature and frequently involves complex, difficult-to-detect schemes that cut across international borders, it is important that the U.S. government respond through domestic efforts and collaborate with partner countries and international bodies to address the problem. As the U.S. government's primary partnership program focused on combating TBML, the TTU program plays a key role in these efforts to collaborate with other countries. Although the TTU program has achieved some successes, it has also faced a number of challenges. However, HSI has not taken key management steps to address those challenges and to strengthen the TTU program. For example, HSI has not established a strategy for the TTU program. Because HSI does not have such a strategy, it lacks an important guide for its efforts to maximize the effectiveness of its existing TTU partnerships and to prioritize efforts to expand the program to other countries. HSI also does not have a performance monitoring framework that tracks the results of its work with partner TTUs. Without such a framework, HSI does not have a means of systematically tracking progress toward program goals and identifying areas that need adjustments to improve program results.

Recommendations for Executive Action

We are making two recommendations to DHS:

The Secretary of Homeland Security should direct the Director of ICE to develop a strategy for the TTU program to ensure that ICE has a plan to guide its efforts to effectively partner with existing TTUs, and to expand the program, where appropriate, into additional countries. (Recommendation 1)

The Secretary of Homeland Security should direct the Director of ICE to develop a performance monitoring framework for the TTU program that would enable the agency to systematically track program results and how effectively it is achieving the program's goals. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of the report to DHS, DOJ, State, and Treasury.
DHS, State, and Treasury provided technical comments, which we incorporated as appropriate. DOJ noted that it had no comments on the draft. DHS also provided written comments, which are reproduced in appendix III. In its comments, DHS stated it concurred with our recommendation that the Secretary of Homeland Security direct the Director of ICE to develop a strategy for the TTU program, but did not concur with our recommendation to develop a performance monitoring framework for the program.

In its response to our recommendation regarding a strategy for the TTU program, DHS noted that HSI has a strategic plan for fiscal years 2016 through 2020 that addresses the TTU program. However, it stated that the TTU program would develop, as a complement to the HSI strategic plan, a document that outlines emerging threats and challenges, as well as existing metrics that are used to track program results for the TTU.

In noting it did not concur with our recommendation to develop a performance monitoring framework for the TTU, DHS stated the TTU program already collects a number of statistics each fiscal year related to its program results and can use these statistics to demonstrate program results. DHS also stated that while the TTU program's primary mission is to establish partnerships and provide foreign law enforcement with information tools to facilitate the exchange of data between TTUs, HSI has limited ability to track the activities of partner TTUs and cannot dictate the enforcement actions partner countries take. In our report, we acknowledge that the HSI TTU tracks some information on the results of domestic investigations, as well as other information, such as the number of records in DARTTS. We also acknowledge that because the TTU program involves partnerships between HSI and foreign governments, HSI does not have the ability to independently control all aspects of the program's performance. However, we believe that further action by HSI to establish a performance monitoring framework is warranted for the following reasons. First, although HSI has noted examples of statistics it can use to measure the performance of the TTU program, it does not have a formally documented framework or process for measuring its performance or reporting performance results. Second, while the TTU program has identified a few indicators it uses in assessing performance, it has not established goals against which to measure its results, making it challenging to assess whether HSI is making progress toward achieving the program's goals. Third, even though HSI has some measures, such as the number of TTU-related cases it has initiated or arrests made, HSI officials acknowledged that the agency does not track information on what role the TTU actually played in these cases. As a result, HSI cannot establish the extent to which the TTU, rather than a different HSI office, has contributed to any of the measures. Fourth, although we recognize that HSI does not have the ability to dictate what actions partner TTUs will take and may not have access to all relevant partner country information, HSI does have opportunities to take further action to monitor the outputs of its work with partner TTUs. For example, HSI could work with partner TTUs to collect information more systematically on successful cases that they have initiated. HSI could also collect information on factors that reduced the ability of partner TTUs to successfully pursue cases.
Other U.S. agencies have conducted performance monitoring and evaluations of programs that rely on partnership and collaboration with foreign governments. We continue to believe in the need for a rigorous performance monitoring framework for the TTU program, a key U.S. government effort in combating TBML. We note that HSI could potentially integrate a performance monitoring framework into the strategy it plans to develop in response to our first recommendation. For example, DHS stated in its comments that HSI plans to document the metrics it will use to measure the TTU program's results in that strategy.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Acting Secretary of Homeland Security, the Secretary of State, the Secretary of the Treasury, and the Attorney General. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8612 or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) what the available evidence indicates about the types and extent of international trade-based money laundering (TBML) activities, (2) the practices international bodies, selected countries, and knowledgeable sources have recommended for detecting and combating TBML, and (3) the extent to which U.S. Immigration and Customs Enforcement has effectively implemented the TTU program and the steps the U.S. government has taken to collaborate with international partners to combat TBML. To address all three objectives, we analyzed relevant data and documentation from the Departments of Homeland Security (DHS), Justice (DOJ), State (State), and the Treasury (Treasury). For example, we reviewed U.S. government documents that discuss risks associated with TBML, including Treasury's 2015 and 2018 National Money Laundering Risk Assessment and 2015 and 2018 National Terrorist Financing Risk Assessment and the Drug Enforcement Administration's annual National Drug Threat Assessment. In addition, we reviewed U.S. government strategy documents that provide information on the extent and types of TBML, including Treasury's 2018 National Strategy for Combating Terrorist and Other Illicit Financing and State's annual International Narcotics Control Strategy Report (Volume II). We also analyzed other U.S. government reporting on TBML, including TBML-related advisories from Treasury's Financial Crimes Enforcement Network (FinCEN), selected cables from U.S. embassies describing TBML issues in their host countries, and summary analyses from Immigration and Customs Enforcement Homeland Security Investigations' (HSI) Trade Transparency Unit (TTU). Finally, we conducted interviews with officials from DHS, DOJ, State, and Treasury in Washington, D.C.

We also selected a nongeneralizable sample of six countries to study in greater depth. We conducted fieldwork in three of these countries: Colombia, Paraguay, and the United Kingdom. During our fieldwork in each country, we interviewed U.S. embassy officials from DHS, DOJ, State, and Treasury.
In each country, we also interviewed host country officials, including TTU, law enforcement, financial intelligence unit, and financial regulatory agency officials. In addition, in Paraguay, we traveled to Ciudad del Este to observe commercial activity and border operations on Paraguay's border with Brazil and Argentina. For the other three countries we selected—Australia, Mexico, and Singapore—we conducted work remotely. We interviewed, via telephone, U.S. embassy officials in Australia and Mexico, and obtained written responses from U.S. officials at Embassy Singapore. To select these six countries, we considered several criteria, including (1) the type and extent of TBML risk, (2) the types and level of U.S. collaboration with the country, (3) the presence of U.S. agencies that work on TBML in the country, (4) the extent to which the country had implemented recommended practices to identify and combat TBML (with a goal of covering a range of levels of adoption), and (5) the country's location (with a goal of covering a range of geographic regions). We also considered additional factors based on recommendations from knowledgeable sources, such as selecting countries with differing levels of capacity to respond to the TBML threat.

To determine what available evidence indicates about the types and extent of international TBML, we analyzed documentation from relevant international bodies, including the Egmont Group of Financial Intelligence Units (the Egmont Group), the Financial Action Task Force (FATF), the United Nations Office on Drugs and Crime (UNODC), and the World Customs Organization (WCO). For example, we reviewed these reports: FATF's 2006 Trade Based Money Laundering and 2008 Best Practices Paper on Trade Based Money Laundering; the Egmont Group's and FATF's 2013 Money Laundering and Terrorist Financing through Trade in Diamonds; UNODC's 2011 Estimating Illicit Financial Flows Resulting from Drug Trafficking and Other Transnational Organized Crimes; and WCO's 2018 Illicit Financial Flows via Trade Mis-invoicing. To gather further information regarding the types and extent of international TBML activities, we conducted 15 interviews, covering a nongeneralizable sample of individuals knowledgeable about TBML and efforts to combat it, including academic researchers, think tank officials, private sector representatives from trade organizations and individual companies, and former U.S. government officials. Throughout this report, we refer to these individuals as "knowledgeable sources." In selecting these knowledgeable sources, we conducted initial research to identify individuals or organizations that had conducted research related to TBML and prioritized those whose work was frequently cited by other sources. We also requested recommendations from U.S. agencies and the knowledgeable sources we spoke with regarding other individuals or organizations we should meet with during our work. In selecting these knowledgeable sources, we sought to choose people with different types of experiences studying and working on issues related to TBML to get a range of perspectives.

We also conducted a literature search for studies from peer-reviewed journals, conference papers, dissertations, government reports, industry articles, and think tank publications that sought to quantify the amount of TBML activity. We also asked for recommendations on relevant publications as part of our initial meetings with U.S. agencies and knowledgeable sources.
We examined summary-level information about each piece of literature and, from this review, identified articles that were germane to our report. A GAO economist then evaluated the methods used in the research, and a GAO methodologist performed a secondary review and confirmed the summarized research findings. We reviewed 10 studies published between January 2009 and July 2019 that were relevant to our research objective on what the available evidence indicates about the extent of international TBML activities. We also reviewed one additional article, published in 1999, which other articles frequently cited as a pioneering effort in measuring money laundering, and included it in our review.

To identify the practices international bodies, selected countries, and knowledgeable sources have recommended for detecting and combating TBML, we conducted a literature review to find relevant studies and other reports prepared by international bodies, industry groups, think tanks, academics, and foreign governments. We then analyzed these studies and reports to identify recommendations they made regarding practices for detecting and combating TBML. To gather further information regarding recommended practices for detecting and combating TBML and potential challenges in implementing such practices, we interviewed U.S. representatives of FATF and the Egmont Group, conducted interviews with UNODC officials, and obtained written responses to a set of questions from the WCO. We also spoke with U.S. embassy officials in five of the countries we selected for our nongeneralizable sample and obtained written responses from U.S. embassy officials in the sixth country. In addition, we spoke with host country officials in three of those countries. Finally, we spoke with selected knowledgeable sources. Through our work, we identified a range of recommended practices related to detecting and combating TBML. We grouped these recommended practices into five categories. We also identified examples of the steps that the U.S. government and other countries have taken to implement practices in each of these five categories.

To examine the extent to which U.S. Immigration and Customs Enforcement has effectively implemented the TTU program, we collected information on HSI's TTU program, including data on HSI's TTU partner countries, details on the TTU program's operations, and documentation on the data system HSI developed to support the TTU program—the Data Analysis and Research for Trade Transparency System (DARTTS). We also evaluated HSI's management of the TTU program by comparing the steps it had taken to establish a strategy and performance monitoring framework to requirements that DHS has established related to planning, programming, budgeting, and execution. To identify the steps HSI had taken, we interviewed HSI officials and reviewed relevant documentation on the TTU program. To examine the steps U.S. agencies have taken to collaborate with international partners to combat TBML, we also obtained and analyzed foreign assistance data, for fiscal years 2014 through 2018, from State on financial crimes and money laundering assistance programs it funded and from Treasury's Office of Technical Assistance (OTA) on economic crimes assistance programs it funded. To assess the reliability of these data, we reviewed available documentation and interviewed knowledgeable U.S. officials.
We determined that the State and Treasury OTA assistance data were sufficiently reliable for our purposes of presenting summary information on funding for assistance programs. We also reviewed other relevant U.S. government documentation describing training, technical assistance, or other support that U.S. agencies provided to partner countries to assist them in combating TBML or money laundering more broadly. For example, we reviewed selected performance reports for State anti-money laundering programs and selected end-of-project reports for Treasury OTA economic crimes programs. To gather information on the U.S. government's collaboration with international bodies, we reviewed documentation from the Egmont Group, FATF, UNODC, and WCO describing the key activities of the bodies. Finally, as part of our work for this objective, to learn more about U.S. agencies' work with partner countries and international bodies to combat TBML, we also interviewed U.S. officials in Washington, D.C., and interviewed U.S. embassy and host government officials in partner countries.

We conducted this performance audit from January 2019 to April 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Results of Literature Review on Trade-Based Money Laundering

To determine the extent of trade-based money laundering (TBML) activities, we conducted a literature search for studies that sought to quantify potential illicit financial flows, including TBML. We considered existing studies from peer-reviewed journals, conference papers, dissertations, government reports, industry articles, and think tank publications identified through searches the GAO librarian conducted of various databases, such as EconLit, Social SciSearch, and Scopus. We also asked for recommendations on relevant publications as part of our initial meetings with U.S. agencies and knowledgeable sources. After conducting the searches and relying on recommendations, we started the review with 82 studies. To assess the methodological quality of the studies, we relied on generally accepted social science standards. We examined summary-level information about each piece of literature and, from this review, identified 14 articles that sought to quantify potential illicit financial flows, including TBML. A GAO economist evaluated the methods used in the research, eliminated some research if the methods were not appropriate or not rigorous, and then summarized the research findings. In addition, a GAO methodologist performed a secondary review and confirmed our reported analysis of the findings. We further eliminated four studies and eventually identified 10 studies published between 2009 and 2019 that were relevant to our research objective on what the available evidence indicates about the extent of international TBML activities. We also identified one additional article, published in 1999, which other articles frequently cited as a pioneering method of measuring money laundering, and included it in our review. See table 3 below for the list of studies included in our analysis.
We found that estimating the extent of money laundering is a challenging task, given that criminals seek to hide their illegal activities. Still, economic and statistical models have been developed that attempt to quantify the extent of such activities using various published datasets. However, none of the studies we identified in our literature review sought to develop estimates of TBML specifically, and all the studies we reviewed capture activities that are generally broader than TBML, including tax avoidance, trade price manipulation, or trade misinvoicing, which demonstrates the difficulty in estimating the magnitude of TBML activity. In addition, according to the literature we reviewed, the studies we identified all had certain methodological limitations. We found that studies seeking to quantify potential money laundering activities, including TBML, have typically relied on one of four methods: (1) the Walker gravity model, (2) unit price analysis, (3) trade mirror analysis, or (4) a theoretical model.

Walker Gravity Model

One of the first researchers who attempted to measure money laundering was John Walker. In a paper published in 1999, he used what became known as the Walker gravity model to estimate the amount of money laundering globally. The gravity model states that the amount of trade from place A to place B depends on the size of the population in A, the "attractiveness" of B to people based in A, and the distance between the two places. The Walker model based the "attractiveness" of a place on four assumptions: (1) foreign countries with a tolerant attitude towards money laundering will attract a greater proportion of the funds than more vigilant countries; (2) high levels of corruption or conflict will deter money launderers, because of the risks of losing their funds; (3) countries with high levels of gross national product per capita will be preferred by money launderers, since it would be easier to "hide" their transactions; and (4) other things being equal, geographic distance and linguistic or cultural differences work as deterrents to money launderers.

According to the literature we reviewed, the Walker gravity model has several limitations. First, because the flows of money laundering are unobservable, it is not possible to assess the quality of the formula. Second, although some factors in the attractiveness indicators are plausible, they are still arbitrary. Third, the researcher acknowledged that these figures represent only an interim set of results to show the types of output that would be derived from a fully developed model. These estimates are not his best and final estimates of money laundering around the world. Because of these limitations, and considering the estimates are based on data that date to 1995, we did not present the estimates in the report. However, considering the importance of the Walker gravity model in the literature on measuring money laundering, we discussed this model in the report to provide context on methods used to quantify potential money laundering activities.
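To illustrate the structure of this approach, a stylized statement of the gravity relationship is shown below. The functional form and symbols are our own simplified rendering of the approach as described above, not a reproduction of Walker's published model.

```latex
% Stylized Walker-type gravity allocation (illustrative notation):
%   M_i  = money generated for laundering in country i
%   A_j  = "attractiveness" score of destination j (reflecting AML tolerance,
%          corruption or conflict, GNP per capita, and similar factors)
%   D_ij = distance between i and j
\[
  F_{ij} \;=\; M_i \cdot \frac{A_j / D_{ij}^{2}}{\sum_{k} A_k / D_{ik}^{2}}
\]
% F_ij: estimated laundered funds flowing from i to destination j, allocated
% in proportion to attractiveness and discounted by squared distance.
```

Under this kind of formulation, the limitations noted above are visible directly: the attractiveness score A_j must be constructed from judgmental weights, and the resulting flows F_ij cannot be validated against observed laundering data.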
Unit Price Analysis

A researcher used unit price analysis to analyze U.S. trade data to quantify the magnitude of suspicious trade transactions. The database contains information at the transaction level that is reported to the U.S. Census Bureau from Shipper's Export Declarations and U.S. Customs Service Entry Summary forms. The model follows the Internal Revenue Service's definition of suspicious prices, which, according to the researcher, is defined as prices that are outside of the upper- or lower-quartile price range for each commodity in each country. He then aggregated the total dollar amount to come up with an estimate of the amount of suspicious trade. The researcher found that in 2018, total money moved out of the United States through under-valued exports and over-valued imports was approximately $278 billion. Total money moved into the United States through over-valued exports and under-valued imports was approximately $435 billion.

According to the literature we reviewed and information we received from the Census Bureau, the unit price analysis approach has several limitations. First, the Census Bureau edits raw trade data received from Customs and Border Protection by automatically correcting unit prices that fall outside of its price parameters, which it establishes using industry analysis, input from public and private entities, and trend data. Of the total amount of export and import records in a specific month, roughly 18 percent to 22 percent contain some type of editing, according to the Census Bureau. The edited data, with some extreme unit prices (those that fall outside of price parameters set by the Census Bureau) already "corrected," create issues for unit price analysis, which relies on identification of extreme unit prices. Second, the use of the lower or upper quartile as a price filter is somewhat arbitrary. For example, another study noted a fundamental weakness is that unit price analysis depends on the existence of a benchmark against which "abnormality" can be assessed. A lower benchmark would, in most product categories, produce more prices flagged as suspicious. Moreover, estimates from unit price analysis also include other types of illicit activities in addition to TBML, such as income tax avoidance or evasion, among others. Therefore, this measurement of suspicious trade is generally broader than that of TBML. In addition, because of their focus on identifying suspicious prices, these estimates exclude types of TBML that may not use over- or under-invoicing techniques, such as the Black Market Peso Exchange.
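A minimal sketch of the quartile price filter described above follows. The file and column names are hypothetical, and the published research applies separate filters for under-valued exports and over-valued imports rather than a single combined flag.

```python
import pandas as pd

# Minimal sketch of interquartile price filtering (file and column names
# are hypothetical, for illustration only).
trades = pd.read_csv("trade_transactions.csv")  # one row per import/export record
trades["unit_price"] = trades["value_usd"] / trades["quantity"]

# Upper- and lower-quartile unit prices for each commodity-country pair.
quartiles = (trades.groupby(["commodity", "country"])["unit_price"]
             .quantile([0.25, 0.75])
             .unstack()
             .rename(columns={0.25: "p25", 0.75: "p75"}))
trades = trades.join(quartiles, on=["commodity", "country"])

# Flag prices outside the quartile range and total their dollar value.
flagged = trades[(trades["unit_price"] < trades["p25"]) |
                 (trades["unit_price"] > trades["p75"])]
print(f"Suspicious trade value: ${flagged['value_usd'].sum():,.0f}")
```

As the limitations above suggest, the output of such a filter is sensitive to the chosen benchmark: replacing the quartiles with, say, the 5th and 95th percentiles would flag far less trade as suspicious.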
Trade Mirror Analysis

The third approach, adopted by Global Financial Integrity and several other scholars, uses trade mirror analysis to estimate the amount of trade misinvoicing. This approach compares what country A reports as an export to country B with what B reports as an import from A (or vice versa). The calculation assumes that the price and volume declared to both countries' authorities would match after accounting for insurance and freight costs, and that any further difference between the trades reported by the countries indicates trade misinvoicing. In its latest report, Global Financial Integrity measured trade misinvoicing using two datasets. First, Global Financial Integrity relied on the International Monetary Fund's (IMF) Direction of Trade Statistics and selected bilateral trade reports for 148 developing countries trading with 36 advanced economies from 2006 to 2015. Global Financial Integrity calculated potential trade misinvoicing as the import and export gaps, netted of the insurance and freight cost differentials. Second, Global Financial Integrity used United Nations Comtrade data to calculate trade gaps, where Comtrade gaps are calculated for each of the Harmonized System six-digit commodity classes available. Global Financial Integrity found that over the 10-year period of this study, potential trade misinvoicing amounted to between 19 and 24 percent of developing country trade on average. For 2015, it estimated that potential trade misinvoicing to and from these 148 developing countries was between $0.9 trillion and $1.7 trillion.

According to the literature we reviewed, the trade mirror analysis approach also has several limitations. First, alternative, legitimate reasons for import and export gaps may exist. For example, a researcher noted that "price volatility, transit and merchant trade, and the use of bonded warehouses can result in large trade data discrepancies arising from legitimate trade." Another researcher also noted that major differences in customs import valuation methodologies and customs administration fees could contribute to trade data discrepancies. Moreover, accurate records may not always exist, especially in developing economies. Second, according to one researcher, the IMF and the United Nations, whose data these studies draw on, warn that the statistics cannot be reliably used in this way. The IMF says, "we caution against attempting to measure [trade misinvoicing] by using discrepancies in macroeconomic datasets…. [O]fficial estimates of trade misinvoicing cannot be derived by transforming trade data from the IMF Trade Statistics and/or United Nations Comtrade, either by individual country or in aggregate." Moreover, Global Financial Integrity defines trade misinvoicing as the fraudulent manipulation of the price, quantity, or quality of a good or service to shift money across international borders. Therefore, this measurement of trade misinvoicing is generally broader than that of TBML. However, certain types of TBML schemes are likely not included in the estimate of trade misinvoicing. For example, Black Market Peso Exchange schemes are likely not included because they do not require falsification of the price, quantity, or quality of a good or service.

Another study sought to account for various factors that may lead to simple import-export discrepancies. The analysis focuses on under-reporting of Italian exports and over-reporting of Italian imports. The authors used a linear mixed model, in which the dependent variable is the discrepancy in mirror statistics. The authors adopted a "residual approach," in which the model controls for the main legal determinants of mirror statistics gaps, and the estimated residuals are proxy measures of the illegal component of such discrepancies. Using this approach, the authors were able to calculate irregular trade flows at the country-sector level and rank countries and sectors by their risk levels.
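The mirror calculation itself is mechanically simple, as the sketch below shows with hypothetical data; the 6 percent insurance-and-freight factor is a common simplifying assumption, not a value taken from the studies discussed above.

```python
import pandas as pd

# Minimal sketch of a mirror-statistics gap (data and factor hypothetical).
# Exporter-reported values are free-on-board (FOB); importer-reported values
# include cost, insurance, and freight (CIF).
exports = pd.DataFrame({"exporter": ["A"], "importer": ["B"],
                        "commodity": ["gold"], "fob_value": [100.0]})
imports = pd.DataFrame({"exporter": ["A"], "importer": ["B"],
                        "commodity": ["gold"], "cif_value": [130.0]})

merged = exports.merge(imports, on=["exporter", "importer", "commodity"])
CIF_FOB = 1.06  # commonly assumed insurance-and-freight markup of about 6 percent

# Residual gap after netting out the assumed freight and insurance costs;
# a large residual is treated as potential misinvoicing, though legitimate
# reporting differences can also produce gaps.
merged["gap"] = merged["cif_value"] - CIF_FOB * merged["fob_value"]
print(merged[["exporter", "importer", "commodity", "gap"]])
```

The limitations above follow directly from this structure: everything that makes the two declarations differ for legitimate reasons (valuation methods, transit trade, recording errors) ends up in the same residual as misinvoicing.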
Theoretical Model

This approach uses economic theory to determine how much launderers would launder if they acted in an economically rational manner. One study developed a theoretical model for estimating money laundering in the United States and the 15 countries that were in the European Union at the time. According to a researcher, the model assumes that "agents have the option to work partly in the legal economy and partly in the illegal economy. They face transaction costs in the legal sector and costs of being detected in the illegal sector. Two types of firms produce with two different technologies a legal good and an illegal good. The government sets fines, can influence the probability of detection, and can influence the liquidity of the economy. There is a liquidity constraint. If households want more liquid funds, they must engage in the illegal sector. The 'optimal' money laundered depends on the labor services allocated to the legal and illegal sector and on the prices and on the quantities of both goods." The model uses parameters for the U.S. economy and for the European Union macro area and creates simulations to generate equilibrium allocations for money laundering.

According to one study, this model has the advantage of having a solid micro-foundation, which helps to identify rational laundering behavior. However, the model is highly theoretical and has various unrealistic assumptions. For example, according to the model, without a liquidity constraint in the economy, there would be no money laundering. Moreover, one of the parameters used in the model—the probability of being detected—is calibrated using data for the Italian economy from 1998 through 2000. Given the limitations discussed above, and because the data date to 1998, we did not present the estimates in the report. However, considering that the theoretical model is one of the methods frequently discussed in the literature on measuring money laundering, we discussed this model in the report to provide context on methods used to quantify potential money laundering activities.

Appendix III: Comments from the Department of Homeland Security

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Juan Gobel (Assistant Director), Ming Chen (Assistant Director), Ryan Vaughan (Analyst-in-Charge), Joyce Y. Kang, Pamela Davidson, Leia Dickerson, Neil Doherty, Toni Gillich, Jeff Harner, Georgette Hagans, Grace Lui, Dan Luo, and Aldo Salerno made key contributions to this report.
Why GAO Did This Study

TBML involves the exploitation of international trade transactions to transfer value and obscure the origins of illicit funds. Various observers have noted that although TBML is a common form of international money laundering, it is one of the most difficult to detect due to the complexities of trade transactions and the sheer volume of international trade, among other things. This report examines (1) what the available evidence indicates about the types and extent of international TBML activities, (2) the practices international bodies, selected countries, and knowledgeable sources have recommended for detecting and combating TBML, and (3) the extent to which ICE has effectively implemented the TTU program and steps the U.S. government has taken to collaborate with international partners to combat TBML. GAO analyzed U.S. agency and international body data and documentation, conducted a literature review, and interviewed U.S. officials and selected knowledgeable sources.

What GAO Found

Different types of criminal and terrorist organizations use trade-based money laundering (TBML) to disguise the origins of their illicit proceeds and fund their operations. TBML schemes can rely on misrepresenting the price, quantity, or type of goods in trade transactions, but other methods are also used. For example, some drug trafficking organizations from Latin America have used a type of TBML scheme known as the Black Market Peso Exchange (BMPE) to launder funds. BMPE schemes involve merchants who—wittingly or not—accept payment in illicitly derived funds, often from third parties to a trade transaction, for exports of goods. In carrying out TBML schemes, criminal and terrorist organizations use various goods, including precious metals and automobiles (see fig.). U.S. officials and other sources have identified a number of countries as being at particular risk for TBML schemes. Available evidence indicates that the amount of TBML occurring globally is likely substantial. However, specific estimates of the amount of TBML occurring around the world are not available.

Officials and reporting from relevant international bodies and selected partner countries, as well as knowledgeable sources, recommended various practices for countries to consider to combat TBML, which GAO grouped into five categories: (1) partnerships between governments and the private sector, (2) training, (3) sharing information through domestic interagency collaboration, (4) international cooperation, and (5) further research on challenges to combating TBML.

The U.S. government's key international effort to counter TBML is the Trade Transparency Unit (TTU) program under the Department of Homeland Security's (DHS) Immigration and Customs Enforcement (ICE). ICE set up TTUs in 17 partner countries with the goal of exchanging and analyzing trade data to identify potential cases of TBML. While TTUs have played a role in some TBML investigations, the TTU program has experienced various challenges, including lapses in information sharing between ICE and the partner TTUs, differing priorities between ICE and partner TTUs in pursuing TBML investigations, and limitations in the data system that ICE and the TTUs use. However, ICE has not developed a strategy to increase the effectiveness of the TTU program or a performance monitoring framework to assess the results of its work with partner TTUs.
As a result, ICE does not have a clear guide on how best to operate the TTU program and cannot make management decisions based on program results. In addition to the TTU program, the U.S. government collaborates with partner countries and international bodies through a range of other activities, such as developing international anti-money laundering standards, providing training and technical assistance, establishing information-sharing methods, and providing ongoing law enforcement cooperation.

What GAO Recommends

GAO recommends that DHS develop (1) a strategy to maximize TTU program effectiveness and (2) a performance monitoring framework for the TTU program. DHS concurred with the first recommendation but did not concur with the second, citing data it already collects and challenges it faces. GAO continues to believe the recommendation is valid, as discussed in the report.
Background

In 2004, FEMA initiated the IPAWS program to integrate EAS and other public-alerting systems into a larger, more comprehensive public-alerting system. As shown in figure 1, IPAWS serves as a centralized gateway to deliver alerts to the public. After an alerting authority creates and sends an alert to IPAWS, the system then routes the alert to the public using one or more of the following pathways (an illustrative alert message format appears after this list):

Emergency Alert System. Allows authorized federal, state, territorial, tribal, and local government agencies to use EAS media platforms—including radio and television—to send alerts. IPAWS also allows the U.S. President to activate EAS to communicate to the public through all EAS media platforms during a national emergency.

Wireless Emergency Alerts. Allows authorized federal, state, territorial, tribal, and local government agencies to send text-like messages to mobile devices in geographically selected areas as one-way cellular broadcasts. Various factors affect whether a WEA message will be received on a mobile device, such as whether the device is WEA-capable and within range of a cell tower where a participating wireless carrier provides WEA services to its customers. According to CTIA, a wireless industry association, more than 100 nationwide and regional wireless carriers participate and have the capability to provide WEA messages to 99 percent of American wireless subscribers.

IPAWS alert feed for internet services. Allows internet companies authorized by FEMA—such as Google, Facebook, and The Weather Channel—to retrieve IPAWS alerts and distribute them to social media, websites, applications, and subscription services.
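Alerts exchanged through the IPAWS gateway are formatted using the OASIS Common Alerting Protocol (CAP). The sketch below builds a trimmed, illustrative CAP 1.2 message of the kind the gateway routes to these pathways; all identifiers and values are hypothetical, and a real IPAWS submission carries additional required elements such as digital signatures.

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative CAP 1.2 alert (all identifiers and values hypothetical).
CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"
ET.register_namespace("", CAP_NS)

def cap(tag: str) -> str:
    """Qualify a tag name with the CAP 1.2 namespace."""
    return f"{{{CAP_NS}}}{tag}"

alert = ET.Element(cap("alert"))
for tag, text in [("identifier", "EXAMPLE-2020-0001"),
                  ("sender", "alerts@example.gov"),
                  ("sent", "2020-01-01T12:00:00-05:00"),
                  ("status", "Actual"),
                  ("msgType", "Alert"),
                  ("scope", "Public")]:
    ET.SubElement(alert, cap(tag)).text = text

info = ET.SubElement(alert, cap("info"))
for tag, text in [("category", "Met"),
                  ("event", "Flash Flood Warning"),
                  ("urgency", "Immediate"),
                  ("severity", "Severe"),
                  ("certainty", "Observed"),
                  ("headline", "Flash flood warning for Example County")]:
    ET.SubElement(info, cap(tag)).text = text

# The area block is what allows alerts to be targeted geographically.
area = ET.SubElement(info, cap("area"))
ET.SubElement(area, cap("areaDesc")).text = "Example County"

print(ET.tostring(alert, encoding="unicode"))
```

Because one CAP message can feed every pathway, an alerting authority authors the alert once and IPAWS handles the pathway-specific rendering, such as the shortened text shown on WEA-capable mobile devices.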
Government agencies and industry organizations play different roles in providing, protecting, and leveraging the nation's emergency alerting capability.

FEMA. FEMA is responsible for operating, maintaining, and administering access to IPAWS, including managing the application process. As discussed earlier, public safety agencies that wish to use IPAWS must apply to FEMA to become approved alerting authorities. FEMA, in consultation and coordination with FCC, must carry out various actions to modernize and implement IPAWS. For example, FEMA must ensure IPAWS can send alerts to a specific geographic location and to multiple communications systems and technologies, educate government users of IPAWS and provide training opportunities to them, and conduct nationwide tests of IPAWS, among other things. Legislation was enacted that expands FEMA's responsibilities for IPAWS.

FCC. FCC creates the rules for EAS and WEA, the two primary alerting pathways authorities use to send public alerts through IPAWS. FCC establishes the technical standards, procedures, and testing protocols for EAS participants. FCC also manages an online system used to collect and analyze results of nationwide EAS tests. FCC establishes technical requirements participating wireless carriers must follow for delivering WEA messages to WEA-capable mobile devices.

Federal alerting authorities. Authorized federal alerting authorities may create alerts and use IPAWS to send alerts to the public. For example, the National Weather Service (NWS), within NOAA, uses software NWS developed to issue WEAs for severe weather risks such as flash floods and tornadoes. USGS intends to send earthquake-related alerts through IPAWS but, as of September 2019, had yet to send such an alert through IPAWS. USGS has partnered with Washington, Oregon, and California to test and implement a West Coast earthquake early warning system called "ShakeAlert" that is intended to send WEA messages to mobile devices several seconds after the initiation of an earthquake.

State, territorial, tribal, and local alerting authorities. According to FEMA policy, state, territorial, tribal, and local government agencies first complete FEMA's application process to gain access to IPAWS and obtain the proper authorization to issue alerts for specific geographic jurisdictions. As discussed earlier, government agencies that issue alerts through IPAWS can include emergency management or law enforcement agencies at the state, county, or city government level. Non-governmental organizations, such as a local emergency management association, may be granted authority to issue alerts through IPAWS with approval from FEMA or an alerting authority. For information on FEMA's IPAWS application process, see figure 2 below.

Industry. Industry develops and owns the infrastructure that enables alerts to be created, authenticated, and delivered to the public. Alerting software companies provide software tools that allow alerting authorities to create and send alert messages via the internet to IPAWS. Alerting software companies also provide "opt-in" or subscription-based alerting services to public safety agencies that allow the public to sign up to receive alerts. EAS participants that transmit EAS messages include radio and television broadcasters, cable operators, wireline video service providers, satellite radio providers, and direct broadcast satellite providers. Wireless carriers operate wireless networks that allow alerting authorities to send one-way, geographically targeted WEA messages to WEA-capable mobile devices. Manufacturers develop, test, and provide WEA-capable mobile devices, in coordination with participating wireless carriers, to consumers. Internet and web services companies may also distribute alert information from an IPAWS alert feed to internet applications, websites, or social media.

We have previously reviewed FEMA's progress in implementing IPAWS. In 2013, we found that FEMA had taken actions to improve the capabilities for IPAWS and to increase federal, state, and local capabilities to alert the public, but barriers remained to fully implement an integrated system. We made six recommendations, including that FEMA work with FCC to establish guidance for states to fully implement and test IPAWS components and implement a strategy for regular nationwide testing. The agencies implemented all of the report's recommendations.

IPAWS Usage and Testing Have Increased but Parts of the Country Lack IPAWS Access at the Local Level

Substantially More Local Authorities Have Access to IPAWS since 2013, but Gaps Remain at the Local Level

Our analysis of FEMA data found 1,401 alerting authorities at the federal, state, local, territorial, and tribal levels had access to IPAWS to send alerts as of September 2019, a substantial increase from 2013 (soon after it became operational), when fewer than 100 authorities had access. According to FEMA officials, nearly 70 percent of the nation's population is covered by a local alerting authority that can use IPAWS to send alerts. Further, according to FEMA documentation, from a state authority perspective, all 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands have at least one state-level authority that can use IPAWS to send alerts to any locality within that state or territory.
Local authority access to IPAWS to send alerts varies, however, as FEMA officials stated that about two-thirds of the nation's 3,000 counties do not have access to IPAWS to send alerts. Although access to IPAWS at the state level enables alerts to be sent, for example, to jurisdictions that may have lost their capability during an emergency, gaps in access to IPAWS for local officials could limit the timeliness of alerts as emergencies occur. For example, officials from an alerting authority told us that, with the exception of alerts issued by NWS, all emergencies start locally. If a locality does not have access to issue an alert through IPAWS, information must be communicated from the locality to an authorized state official to issue the alert, which could result in delays in getting critical information to the public.

Reasons for this gap at the local level could be related to a variety of factors. For example, some counties may still be in the process of applying for access. Other counties may not be able to gain access to IPAWS due to state or local laws, or a state's EAS communications plan may specify that only certain types of agencies can issue alerts. For example, state EAS communications plans may authorize the governor of the state, an emergency management office, a state law enforcement agency, or a non-governmental organization as the authorized agencies for sending alerts. In addition, an academic who specializes in rural emergency management told us that unfunded staff positions in emergency management are commonplace in rural areas and the areas may lack funding to apply for IPAWS access. Figure 3 highlights areas of the country that were covered by a local or tribal alerting authority as of September 2019.

Wireless Emergency Alerts Have Become the Primary Alerting Method and Usage Has Increased

Alerting authorities at the state, territorial, and local levels have increasingly used WEA messages since 2012 (see fig. 4). In addition, these authorities used more WEA messages than EAS alerts each year, with a large difference occurring between 2017 and 2018, when WEA messages increased by 89 percent while EAS alerts increased by 35 percent. While usage of WEA and EAS by state, territorial, and local authorities has generally increased since 2012, our analysis of FEMA data found that this increase was driven by a small group of alerting authorities in certain parts of the country. Some locations may be more prone to experience certain types of emergencies, particularly weather-related emergencies such as hurricanes. However, the potential exists in any location for an alert to be sent to the public if an alerting authority determines an imminent threat to public safety exists. Specifically, our analysis of WEA alert data from April 1, 2012, to October 1, 2019, found the following (a simple sketch of this kind of concentration analysis appears after the list):

A total of 236 of the 1,372 state, territorial, and local alerting authorities sent a WEA message.

A total of 69 of the 1,372 state, territorial, and local alerting authorities accounted for nearly 80 percent of WEA messages sent at those levels.

Most of the country has received a low number (fewer than 10) or no alerts sent by state, territorial, and local authorities, while limited parts of the country have received higher numbers of alerts (see fig. 5).
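The sketch below illustrates, with hypothetical counts, how cumulative shares of WEA messages by sender can be computed to identify the kind of concentration described above; it is not a reproduction of our actual analysis of the FEMA data.

```python
import pandas as pd

# Hypothetical per-authority WEA message counts, for illustration only.
counts = pd.Series({"Authority A": 450, "Authority B": 350, "Authority C": 150,
                    "Authority D": 30, "Authority E": 20})

# Cumulative share of all messages, largest senders first.
share = counts.sort_values(ascending=False).cumsum() / counts.sum()
top_senders = (share <= 0.80).sum()
print(share)
print(f"{top_senders} authorities account for roughly 80 percent of messages")
```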
At the federal level, our analysis of FEMA data found that NWS sends the vast majority of WEA messages sent through IPAWS, a number that from April 1, 2012, to October 1, 2019, totaled more than 46,000. The most common WEA messages sent by NWS were related to flash flooding (28,640), tornadoes (15,985), hurricanes (571), and dust storms (386). An academic we interviewed said it is important to note that local alerting authorities use the NWS warnings to issue alerts instructing the public to take specific protective actions, for example, to evacuate using certain roads. For more information on when a person may receive a WEA message on a WEA-capable mobile device through IPAWS, see appendix II.

NWS uses multiple alerting mechanisms to send alert messages to people around the country. As one of its mechanisms, NWS uses the Integrated Public Alert and Warning System to send Wireless Emergency Alert messages to mobile devices in areas facing weather risks, such as this geographically targeted message to a cell phone in Washington, D.C., in July 2019.

To test the capability and effectiveness of IPAWS, FEMA, FCC, NWS, and state and local public safety agencies have carried out nationwide and localized alert tests since 2016.

Nationwide EAS Tests: FEMA, in coordination with FCC, conducted four annual nationwide EAS tests from 2016 to 2019. The tests assessed how well EAS alerts were received and retransmitted using the two ways an EAS alert can be delivered: (1) over the internet via IPAWS and (2) through the legacy "over the air" radio and television broadcast stations. According to FCC's analysis, about 76 percent of an estimated 26,000 EAS participants took part in the 2018 test, with about 96 percent of participants reporting they received the test alert. While the vast majority of EAS participants reported no complications, FCC's analysis identified some problems with the 2018 test, including EAS participants reporting audio quality issues (less than 2 percent), EAS equipment issues, out-of-date software, user error, and complications accessing IPAWS (less than 1 percent each). To help address such issues, FCC provided advisories in advance of the next nationwide EAS test. In addition, FEMA has publicly identified how FCC could improve future nationwide tests, including improving the accuracy of reporting and addressing other audio and visual technical issues. FEMA officials said they are working with FCC to resolve technical issues found in recent tests.

Nationwide WEA Tests: FEMA, in coordination with FCC, carried out the first nationwide WEA test in October 2018. FEMA sent a test alert through IPAWS to participating wireless carriers, which then transmitted the alert to their subscribers' WEA-capable devices across the country. FEMA officials viewed the first nationwide WEA test as a success with regard to the technical execution of delivering a nationwide WEA message via IPAWS. However, officials acknowledged a main lesson from the test was a need to collect data on how effectively WEA messages are being received. While FCC collects EAS test data to assess how well the EAS test was received and retransmitted, a similar mechanism does not exist for the WEA pathway. According to wireless industry representatives we interviewed, the WEA system was designed to use a one-way broadcast cellular technology that prevents the wireless network from collecting data from mobile devices.
Instead, FCC has used voluntary public responses, media reports, and informal surveys conducted by state and local public safety agencies to assess results. For example, FCC's report on the 2018 WEA test found that media sources reported inconsistent WEA delivery in different parts of the country and that informal surveys conducted by state and local agencies showed variability in WEA delivery. FCC also reported that issues were found during the WEA test related to duplicate messages and audio and vibration cadence that could have affected individuals with disabilities. At the time of our review, FEMA officials said they were preparing to conduct the next nationwide WEA test in late 2020 and developing a survey to accompany the test to collect data on WEA message delivery.

The District of Columbia Homeland Security and Emergency Management Agency used the Integrated Public Alert and Warning System to send a geographically targeted WEA test to mobile devices in Washington, D.C., in June 2019; cell phones received the test alert within a range of 6 seconds to several minutes.

In May 2019, FCC rules went into effect that allow alerting authorities to send WEA tests to the public without FCC approval—called a State/Local WEA Test. Participating wireless carriers are required to provide the capability, but subscribers must manually opt in to receive these alerts on their mobile devices. In November 2019, a major wireless carrier obtained a waiver from FCC to conduct two WEA tests under these rules to assess the carrier's ability to perform enhanced geo-targeting for WEA messages.

Alerting Authorities Cited Benefits and Limitations of Using IPAWS during Recent Emergencies

Officials from alerting authorities we contacted for seven case studies on the use of IPAWS cited benefits and limitations of using the system during recent disasters such as wildfires, a hurricane, a flood, an earthquake, a chemical fire, a power shortage, and a law enforcement event.

Benefits. Officials from authorities we interviewed said that IPAWS has a wide reach because most people have mobile devices to receive WEA messages, and WEA can also reach visitors to their area. Compared with opt-in alerting systems that can have a low percentage of subscribers, officials from alerting authorities we interviewed said that IPAWS provides an opportunity to reach more people during emergencies. In addition, they said that states can act as backups for local authorities that have lost their alerting capabilities to help ensure that alerts can be sent. Our analysis found that state and local alerting authorities used IPAWS to send alerts regarding a variety of emergencies, examples of which are shown in table 1. Alerting authority officials also said they plan to use IPAWS in a variety of ways in the future, including for mudslides, rip currents, hazardous materials incidents, and law enforcement emergencies such as terrorism or active shootings.

Limitations. Officials from alerting authorities we contacted cited three main limitations. First, they said it was difficult to write effective WEAs within the 90-character limit. For example, officials from an authority said that within the character limit it is difficult to explain the risk, who the alert is from, and what the public should do. As we discuss later, FCC has expanded the character limit.
Second, officials expressed concerns about the ability to target WEAs to specific geographic areas, which caused some to lack confidence in the system or not use it at all. Third, officials from alerting authorities said that because WEA is a one-way communication system, they do not know if the alerts reached the intended public. For example, officials from one authority described sending an evacuation order but not knowing whether people in the intended area received it. In another example, while an alert was helpful in alerting the public about a suspicious package, officials from one authority said the alert was received 4 miles beyond its intended target, which led them to speculate about the number of people who received the alert. More information about the use of IPAWS during events we selected as case studies is provided in appendix III.

FEMA and FCC Have Taken Steps to Improve Alerting but Face Challenges Monitoring New Capabilities and Managing Pending Applications

FEMA and FCC Have Taken Actions to Improve Alerting Capabilities

FEMA has taken recent steps to modernize IPAWS by implementing various improvements and exploring new technologies. For example, FEMA is moving IPAWS to a cloud-based data center to enhance the system's availability and is modernizing the stations that serve as the main broadcast source for national emergency alerts, according to FEMA's 2018 performance report for IPAWS. In addition, officials described how FEMA has assisted with developing technical standards for new IPAWS capabilities and engaged the private sector to explore possibilities for integrating alerts into technologies such as digital billboards, Braille reader devices, and internet-connected devices in homes and vehicles. FCC has published rules that require participating carriers to implement new or improved capabilities for wireless alerts sent through IPAWS.

Improved alert message content and capabilities. FCC required wireless carriers to support several capabilities to help alerting authorities communicate clearly and effectively, including the ability to send longer messages (expanding the limit from 90 to 360 characters) and the ability to send alerts in Spanish. Initially, FCC set a May 1, 2019, deadline for carriers to support all of these capabilities but later extended it to December 19, 2019, to allow time for carriers to complete testing with IPAWS. FEMA completed the necessary updates to support formal testing with the IPAWS gateway in mid-November 2019. Two academics we interviewed who have researched emergency alerting told us that alerts with expanded character length are more effective in prompting people to take protective actions, compared with shorter ones. Other new capabilities required include "alert prioritization," meaning that alerts must be displayed as soon as they are received, and a new "public safety message" category for advisories that prescribe one or more actions likely to save lives or safeguard property during an emergency (e.g., boil water notices, emergency shelter locations). As discussed earlier, a state/local WEA test option was also required to allow alerting authorities to send test messages to a subset of the public without prior approval from FCC.

More precise geographic targeting. FCC required carriers to deliver alerts to areas that match the targeted geographic area, to within one-tenth of a mile, a capability that FCC calls enhanced geo-targeting.
FCC initially required carriers to implement enhanced geo-targeting by November 30, 2019, but later extended it to December 19, 2019, to allow time for carriers to complete testing with IPAWS, as with the capabilities discussed above. FEMA completed the necessary updates to support formal testing with the IPAWS gateway in mid-November 2019. Previously, carriers were required to transmit alerts to the geographic area that best approximates the emergency area identified by the alerting authority. As FCC's chairman has explained, these less precise geographic targeting capabilities can result in overbroad alerting, where people may receive the alert even though they are located well outside of the target area. Several local WEA tests in 2018 found overbroad alerting when targeting specific geographic locations. Officials from many alerting authorities we interviewed told us they are concerned about the inability to geographically target alerts with accuracy, which can make some reluctant to send WEA messages. According to several wireless and device industry representatives we interviewed and letters that wireless carriers have sent FCC, enhanced geo-targeting is a particularly challenging capability to implement because changes must also be made by different sectors of industry—such as manufacturers of cell phone handsets and chipsets. Some industry representatives also told us that only some, mostly newer-model, cell phones will be able to receive the more precise geo-targeted alerts and that many older devices currently in the population will not support this new capability because it requires a new chipset.

Other recent improvements. FCC has also required implementation of new alert content and categories, such as:

"Clickable" links—Embedded links in alerts so people receiving them can click on the link to see a photo of a suspect, for example. This capability has been implemented.

Blue Alert—A new type of alert to notify the public of threats to law enforcement and to request help apprehending dangerous suspects. This capability has been implemented.

24-hour alert retrieval—Alerts must remain available on devices for 24 hours after receipt, or until the consumer chooses to delete them. FCC required carriers to implement this capability by November 30, 2019, and FEMA officials told us this capability did not require technical changes to the IPAWS gateway.

Although FEMA and FCC are taking actions to improve alerting capabilities, developments in technology are changing the alerting landscape. Our analysis of agency documents and interviews with public-safety stakeholders indicated two emerging and unresolved areas.

Multimedia. In 2018, an FCC advisory committee recommended that alerting systems should carry graphics and other multimedia. For example, four public-safety stakeholders told us it would be helpful to include multimedia (e.g., photos and maps) directly within WEA messages. Doing so would allow the public to see the information without clicking an embedded link. In 2015 and 2016, FCC sought comment on the technical feasibility of including multimedia and in 2018 issued another public notice on the topic to refresh the record. The proceeding remains open and FCC has not taken additional action.

Internet streaming. The public may not receive broadcast EAS alerts when watching television that is streamed through an internet connection.
A 2017 Pew Research Center survey found that 28 percent of American adults and 61 percent of adults ages 18 to 29 said that streaming is their primary way of watching television. Representatives from two internet service providers told us they have developed solutions that enable customers to receive EAS alerts when the customers are using their applications to stream content. However, EAS alerts may not override other streaming services such as video and gaming because of technical limitations and the limited information that content service providers maintain about a user's location, according to industry representatives. For example, representatives from an association representing internet companies told us that providing emergency alerts through internet streaming services presents technological challenges and that its members would have concerns about collecting locational information about their customers. The effect of potentially not receiving an EAS alert while streaming is unclear. While more Americans are streaming their television and multimedia, many use a second screen such as a cell phone while watching television and could receive any relevant alert as a WEA message. A 2018 Nielsen survey found that 45 percent of respondents very often or always use a second screen such as a smartphone while watching television. FCC has sought comment about this issue in general. FCC officials told us that extending EAS to new technologies for viewing video content raises legal and technical considerations and that they continue to evaluate the efficacy, costs, and benefits of doing so.

FCC Lacks Goals and Measures for Monitoring Performance of Required Capabilities

Pursuant to statute, FCC is responsible for establishing technical standards and requirements for WEA, as discussed earlier. Further, FCC's 2018–2022 strategic plan identified a performance goal to facilitate the effectiveness and reliability of EAS and WEA, and following a nationwide test in 2018, FCC's Public Safety and Homeland Security Bureau recommended that additional measures be taken to improve the reliability and effectiveness of WEA. Developing goals and performance measures is consistent with leading practices for performance management. The Government Performance and Results Act of 1993 (GPRA), as amended and expanded by the GPRA Modernization Act of 2010 (GPRAMA), creates a framework for articulating goals and measures that can provide federal agencies with a clear direction for successful implementation of activities and improve the efficiency and accountability of agencies' efforts. Goals explain the purpose and intended results that a program seeks to achieve in its work. Performance measures that are linked to goals allow a program to track the progress it is making toward achieving its goals. While GPRA and GPRAMA apply to the department or agency level, we have previously reported that their provisions can serve as leading practices at other organizational levels, such as component agencies, offices, programs, and projects. Additionally, federal internal control standards discuss the importance of goals, stating that management should define objectives clearly. This involves defining objectives in specific and measurable terms so that they can be easily understood and performance toward achieving those objectives can be assessed. Federal internal control standards also state that measurable objectives should be specific and stated in quantitative or qualitative form.
FCC has required carriers to implement new WEA capabilities and taken steps to understand more about WEA performance, but FCC has not developed goals and performance measures to help monitor how well the new capabilities perform during emergencies. Instead, we found FCC has taken an ad hoc approach to monitoring WEA performance. In particular, when we asked whether FCC planned to develop standards or benchmarks to measure WEA performance, FCC officials said they intend to use certain test results, as discussed below, to understand more about WEA performance.

Partnered geo-targeting tests. FCC intends to partner with localities to test the accuracy of participating wireless providers' enhanced geo-targeting capabilities starting in early 2020. Four localities have applied to participate as of November 2019, according to FCC officials. To perform each test, FCC and its partner at each given location intend to use online surveys to collect information on which individuals receive the test alert and under what circumstances. However, at the time of our review we found that while FCC has broadly identified the purpose of the tests as testing the accuracy of enhanced geo-targeting, it has not defined measurable goals specific to this testing effort. For example, FCC has not stated what would be an appropriate success rate for enhanced geo-targeting accuracy. We also found that FCC has not connected its survey questions to specific performance measures that could be compared across test locations. According to FCC officials, FCC has not announced whether it will compare results across localities or use specific performance measures to assess geo-targeting performance. FCC officials said they have no plans to test other new WEA capabilities, including the expanded message length, and that at the time of our review it was too early to say how results from the partnered tests would be analyzed and shared more broadly with public-safety stakeholders.

State and local tests. As discussed earlier, FCC officials told us that 39 alerting authorities at the state and local level received approval from FCC to conduct their own WEA tests as of November 2019. FCC officials also told us that they encourage alerting authorities that seek approval for WEA tests to share performance data with FCC. According to FCC officials, FCC has received data from nine localities as of November 2019 and will use the test results internally to develop a broader understanding of WEA performance. When we asked what FCC has learned from the data, FCC officials said they have received some results but are still in the process of analyzing them.

By developing goals and performance measures for its efforts to monitor the new WEA capabilities, FCC would have clearer direction for what it plans to achieve and more specific means to assess the performance of the capabilities. For example, performance measures related to FCC's planned survey questions for geo-targeting could include the percentage of participants who received the alert and the percentage who received the alert within the target geographic area. Another performance measure for the new capabilities could include the extent to which messages of up to 360 characters are fully displayed, partially displayed, or not displayed at all on a mobile device. Without specific goals and performance measures, FCC will have difficulty knowing if it is making progress toward its stated strategic goal of ensuring the effectiveness of WEA.
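To make these example measures concrete, the following is a minimal sketch of how survey responses from a partnered geo-targeting test could be rolled up into the measures just described; the field names, sample records, and 360-character threshold are illustrative assumptions, not FCC's actual survey design.

```python
# Hypothetical survey responses from a partnered WEA geo-targeting test.
# Field names are illustrative assumptions, not FCC's actual survey design.
responses = [
    {"received_alert": True,  "inside_target_area": True,  "chars_displayed": 360},
    {"received_alert": True,  "inside_target_area": False, "chars_displayed": 360},
    {"received_alert": False, "inside_target_area": False, "chars_displayed": 0},
    # ... one record per survey participant
]

def wea_performance_measures(responses, full_length=360):
    """Roll survey responses up into example WEA performance measures."""
    total = len(responses)
    received = [r for r in responses if r["received_alert"]]
    on_target = [r for r in received if r["inside_target_area"]]
    full_text = [r for r in received if r["chars_displayed"] >= full_length]
    n = len(received)
    return {
        # Share of all participants who got the alert at all.
        "delivery_rate": n / total if total else 0.0,
        # Of those who got it, share located inside the targeted area.
        "geo_accuracy": len(on_target) / n if n else 0.0,
        # Of those who got it, share whose device displayed the full message.
        "full_display_rate": len(full_text) / n if n else 0.0,
    }

for name, value in wea_performance_measures(responses).items():
    print(f"{name}: {value:.0%}")
```

Measures of this kind, computed the same way in every test locality, would allow results to be compared across locations rather than assessed one test at a time.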
The results of data collected on performance measures could provide assurance that new WEA capabilities are working as intended during emergencies, or could point to areas where performance is lacking and where FCC might need to take other actions such as working with industry to resolve issues, updating WEA requirements, or conducting additional analysis. Monitoring performance is all the more important because of uncertainty about the extent to which all cell phones will be able to receive WEA messages with the new capabilities. In addition, new capabilities have the potential to make WEA a more powerful tool and possibly further increase its use. Our analysis shows that WEA has outpaced the use of EAS as an alerting method, and according to the Pew Research Center, Americans are increasingly connected to digital devices, with 96 percent of American adults owning cell phones in 2019 and 81 percent owning smartphones. However, as discussed earlier, officials from many alerting authorities we interviewed had concerns with WEA performance. Many officials from alerting authorities told us that they were looking forward to the new capabilities—including enhanced geo-targeting and expanded message length—which will improve their ability to alert the public. Having specific performance information about the effectiveness of these capabilities could increase alerting authorities' confidence in the system and help make these authorities more informed users of IPAWS.

FEMA Provides Training and Resources but Lacks Documented Next Steps to Address Authorities' Challenges

The IPAWS Modernization Act requires FEMA to instruct and educate federal, state, tribal, and local government officials in the use of IPAWS. FEMA has multiple efforts underway to educate and train alerting authorities about IPAWS.

Training. Through FEMA's Emergency Management Institute, FEMA offers training courses on IPAWS, including a mandatory course that IPAWS applicants must take before they can become authorized users of the system. FEMA is revising its training, according to FEMA officials, and they estimated that the new courses will be available midway through 2020.

Online resources. On a regular basis, FEMA emails tips and conducts webinars, which are recorded and made available online. FEMA has developed a library of IPAWS resources, including a toolkit and fact sheets. FEMA also created an online collaboration group for IPAWS users to share information and best practices and plans to expand the capabilities of this group, according to FEMA.

Testing environment. FEMA created a controlled testing environment called the IPAWS lab that alerting authorities can use to send test alerts and receive hands-on or remote assistance from FEMA staff. According to FEMA, demand for IPAWS lab support has increased, and FEMA hosted more than 200 sessions with IPAWS users in calendar year 2018. FEMA implemented a new requirement in October 2019 for all alerting authorities to send a monthly test alert using the IPAWS lab and upgraded the IPAWS lab environment to support the increased testing.

In-person presentations. FEMA officials regularly present at public safety conferences and other events and use these opportunities to share information about IPAWS and encourage potential new users.

FEMA has also assessed alerting authorities' educational needs, but it has not fully addressed the recommendations it identified to support these needs or developed plans for ongoing assessments.
In 2017, FEMA conducted an analysis—interviewing a sample of alerting authorities and assessing their responses to identify common challenges in using IPAWS. FEMA found that alerting authorities need more training and practice in using IPAWS and experience challenges with using their alerting software, among other things. Our interviews with selected alerting authorities and software providers revealed similar concerns, including that, for some, a lack of confidence is a potential barrier to using IPAWS. For example, representatives from two of the three alerting software providers we interviewed told us they have issued alerts through IPAWS at the request of their customers. According to these representatives, alerting authorities turn to their software providers as experienced users of the system because authorities have limited local staff or because they cannot send an alert for a technical reason. Four academics we interviewed said that FEMA should provide additional training for alerting authorities that is focused on drafting effective messages. As of October 1, 2019, fewer than 20 percent of state, territorial, and local alerting authorities had sent WEA messages. The limited use of IPAWS could lead to decreased proficiency or confidence. For example, an official from one alerting authority told us the jurisdiction did not use IPAWS at first because officials were not confident about using it. Our analysis of available information found that FEMA has addressed 4 of the 31 recommendations in its 2017 analysis. For example, FEMA revised its IPAWS training and added software requirements to its memorandum of agreement with alerting authorities. However, the extent to which FEMA has addressed other potentially useful recommendations is not clear because FEMA has not developed a plan to address the recommended actions. For example, one priority recommendation was to create skills checklists that provide a complete inventory of the types of skills alerting authorities need to use IPAWS. FEMA officials told us they had addressed many of the challenges identified in the 2017 analysis, including developing some timelines. However, FEMA did not provide documentation about how all the recommendations would be addressed. FEMA officials also told us they intend to periodically obtain information from alerting authorities about their needs and have begun another round of interviews with alerting authorities. However, these plans have not been documented. FEMA officials said they also use other methods to keep abreast of educational needs and challenges, such as attending conferences and reaching out to their contacts at emergency management associations that represent alerting authorities. In addition, alerting authorities send comments and feedback via email, according to FEMA officials. However, FEMA did not provide documentation about how it uses information obtained from these methods. As discussed earlier, FEMA is required by statute to educate federal, state, tribal, and local government officials. FEMA's IPAWS strategic plan also includes a goal to make emergency alerting more effective, which, as the plan explains, requires FEMA to engage non-federal alerting authorities to build competence and promote hands-on familiarity with IPAWS.
The FEMA National Advisory Council has emphasized these points, recommending that FEMA improve alerting authorities' ability to transmit effective alerts by developing and providing education, guidance, and best practices on how to use IPAWS for effective emergency messaging. Federal standards for internal control state that management should externally communicate necessary quality information. Open two-way external reporting lines allow for this communication. For example, management obtains quality information from external stakeholders—which in FEMA's case would be information from alerting authorities—using established reporting lines. Additionally, federal internal control standards state that documentation provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Documenting how FEMA plans to address key recommendations from its 2017 analysis could help guide its efforts to educate alerting authorities and hold it accountable for addressing identified needs. Without a documented plan, FEMA may not systematically implement each recommendation, which could result in alerting authorities continuing to struggle with known challenges. In addition, by continuing its analytical efforts and implementing a mechanism to regularly obtain and analyze alerting authorities' needs, FEMA could learn if these needs are changing and develop educational efforts to address them. Taking such actions will help FEMA enhance alerting authorities' proficiency with, and confidence in, using IPAWS.

FEMA Has Taken Steps to Increase IPAWS Adoption but Faces Challenges Addressing Pending Applications

FEMA has identified increasing adoption of the system and assisting authorities in gaining access to IPAWS as strategic goals. In addition, in June 2019, the FEMA Administrator issued a "call to action" policy memorandum to FEMA's regional offices to help improve IPAWS adoption at the local level. As described earlier, FEMA has taken various steps in recent years to increase the adoption of IPAWS, for example, by informing local public safety agencies about IPAWS at conferences and encouraging them to apply for access to the system. In addition, FEMA has developed resources, available on the IPAWS website, that describe the expectations and steps for how a public safety agency may apply to become an IPAWS alerting authority. The number of authorities completing an initial step in the application process to obtain access to IPAWS has increased in recent years from 52 applicants in 2017 to 104 applicants in 2018 and to 122 applicants from January 2019 to September 2019. However, while more agencies are starting the application process, our analysis of FEMA data found that 430 IPAWS applications were pending as of September 2019, some of which dated back to 2012. Our analysis found that 152 applicants, or about one-third of the 430 applications, began the process (initiated the memorandum of agreement process) from 2012 to 2016. In addition, some applicants had yet to complete the key initial steps in the process. For example, after completing the required IPAWS web-based training and procuring IPAWS-compatible software, public safety agencies must return a signed memorandum of agreement to FEMA before the application can move forward. We found that FEMA sent a draft memorandum of agreement to 108 applicants between 2014 and 2017 that had not yet returned the agreement to the agency as of September 2019.
This could indicate that several applicants may be stalled in the early stages of the process and may benefit from FEMA's assistance in completing the application or answering questions. FEMA officials said that once a completed application is received, approving it should take about 30 days but that factors outside FEMA's control can contribute to processing delays and thereby increase the number of pending applications. For example, FEMA officials said it is out of their control when applicants do not return signed memorandums of agreement because that step of the process is handled at the state and local level. Representatives from an IPAWS applicant we interviewed said the amount of time it took to receive approval from the state authority was one of the reasons that their application was delayed. Although delays involving certain applications may be out of FEMA's control, FEMA may be able to help other applicants. However, FEMA provided no evidence that it had followed up with applicants, when it had last contacted them, or how follow-up should be prioritized. FEMA officials said one employee serves as the primary lead for managing the entire application process, making follow-up with hundreds of applicants labor intensive. FEMA officials said that managing pending applications is a challenge for the IPAWS office due to resource constraints. To help address these constraints, in 2019, FEMA awarded a contract to begin developing a new tool with the goal of streamlining FEMA's management of applications. Officials said they anticipate the tool, estimated to be available in early 2020, will help them better manage the pending applications and conduct outreach as well as move new applications through the process. In 2016, FEMA conducted a study of the IPAWS application process and highlighted certain factors that contributed to an increasing backlog and response time, including FEMA officials not knowing that a follow-up task for an applicant was waiting to be addressed. The study further indicated that determining the next step was manual and often reactive. Officials also said that staff will be able to run an aging report on applications to help them prioritize follow-up efforts. However, the agency has not established procedures to prioritize and follow up with applicants. FEMA officials acknowledged that establishing procedures to prioritize and follow up on the in-process applications would be beneficial. While these applications are pending, people in areas that are not covered by IPAWS authorities may not receive critical alerts and warnings from local authorities through IPAWS.

Conclusions

Effective emergency alerting is vital to helping save lives and property during natural disasters and other threats to public safety, highlighting the importance of IPAWS as a way to disseminate critical information. However, FCC lacks specific goals and performance measures and FEMA lacks plans and processes, which may contribute to decreased confidence in and use of IPAWS by alerting authorities. In particular, because FCC does not have specific goals and performance measures to monitor WEA improvements, FCC will have difficulty assuring that these improvements are working as intended during emergencies and identifying areas where performance is lacking, which could undermine authorities' confidence in using IPAWS.
In addition, because FEMA has not documented next steps or plans for educating alerting authorities and establishing a process to regularly assess their educational needs, some authorities may continue to lack proficiency and confidence in using IPAWS. Furthermore, absent a strategy to address the substantial number of pending IPAWS applications, FEMA's efforts to increase IPAWS adoption and expand alerting coverage are hindered.

Recommendations for Executive Action

We are making a total of three recommendations, including one to FCC and two to FEMA. Specifically:

The Chairman of FCC should develop specific, measurable goals and performance measures for its efforts to monitor the performance of new WEA capabilities, such as enhanced geo-targeting and expanded alert message length. (Recommendation 1)

The Director of the IPAWS program should document how it plans to address key actions needed to educate alerting authorities in their use of IPAWS and implement a mechanism that will allow FEMA to regularly and systematically obtain and analyze feedback on alerting authorities' educational needs. (Recommendation 2)

The Director of the IPAWS program should establish procedures to prioritize pending IPAWS applications and to follow up with applicants to address these applications. (Recommendation 3)

Agency Comments

We provided a draft of this report to FCC, the Departments of Homeland Security (FEMA), Commerce (NOAA), and the Interior (USGS) for review and comment. FCC and the Department of Homeland Security provided written comments, reprinted in appendixes IV and V, respectively. FCC, FEMA, and NOAA provided technical comments, which we incorporated as appropriate. In its written comments, FCC stated that it agreed with us on the importance of gathering and assessing specific performance information about the effectiveness of WEA capabilities. Separately, FCC officials noted that FCC was taking steps to gather this data, which will help inform the development of metrics, as we recommended. In its written comments, DHS concurred with our two recommendations to FEMA and provided information about activities that FEMA would undertake to implement them. We are sending copies of this report to the appropriate congressional committees, the Chairman of FCC, the Secretaries of Homeland Security, Commerce, and the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or Vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) the trends in how alerting authorities use and test IPAWS and their experiences using IPAWS, and (2) actions that FEMA and FCC have taken to modernize IPAWS and increase its adoption, and the challenges they face. For background information on emergency alerting, we identified key issues and federal roles and responsibilities by reviewing applicable laws and regulations, our prior work, and reports prepared by FEMA, FCC, the Department of Homeland Security's (DHS) Office of Inspector General, the Congressional Research Service, and academics.
We also identified recent trends regarding natural disasters and the use of digital devices and the internet that could affect the use and frequency of emergency alerting. To identify natural disaster trends, we reviewed our prior work, a 2018 report prepared by the U.S. Global Change Research Program, and information on wildfires prepared by the California Department of Forestry and Fire Protection. We identified trends about the use of digital devices and the internet by reviewing surveys conducted from 2017 to 2019 by the Pew Research Center and The Nielsen Company, which regularly conduct national surveys on those topics. We also reviewed proposed federal legislation on emergency alerting. To examine the use of IPAWS and selected alerting authorities' experiences using IPAWS, we analyzed IPAWS access and usage throughout the country from 2012 to 2019. We focused on identifying the authorities that used IPAWS from 2017 to 2019, following the passage of the IPAWS Modernization Act of 2015 (enacted in 2016). We analyzed IPAWS testing by judgmentally selecting samples of authorities conducting tests. In our calculations of the number of alerts issued by state, territorial, tribal, and local authorities, we focused on alerts for disasters and threats to public safety and excluded test alerts and alerts for missing persons and child abductions. We reviewed FEMA's processes for ensuring the completeness and reliability of these alerting data and determined that they were sufficiently reliable for the purposes of examining trends in the use of emergency alerts. We also reviewed reports by FCC and local authorities on EAS and WEA test results. To obtain information on alerting authorities' experiences using IPAWS, we conducted seven case studies of emergency events. To select them, we analyzed alerts that local authorities issued through IPAWS, FEMA's list of federally declared disasters from 2017 to 2019, NOAA's list of billion-dollar disasters from 2017 to 2019, and our prior work on natural disaster preparedness and recovery from 2017 to 2019; considered recommendations from stakeholders; conducted internet searches; and reviewed news reports. We selected these case studies to include various areas of the country that experienced different types of disasters and threats to public safety during this time. These included natural disasters (wildfires and an earthquake), weather events (a hurricane and a flood), manmade disasters (a chemical fire and a power shortage), and a law enforcement event (a suspicious package). We then interviewed local alerting authorities in those areas. As a test case study, we interviewed District of Columbia emergency management officials. We conducted site visits with state and local emergency management officials in Los Angeles and Ventura, California; Bristol, Panama City, and Tallahassee, Florida; and Washington, D.C. To examine the actions that FEMA and FCC have taken to modernize IPAWS and increase its adoption, and the challenges they face, we reviewed FEMA documents such as IPAWS strategic plans and a performance report; FCC regulations, notices, and comments on FCC proposed rulemakings regarding EAS and WEA; and assessed the information against statutory requirements contained in the IPAWS Modernization Act and federal internal control standards. We focused on four areas of the Act that were key in the implementation of the program.
These areas required FEMA, in consultation and coordination with FCC, to:

ensure that IPAWS is capable of distributing alerts on the basis of geographic location, risks, and technologies;

educate state, tribal, and local governments to understand how IPAWS works, and how and when to use IPAWS;

establish training opportunities for alerting authorities; and

conduct nationwide tests of IPAWS alerts.

We compared FCC's actions to leading practices based on the Government Performance and Results Act of 1993 (GPRA) as enhanced by the GPRA Modernization Act of 2010 (GPRAMA), which create a framework of goal setting and performance management for federal agencies. While GPRA and GPRAMA apply to the department or agency level, we have previously reported that their provisions can serve as leading practices at other organizational levels, such as component agencies, offices, programs, and projects. We also reviewed recommendations in reports prepared by the FEMA National Advisory Council IPAWS Subcommittee and FCC's Communications Security, Reliability, and Interoperability Council, and disaster after-action reports prepared by FEMA and state and local governments. As an additional step in assessing the challenges that FEMA faces in increasing IPAWS adoption, we analyzed FEMA's pending IPAWS applications as of September 2019 to determine which steps in the application and approval process had been completed and how long the applications were in process. We also interviewed four selected IPAWS applicants to obtain their views on the application process. To obtain a variety of perspectives, we selected applicants that were different types of organizations (an airport, a university, a local government, and a federal agency) in different areas of the country. In addition, for both objectives, we interviewed officials from FEMA, FCC, NOAA, USGS, and 18 state, local, and territorial alerting authorities; representatives from 4 industry associations, 2 advocacy groups, and 15 companies, including wireless carriers, internet service providers, internet content providers, IPAWS software providers, and mobile device manufacturers; and 7 academics. To obtain a variety of perspectives, we selected industry associations and companies that represented different telecommunications industry sectors and have different roles in emergency alerting (broadcasting, cable, wireless, internet service, and application developers) and academics with different areas of expertise (public health, engineering, natural hazards, disaster preparedness, rural emergency management, and communication). We also interviewed staff from a county board that oversees emergency management activities in that jurisdiction and officials from a city that is planning to apply for IPAWS access. The results of these interviews are not generalizable to all stakeholders, but provide insight on the use of IPAWS and related emergency alerting issues. Our interviewees are listed in table 2 below.

Appendix II: Types of Wireless Emergency Alerts

Appendix III: Case Studies Regarding the Use of IPAWS

Selected alert sent by the Florida Division of Emergency Management for Bay County during Hurricane Michael:

October 10, 2018: GOVERNOR EVAC ALERT 6 to 13 FT STORM SURGE EXPECTED IN BAY COUNTY Zones A,B,C EVACUATE NOW

Selected alerts sent by Bay County Emergency Services during Hurricane Michael:

October 10, 2018: Dangerous winds are beginning to occur in Bay County Shelter in place now

October 15, 2018: Bay County remains under a boil water notice.
Please boil or use bottled water for consumption.

October 15, 2018: FOOD AND OR WATER ARE AVAILABLE NORTH OF 15th AND CR 386 AT 1011 CR 386 SOUTH

Bay County officials said the county lost its ability to issue alerts at this point.

Hurricane Michael, October 2018: Hurricane Michael was a Category 5 storm that NWS reported made a catastrophic landfall near Mexico Beach and Tyndall Air Force Base, Florida, producing devastating winds and storm surge near the coast, and rain and wind inland (see fig. 6). According to a State of Florida report, Hurricane Michael was the most powerful storm to hit the Panhandle region and the third most intense storm to make landfall in the mainland United States in recorded history. During the storm, several counties could not send alerts because of power outages and inoperable cellular towers. Officials from an alerting authority we interviewed in Florida commended the state's ability to send IPAWS alerts on behalf of the county, which had lost its communications capabilities during the storm. Authorities also said IPAWS provides an ability to warn the public about approaching hurricanes and share critical lifesaving information such as the location of food, water, and shelter. However, authorities expressed frustration about the inability to accurately geo-target WEA messages to evacuation zones and about how the WEA text character limit forced them to issue multiple WEA messages regarding the same alert. Some officials said they were frustrated when certain local EAS alerts were not delivered by broadcasters, which could prevent some people from receiving them.

Alert issued by the California Governor's Office of Emergency Services:

Dec. 6, 2017: Strong winds over night creating extreme fire danger. Stay alert. Listen to authorities.

Alert issued by the Ventura County Sheriff's Office – Office of Emergency Services:

Dec. 7, 2017: VENTURA COUNTY-FAST MOVING BRUSH FIRE NORTH OF OJAI. GO TO READYVENTURACOUNTY.ORG FOR INFO

Selected alerts issued by the City of Los Angeles:

Dec. 6, 2017: For information regarding the Skirball Fire in Los Angeles please go to Twitter.com/LAFD

Nov. 9, 2018: MANDATORY EVACUATION in West Hills: W of Valley Circle, N to Roscoe Blvd, S to Vanowen.

Selected alert issued by Santa Barbara County:

Dec. 16, 2017: EVAC ORDER: SB City: east of Hwy 154 to Mission Canyon Rd and north of 192. Leave now.

Southern California Wildfires, December 2017 and November 2018: The southern California area experienced large wildfires in recent years, including the Thomas fire in December 2017 and the Woolsey fire in November 2018 (see fig. 7). The California Department of Forestry and Fire Protection reported in August 2019 that the Thomas fire, which affected Santa Barbara and Ventura Counties, was the second-largest wildfire in the state's history and destroyed more than 1,000 structures. The Woolsey fire, which affected Los Angeles and Ventura Counties, had a footprint over 150 square miles and resulted in the evacuation of about a quarter-million people. According to Los Angeles County, the Woolsey fire was the most destructive fire in the county's history. California officials we interviewed said IPAWS is an effective tool for wildfire evacuations and that because most people have cell phones, they do not have to subscribe to receive WEA messages. Officials also praised the capability of IPAWS to allow a state alerting authority to send alerts to at-risk counties ahead of potential wildfires.
However, officials said it is a challenge to know when and where other alerting authorities in the area are sending alerts and that there may be little time. For example, an official told us that the Thomas fire moved at 60 miles per hour. Officials also said that even though WEA messages were targeted to an area during the fires, they did not know whether people received them because geo-targeting was not precise and because cell towers may have been damaged.

Alerts issued by the New York City Emergency Management Department on Oct. 24, 2018:

Police Activity: Residents on W 58th St btw Columbus & 8th Av shelter in place immediately

The suspicious device on W 58 St & 8th Ave was safely removed by NYPD Bomb Squad.

Suspicious Package in New York City, October 24, 2018: On October 24, 2018, the New York City Emergency Management Department issued a WEA shelter-in-place order regarding a suspicious package at the Time Warner Center in Manhattan that was found to contain an improvised explosive device (see fig. 8). According to officials, police removed the device and determined it was no longer a threat. About an hour after the initial alert was issued, the city issued another WEA canceling the shelter-in-place alert. New York City officials said IPAWS is the city's most effective alert and warning tool, compared with its own alerting system to which about 9 percent of the population has subscribed. Regarding the suspicious package, authorities were able to draw an alerting area covering a 3- to 4-block radius. The officials also said that WEA messages were instrumental in helping to capture a suspect in a bombing incident in the city's Chelsea neighborhood in 2016. However, officials said the October 2018 alert was received as far as 4 miles from the targeted area, which led them to speculate about the number of people who received the alert. Officials also said they would like IPAWS to incorporate more languages for use in alerts and provide them with the ability to use photographs or maps in future alerts.

Alerts issued by the Douglas County Emergency Management Agency:

March 14, 2019: From Douglas County Sheriffs Office. Record flooding on Elkhorn River. Evacuate Now.

March 15, 2019: From Douglas County Sheriff: Evacuate City of Valley NOW. Use Q Street. Hwy 275 closed.

Alert issued by Platte County Emergency Management:

March 14, 2019: Travel is not advised in and around Columbus and Platte County due to extensive flooding.

Flood in Nebraska, March 2019: In March 2019, Nebraska experienced one of the most devastating floods in recent history, according to the state government (see fig. 9). We interviewed officials in Douglas County and Platte County, areas that experienced torrential rain and flooding. One county sent a geo-targeted WEA evacuation alert to people living near a river while the other county sent a WEA advising the public to not travel within the county. A county official in Nebraska said that if the planned future enhancements to WEA take place and are found to be successful, WEA will ultimately be of greater value than other means of notification such as the county's previous subscription system, which had a low participation rate. The official stressed the difficulty in explaining the threat, the source of the alert, and a protective action within the 90-character WEA limit. The official also noted that some local broadcasters were not equipped to recognize an EAS law enforcement alert for further transmission.
An official in another county said that some people did not receive the WEA messages.

Power shortage in Michigan, January 2019: On January 30, 2019, Consumers Energy, a primary energy supplier in Michigan, experienced a fire at a natural gas storage facility at a time when there was high energy demand because of extreme cold temperatures (see fig. 10). According to NWS, Michigan's Lower Peninsula experienced the lowest temperatures in decades—down to minus 20 degrees, with wind chills down to minus 40 degrees. As a result, the state's Emergency Operations Center asked the Michigan State Police, an IPAWS alerting authority, to issue WEA and EAS alerts asking people to lower their thermostats to conserve natural gas. Michigan State Police officials said that IPAWS allowed the alerting authority to send a WEA message to 68 counties, which was an effective and quick way to reach many people. However, officials said they attempted to send an EAS alert to all 68 counties in Michigan's Lower Peninsula but were limited to a total of 31 counties per alert, per FCC regulations. They said that after the EAS alert was sent, the actual EAS broadcast message was not displayed on television because the entire list of the 31 county names, which must be read first according to FCC regulations, took up the allowable 2-minute time span for an EAS broadcast.

Alerts issued by the City of Houston on Sept. 20, 2017:

Shelter in Place in northwest Spring Branch due to hazardous fire. Check local media.

Shelter in place is CANCELLED for Northwest Spring Branch after hazardous fire.

Chemical Fire in Houston, Texas, September 20, 2017: The Houston Fire Department requested that the Houston Office of Emergency Management issue a WEA shelter-in-place order following a chemical fire at a bearing supply company that resulted in the release of potentially hazardous smoke (see fig. 11). Houston officials said they believe that IPAWS allowed the alerting authority to reach a broad area at risk using the WEA message. However, officials said it is possible that a lack of training on the part of the alerting authority, among other things, limited their ability to issue the alert in a timely fashion. They said it took the alerting authority 43 minutes and multiple attempts to properly prepare and send the message using its IPAWS-compatible software before the message was successfully sent to the public.

Earthquake in Alaska, November 30, 2018: A magnitude 7.0 earthquake struck north of Anchorage, Alaska, on November 30, 2018 (see fig. 12). We interviewed officials from three local governments that were affected by the earthquake. Officials at one borough said they did not issue an IPAWS alert because the earthquake's intensity was brief and they did not receive reports of fatalities or widespread damage. However, the officials said that if the earthquake's intensity had been greater, they would have used IPAWS to alert people about shelter locations. NWS used IPAWS to issue a tsunami warning but local officials did not issue any alerts through IPAWS. Officials in Alaska said that it is helpful that another government agency can be a backup alerting authority and provide alerts through IPAWS on behalf of the local government. However, an official said the inability to precisely geo-target alerts about tsunami risks to coastal areas prevented the official from sending out an alert due to concerns that people who were not affected by the earthquake would receive the alert.
Another official said the cost of procuring alerting software that is compatible with IPAWS may be a challenge for some local governments.

Appendix IV: Comments from the Federal Communications Commission

Appendix V: Comments from the U.S. Department of Homeland Security

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Sally Moino (Assistant Director); Michael Sweet (Analyst in Charge); David Aja; Melissa Bodeau; Mark Goldstein; Bob Homan; Kate Perl; Cheryl Peterson; Sam Portnow; Malika Rice; and Andrew Stavisky made key contributions to this report.
Why GAO Did This Study

Public alerts and warnings are critical to protect lives and provide information during emergencies, such as wildfires and floods. The IPAWS Modernization Act, enacted in 2016, required FEMA, in consultation and coordination with FCC, to enhance and test the capabilities of IPAWS and increase its adoption among state and local public safety agencies. GAO was asked to review the federal response to recent natural disasters. This report examines, among other things: (1) trends in the use of IPAWS and (2) actions that FEMA and FCC have taken to modernize IPAWS and increase its adoption. GAO analyzed relevant data and documentation and assessed FCC's efforts against leading government performance management practices and FEMA and FCC's efforts against internal control standards. GAO interviewed federal officials involved in emergency alerting. GAO also interviewed a non-generalizable selection of IPAWS alerting authorities and applicants, local governments, public safety and industry associations, and communications companies. GAO selected alerting authorities that experienced different types of disasters and threats to public safety from 2017 to 2019.

What GAO Found

Use of the Integrated Public Alert and Warning System (IPAWS) has increased since its launch in 2012. IPAWS enables authorized federal, state, territorial, tribal, and local alerting authorities to send a Wireless Emergency Alert (WEA) to mobile devices, such as cell phones, and an Emergency Alert System (EAS) alert to media platforms, such as radios and television. The Federal Emergency Management Agency (FEMA) operates IPAWS and the Federal Communications Commission (FCC) establishes rules for telecommunications providers to deliver WEA and EAS alerts. A public safety agency must submit an application and receive approval from FEMA to become an IPAWS alerting authority. In September 2019, more than 1,400 alerting authorities had access to IPAWS, up from fewer than 100 authorities in 2013. All states have at least one state alerting authority, but gaps in local authority access remain (see figure) that could limit the timeliness of alerts as emergencies occur at the local level. GAO found 430 pending IPAWS applications as of September 2019, some of which dated back to 2012. FEMA has not established procedures to prioritize and follow up with applicants and FEMA officials acknowledged that doing so would be beneficial. FEMA and FCC have taken steps to modernize IPAWS and improve alerting. For example, FEMA has made system upgrades and FCC has made various WEA improvements, such as requiring wireless phone carriers to provide more precise geographic targeting of alerts. Prior to these improvements, officials from many alerting authorities said the inability to geographically target alerts with accuracy made the officials reluctant to send WEA messages. FCC intends to partner with certain localities to test geographic targeting and, according to FCC officials, plans to use other tests to learn about how the improvements perform during emergencies. However, FCC has not developed goals and performance measures for these efforts. Doing so would help FCC more clearly assess whether the WEA improvements are working as intended. Furthermore, having specific performance information could increase alerting authorities' confidence in and use of IPAWS.
What GAO Recommends

GAO is making three recommendations, including that FEMA establish procedures to prioritize and address pending IPAWS applications and that FCC develop goals and performance measures to monitor the WEA improvements. FEMA concurred with GAO's recommendations. FCC stated it was taking steps to gather data to inform the development of metrics, as GAO recommended.
VA Medical Centers Took Action against Some Selected Providers with Disqualifying Information in the NPDB but Overlooked Others

In our review of 57 providers selected for our February 2019 report, we found that the responsible VA medical centers took action against some providers with disqualifying information in the NPDB but overlooked others. We found that VA medical centers took administrative or disciplinary actions against some providers, such as removing them from patient care, after becoming aware of adverse information in the NPDB. However, many of these actions were taken following our review and a VHA-wide licensure review, both of which occurred in 2018, rather than at the time of the NPDB report. Specifically, the responsible VA medical centers removed five providers who they determined did not meet VA licensure requirements following our inquiries. For example, one of these five providers had surrendered a license in 2014, while employed at VA, but was not removed by the VA medical center until after our inquiries in 2018. Additionally, another provider was reported to the Drug Enforcement Administration (DEA) by a VA medical center after we inquired about the provider prescribing controlled substances without appropriate registration. We also found that VA medical centers hired or retained some of the 57 providers who they acknowledged had disqualifying adverse information in the NPDB, which is inconsistent with VHA policy. Specifically, these providers had licenses that were revoked or surrendered for cause, but VA medical center officials overlooked or were unaware of this information. However, none of these providers still worked at VHA at the time we completed our review. For example, one VA medical center hired a provider who had a state license revoked for patient neglect and substandard care. VA medical center officials stated that they received the NPDB report about the revoked license at the time the provider was hired in 2014 but it was inadvertently overlooked by multiple staff. This provider voluntarily resigned in 2017. In our February 2019 report, we found that three factors were largely responsible for inconsistent adherence to VHA policies that disqualify providers from employment. First, some medical center officials are not aware of key VHA policies, such as the requirement that a provider who has had a license revoked or surrendered for cause is ineligible for employment unless the license is reinstated. For example, in the case of the provider who surrendered a license in 2014, documentation shows that the medical center staff became aware of the surrendered license in 2015, but VHA staff stated that the removal was stalled due to confusion about policies. This lack of awareness of key policies may be linked to a lack of mandatory training for credentialing staff. Second, gaps in VHA policy allow for inconsistent interpretation. For example, VHA has not issued policies pertaining to employing providers who have had their DEA registration for prescribing controlled substances revoked or surrendered for cause. While the DEA requires registrants, like VHA, to obtain a waiver before employing such providers, VHA policy is silent on the requirement to obtain a waiver; we found that VA medical center officials were unclear on the DEA requirement and had hired providers without obtaining the required DEA employment waiver. Further, we found that two providers inappropriately prescribed controlled substances without a DEA waiver.
Third, VHA's oversight of VA medical centers' reviews of adverse information is inadequate. Under VHA policy, Veterans Integrated Service Network (VISN) officials are responsible for reviewing providers with certain adverse licensure actions. However, we found that this review was not always conducted or documented. Further, although VHA-wide reviews of provider licenses have been completed and have identified providers with licensure issues, VHA officials indicated that these types of reviews are not routinely conducted because they are labor-intensive. In our February 2019 report, we also found that some VA medical centers had taken steps to improve the credentialing process and identify providers who do not meet the licensure requirements. For example, one medical center completed a periodic review of all licensed providers to identify providers who may have had an expired licensure issue. Another VA medical center updated its policies to require providers with adverse actions to be reviewed by management. However, we found that VHA does not routinely assemble and disseminate information about initiatives that medical centers have undertaken to improve the oversight of providers. In our February 2019 report, we concluded that without consistent adherence to VHA employment policies and adequate oversight, VHA lacks assurance that all VA providers have the appropriate professional qualifications and clinical abilities to care for patients. To address these shortcomings, in our February 2019 report we made seven recommendations to VA. VA concurred with these recommendations. Table 1 summarizes these recommendations and the steps VA has taken to address them. Selected VA Medical Centers' Reviews of Providers' Clinical Care Were Not Always Documented or Timely As we reported in November 2017, we found that from October 2013 through March 2017, the five selected VA medical centers required reviews of a total of 148 providers' clinical care after concerns were raised about their care. However, for almost half of these reviews, officials at these medical centers could not provide documentation to show that the reviews had been conducted. We found that all five VA medical centers lacked at least some documentation of the reviews they told us they conducted, and in some cases, we found that the required reviews were not conducted at all. For example, we found that the medical centers lacked documentation showing they conducted a prospective review of 26 providers. Additionally, VA medical center officials confirmed that they failed to conduct this required review for an additional 21 providers. We also found that the five selected VA medical centers did not always conduct reviews of providers' clinical care in a timely manner. Specifically, of the 148 providers, the VA medical centers did not initiate reviews of 16 providers for 3 or more months, and in some cases, for multiple years, after concerns had been raised about the providers' care. For three of these 16 providers, additional concerns about the providers' clinical care were raised before the reviews began. In our November 2017 report, we found that two factors were largely responsible for the inadequate documentation and untimely provider reviews. First, VHA policy does not require VA medical centers to document all types of reviews of providers' clinical care, including retrospective reviews, and VHA has not established a timeliness requirement for initiating reviews of providers' clinical care.
Second, VHA’s oversight of the reviews of providers’ clinical care is inadequate. Under VHA policy, VISN officials are responsible for overseeing the credentialing and privileging processes at their respective VA medical centers. While reviews of providers’ clinical care after concerns are raised are a component of credentialing and privileging, we found that none of the VISN officials we spoke with described any routine oversight of such reviews. This may be in part because the standardized tool that VHA requires the VISNs to use during their routine audits does not direct VISN officials to ensure that all reviews of providers’ clinical care have been conducted and documented. Further, some of the VISN officials we interviewed told us they were not using the standardized audit tool as required. In our November 2017 report, we concluded that without adequate documentation and timely completion of reviews of providers’ clinical care, VA medical center officials lack the information they need to make decisions about providers’ privileges, including whether or not to take adverse privileging actions against providers. Furthermore, because of its inadequate oversight, VHA lacks reasonable assurance that VA medical center officials are reviewing all providers about whom clinical care concerns have been raised and are taking adverse privileging actions against the providers when appropriate. To address these shortcomings and improve VA medical center reviews of provider quality and safety concerns, we made three recommendations to VA in our November 2017 report. VA concurred with these recommendations. Table 2 summarizes these recommendations and the steps VA has taken to address them. Selected VA Medical Centers Did Not Report All Providers to the NPDB or to State Licensing Boards as Required In our November 2017 report, we found that from October 2013 through March 2017, the five VA medical centers we reviewed had only reported one of nine providers that should have been reported to the NPDB as required by VHA policy. Furthermore, none of these nine providers were reported to state licensing boards as required by VHA policy. These nine providers either had adverse privileging actions taken against them or resigned or retired while under investigation before an adverse privileging action could be taken. The VA medical centers documented that these nine providers had significant clinical deficiencies that sometimes resulted in adverse outcomes for veterans. For example, the documentation shows that one provider’s surgical incompetence resulted in numerous repeat surgeries for veterans. Similarly, the documentation shows that another provider’s opportunity to improve had to be halted and the provider was removed from providing care after only a week due to concerns that continuing the review would potentially harm patients. In addition to these nine providers, one VA medical center terminated the services of four contract providers based on deficiencies in the providers’ clinical performance, but the facility did not follow any of the required steps for reporting providers to the NPDB or relevant state licensing boards. This is concerning, given that the VA medical center documented that one of these providers was terminated for cause related to patient abuse after only 2 weeks of work at the facility. 
At the time of our review, two of the five VA medical centers we reviewed each reported one provider to the state licensing boards for failing to meet generally accepted standards of clinical practice to the point that it raised concerns for the safety of veterans. However, we found that the medical centers' reporting to the state licensing boards took over 500 days to complete in both cases, which was significantly longer than the 100 days suggested in VHA policy. Across the five VA medical centers, we found that providers were not reported to the NPDB and state licensing boards as required for two reasons. First, VA medical center officials were generally not familiar with or misinterpreted VHA policies related to NPDB and state licensing board reporting. For example, at one VA medical center, we found that officials failed to report six providers to the NPDB because they were unaware that they were responsible for NPDB reporting. Officials at two other VA medical centers incorrectly told us that VHA cannot report contract providers to the NPDB. Second, VHA policy does not require the VISNs to oversee whether VA medical centers are reporting providers to the NPDB or state licensing boards when warranted. We found, for example, that VISN officials were unaware of situations in which VA medical center officials failed to report providers to the NPDB. As a result of VHA staff misinterpretation of VHA policy and insufficient oversight, we concluded that VHA lacks reasonable assurance that all providers who should be reported to the NPDB and state licensing boards are reported. Consequently, the NPDB and state licensing boards in other states where the providers we identified held licenses were not alerted to concerns about the providers' clinical practice. We reported that this could allow a provider who delivered substandard care at one VA medical center to obtain privileges at another VA medical center or at hospitals outside of VA's health care system. In our November 2017 report, we noted several cases of this occurring among the providers who were not reported to the NPDB or state licensing boards by the five VA medical centers we reviewed. For example, we found that two of the four contract providers whose contracts were terminated for clinical deficiencies remained eligible to provide care to veterans outside of that VA medical center. At the time of our review, one of these providers held privileges at another VA medical center, and another participated in the network of providers that can provide care for veterans in the community. We also found that a provider who was not reported as required to the NPDB during the period we reviewed had their privileges revoked 2 years later by a non-VA hospital in the same city for the same reason the provider was under investigation at the VA medical center. Officials at this VA medical center did not report this provider following a settlement agreement under which the provider agreed to resign. A committee within the VA medical center had recommended that the provider's privileges be revoked prior to the agreement. There was no documentation of the reasons why this provider was not reported to the NPDB. To improve VA medical centers' reporting of providers to the NPDB and state licensing boards and VHA oversight of these processes, we made one recommendation in our November 2017 report. VA concurred with this recommendation. Table 3 summarizes the recommendation and the steps VA has taken to address it.
Chairman Pappas, Ranking Member Bergman, and Members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-7114 (silass@gao.gov). Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Marcia A. Mann (Assistant Director), Kaitlin M. McConnell (Analyst-in-Charge), Summar C. Corley, Cathy Hamann, Jacquelyn Hamilton, and Vikki Porter. Other contributors include David Bruno, Julia DiPonio, Ranya Elias, Kathryn A. Larin, and Joy Myers. Appendix I: Veterans Health Administration Credentialing, Privileging, and Monitoring Processes According to Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) policies, all licensed health care providers must be credentialed before they are permitted to work. Credentialing—the process of screening and evaluating qualifications and other credentials, including licensure, education, and relevant training—is the first step in determining whether the provider has appropriate clinical abilities and qualifications to provide medical services. Credentialing processes and requirements differ for independent licensed providers, such as doctors—who are permitted by law and the facility to deliver patient care services independently, without supervision—and dependent providers, such as nurses—who deliver patient care under the supervision or direction of an independent provider. Additionally, VHA policy states that only licensed independent providers may be granted clinical privileges. Privileging is a process through which a provider is permitted by a facility to independently provide medical or patient care that is in alignment with the provider's clinical competence. Figure 1 provides a summary of the VHA credentialing and privileging processes for independent and dependent providers. VHA facilities are also required to monitor providers' licenses after providers are hired, to ensure the licenses are current and to review any licensure actions, in accordance with VHA policy. Figure 2 provides a summary of VHA's processes for monitoring independent and dependent providers' licenses. Related GAO Reports Veterans Health Administration: Greater Focus on Credentialing Needed to Prevent Disqualified Providers from Delivering Patient Care. GAO-19-6. Washington, D.C.: February 28, 2019. Department of Veterans Affairs: Actions Needed to Address Employee Misconduct Process and Ensure Accountability. GAO-18-137. Washington, D.C.: July 19, 2018. VA Health Care: Improved Oversight Needed for Reviewing and Reporting Providers for Quality and Safety Concerns. GAO-18-260T. Washington, D.C.: November 29, 2017. VA Health Care: Improved Policies and Oversight Needed for Reviewing and Reporting Providers for Quality and Safety Concerns. GAO-18-63. Washington, D.C.: November 15, 2017. Veterans Health Care: Improved Oversight of Community Care Physicians' Credentials Needed. GAO-16-795. Washington, D.C.: September 19, 2016. VA Health Care: Improvements Needed in Processes Used to Address Providers' Actions That Contribute to Adverse Events. GAO-14-55. Washington, D.C.: December 3, 2013.
Veterans Health Care: Veterans Health Administration Processes for Responding to Reported Adverse Events. GAO-12-827R. Washington, D.C.: August 24, 2012. VA Health Care: Improved Oversight and Compliance Needed for Physician Credentialing and Privileging Processes. GAO-10-26. Washington, D.C.: January 6, 2010. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Nearly 165,000 licensed health care providers, such as physicians and nurses, provide care in VHA's VA medical centers and outpatient facilities. Medical center staff must determine whether to hire and retain health care providers by reviewing and verifying information about their qualifications and practice history. The NPDB is a key source of information about a provider's clinical practice history. Medical center staff must also investigate any concerns that arise about the clinical care their providers deliver. Depending on the findings from these reviews, medical centers may take an adverse privileging action against a provider. VA medical centers are required to report providers to the NPDB and state licensing boards under certain circumstances. Failing to adhere to these requirements can negatively affect patient safety. This testimony is primarily based on GAO's 2019 and 2017 reports on VHA processes for reviewing and reporting quality and safety concerns about VA providers. It addresses VA medical centers' implementation and VHA's oversight of (1) reviews of adverse information about providers in the NPDB; (2) reviews of providers' clinical care after concerns are raised; and (3) reporting of providers to the NPDB and state licensing boards. For the 2019 report, GAO reviewed a nongeneralizable sample of 57 VA providers who had an NPDB report. For the 2017 report, GAO reviewed providers whose clinical care was reviewed after a concern was raised about that care at a nongeneralizable selection of five VA medical centers. What GAO Found The Department of Veterans Affairs (VA) needs to take action to ensure its health care providers have the appropriate qualifications and clinical abilities to deliver high-quality, safe care to veterans, as GAO recommended in its February 2019 and November 2017 reports. Specifically, GAO found the following: VA medical centers took action against some providers who did not meet VA licensure requirements, but overlooked others. In its 2019 report, GAO found that some VA medical centers took administrative or disciplinary actions against these providers, such as removing them from employment, after becoming aware of disqualifying information in the National Practitioner Data Bank (NPDB). The NPDB is an electronic repository that contains information on providers who have been disciplined by a state licensing board, among other information. However, in some cases VA medical centers overlooked or were unaware of disqualifying information in the NPDB. For example, officials told GAO they inadvertently overlooked a disqualifying adverse action and hired a provider whose license had been revoked for patient neglect. GAO found three reasons for this inconsistency: lack of mandatory training for key staff, gaps in Veterans Health Administration (VHA) policies, and inadequate oversight. Selected VA medical centers' reviews of providers' clinical care were not always documented. The five selected VA medical centers that GAO included in its 2017 report were required to review 148 providers' clinical care after concerns were raised about their care from October 2013 through March 2017. However, officials at these medical centers could not provide documentation to show that almost half of these reviews had been conducted. GAO found two reasons for inadequate documentation of these reviews: gaps in VHA policies and inadequate oversight of the reviews.
Selected VA medical centers did not report providers to the NPDB or to state licensing boards as required. The five selected VA medical centers that GAO included in its 2017 report had reported only one of the nine providers that they were required to report to the NPDB from October 2013 through March 2017. None of these providers were reported to state licensing boards, as required by VHA policy. These nine providers either had adverse privileging actions taken against them—actions that limit the care providers can deliver at a facility or prevent the providers from delivering care altogether—or resigned or retired while under investigation before such an action could be taken. GAO found two reasons providers were not reported: lack of awareness or understanding of VHA policies and inadequate oversight of this reporting. GAO made 11 recommendations in its 2019 and 2017 reports to address the deficiencies identified. VA implemented two of these 11 recommendations and provided action plans to address the other nine recommendations.
Background Body mass index (BMI) is used as a screening tool for obesity. An individual with a BMI of 30 or higher is considered to have obesity. Over the past two decades, both the prevalence of obesity and estimates of the medical spending associated with individuals with obesity have increased. For example, a 2018 study estimated that the percentage of national medical expenditures used to treat obesity-related illnesses in adults increased from 6.13 percent in 2001 to 7.91 percent in 2015, a 29 percent increase. This study also found that the high medical costs of obesity are due to extremely high medical costs among a small percentage of the population who have severe obesity (those with a BMI of 40 or higher). In addition, a 2017 study found that medical expenditures rise most rapidly for individuals with a BMI of 40 or higher. One option for the treatment of obesity is the use of prescription obesity drugs. As of June 2019, there were nine prescription drugs approved by the Food and Drug Administration (FDA) to treat obesity. Four obesity drugs—benzphetamine, diethylpropion, phendimetrazine, and phentermine—were approved by FDA in 1961 or earlier for short-term use, which is generally about 12 weeks, and are available as generic drugs. The remaining five obesity drugs were approved by FDA in 1999 or later for long-term use and are available as brand-name drugs—bupropion/naltrexone (Contrave), liraglutide (Saxenda), lorcaserin (Belviq), orlistat (Xenical), and phentermine/topiramate (Qsymia). Each of these five brand-name obesity drugs underwent one or more randomized, controlled clinical trials for safety and efficacy prior to FDA approval of the drug—a total of 15 clinical trials across the five drugs. Obesity drugs work in different ways; some may help an individual feel full sooner or less hungry, while others may reduce fat absorption in the body. Results vary by medication and by person, but, according to the National Institutes of Health, on average, people who take obesity drugs as part of a lifestyle program lose between 3 and 9 percent more of their starting body weight than people in a lifestyle program who do not take obesity drugs. As with other prescription drugs, obesity drugs may have side effects such as headache, dizziness, dry mouth, nausea, and diarrhea. And, as with other prescription drugs, health care providers may prescribe an obesity drug for off-label use—that is, for a different medical condition, in a different dosage, or for a different duration than for which the drug is FDA approved. Obesity drugs should be used as an adjunct to lifestyle therapy (e.g., diet, physical activity, and behavioral counseling), according to guidelines from several medical associations. According to these guidelines, the use of obesity drugs is indicated for individuals with a BMI of 27 or higher with one or more obesity comorbidities (such as type 2 diabetes), or individuals with a BMI of 30 or higher who have a history of failure to achieve clinically meaningful weight loss (that is, weight loss of 5 percent or more) or who are unable to sustain weight loss. In addition, the guidelines recommend evaluating the patient's weight loss after about 12 to 16 weeks of treatment with an obesity drug and discontinuing the drug if the patient has not lost a certain amount (e.g., at least 5 percent) of their initial body weight. Although obesity is classified as a disease, some health care providers, including those who specialize in the care of patients with obesity, continue to stigmatize patients with obesity.
For example, a 2018 study reported that health care providers may perceive patients with obesity as being less compliant and having less self-discipline than other patients. Additionally, health care providers may not initiate discussions about weight loss with patients because of lack of time, other important issues or concerns, a belief that a patient is not motivated or interested in losing weight, or concern over a patient's emotional state, according to another 2018 study. The Prevalence of Obesity Was Close to 40 Percent among All U.S. Adults from 2013 through 2016 The prevalence of obesity was about 38 percent among all U.S. adults (about four of every 10 adults) from 2013 through 2016, according to nationally representative estimates from the Centers for Disease Control and Prevention (CDC). The estimate of prevalence among adults covered by Medicare was about 40 percent, and among those with Medicaid or other public health insurance (excluding Medicare) it was about 42 percent. In addition, the prevalence of obesity among adults with private health insurance coverage and among the uninsured was similar, at about 37 percent and 38 percent, respectively. These national estimates also showed that about 24 percent of Medicare beneficiaries had Class 1 obesity, about 10 percent had Class 2 obesity, and about 6 percent had Class 3, or severe, obesity. (See fig. 1.) According to CDC estimates, adults age 18 to 64 and adults age 65 and older had a similar prevalence of obesity, about 39 percent and 38 percent, respectively. However, a higher percentage of adults age 18 to 64 than adults age 65 and older had Class 3 obesity. (See table 1.) Appendix III provides additional information on the prevalence of obesity among adults, as well as on the prevalence of adults who were overweight, which is defined as a BMI of 25 to <30, including 95 percent confidence intervals. Few Adults Used Obesity Drugs and Limited Data Are Available on Individuals Who Have Used These Drugs Relatively few U.S. adults, including adults with obesity and adults who reported trying to lose weight, used obesity drugs from 2012 through 2016, according to nationally representative estimates. Guidelines suggest prescribing obesity drugs as an adjunct to other diet and lifestyle changes, or when other approaches have not resulted in clinically significant weight loss. Those health care providers who prescribe obesity drugs consider several factors, such as whether there are any contraindications of the obesity drug for their patients and the cost of the drug. Some limited data are available on individuals who have used obesity drugs, including data on whether these individuals adhered to taking the prescribed obesity drug or maintained their weight loss over time. Relatively Few Adults Used Obesity Drugs Available data indicate that relatively few U.S. adults, including those with obesity, used obesity drugs. Specifically, of the estimated 233 million U.S. adults, fewer than a million used any of the nine obesity drugs, according to the Agency for Healthcare Research and Quality's (AHRQ) nationally representative estimates from Medical Expenditure Panel Survey (MEPS) data for 2012 through 2016. Of the estimated 71.6 million U.S. adults with obesity, an estimated 660,000 per year, on average, used an obesity drug, according to these data. Similarly, among those who reported trying to lose weight, relatively few of them (about 3 percent) reported that they took prescription medication for weight loss, according to CDC's nationally representative estimates from the National Health and Nutrition Examination Survey (NHANES) for 2013 through 2016.
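The BMI arithmetic behind these classifications and prescribing thresholds is simple enough to state precisely. The following minimal Python sketch is ours, for illustration only—the function names are hypothetical and nothing like it appears in the underlying CDC or AHRQ analyses. It encodes the thresholds used in this report (obesity at a BMI of 30 or higher, overweight at 25 to <30, Class 3 or severe obesity at 40 or higher) and the guideline prescribing rule from the Background; the boundary of 35 between Class 1 and Class 2 follows the standard CDC definition, which this report does not spell out.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2


def bmi_category(value: float) -> str:
    """Classify a BMI using the thresholds discussed in this report.

    The report defines obesity as a BMI of 30 or higher, overweight as 25 to
    <30, and Class 3 (severe) obesity as 40 or higher; the boundary of 35
    between Class 1 and Class 2 is the standard CDC definition (an assumption
    here, since the report does not state it).
    """
    if value >= 40:
        return "Class 3 (severe) obesity"
    if value >= 35:
        return "Class 2 obesity"  # standard boundary; not stated in the report
    if value >= 30:
        return "Class 1 obesity"
    if value >= 25:
        return "overweight"
    return "neither overweight nor obese"


def drug_therapy_indicated(value: float, has_comorbidity: bool) -> bool:
    """Guideline rule described in the Background: BMI of 27 or higher with at
    least one obesity comorbidity, or BMI of 30 or higher (the guidelines'
    additional history-of-failed-weight-loss condition is not modeled here)."""
    return (value >= 27 and has_comorbidity) or value >= 30


# Example: a 100 kg adult who is 1.75 m tall has a BMI of about 32.7, which
# falls in Class 1 obesity and meets the BMI-of-30 prescribing threshold.
b = bmi(100, 1.75)
print(round(b, 1), bmi_category(b), drug_therapy_indicated(b, has_comorbidity=False))
```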
Additionally, six of the studies we reviewed examined this topic and found that few U.S. adults have used obesity drugs. For example, one study reported that in 2011, 2,554 obesity drug prescriptions were filled per 100,000 people, with about 87 percent of those prescriptions for phentermine, a generic obesity drug. Three other studies assessed the use of obesity drugs among veterans receiving care from the Veterans Health Administration and similarly found that few patients were prescribed obesity drugs. One of these studies found that about 1 percent of the 153,939 veterans who enrolled in the MOVE! Weight Management Program from 2013 through 2016 were prescribed an obesity drug (orlistat, phentermine, phentermine/topiramate, liraglutide, or bupropion/naltrexone) within 1 year of MOVE! initiation. Physicians May Have Concerns About Prescribing Obesity Drugs; Those Who Do Prescribe These Drugs Consider Multiple Factors According to officials from groups representing physicians and advocacy groups we interviewed, and seven studies we reviewed, some physicians and other health care providers may not be open to or comfortable with prescribing obesity drugs. For example, providers may not perceive obesity drugs to be safe or effective. According to officials from one advocacy and research group, concerns about the safety of obesity drugs may be related to the adverse consequences associated with past obesity drugs. In addition, one medical association we contacted indicated physicians consider clinical preventive service recommendations from the U.S. Preventive Services Task Force on the use of obesity drugs. The task force recommends that clinicians offer or refer adults with a BMI of 30 or higher to intensive, multicomponent behavioral interventions. Further, a systematic review of evidence of the benefits and harms of behavioral therapy and use of obesity drugs conducted for the task force found that obesity drugs, but not behavior-based interventions, were associated with higher rates of harm. The potential for harm (i.e., adverse events) may discourage physicians and other health care providers from prescribing these drugs. In addition, officials we interviewed and the studies we reviewed noted that a lack of insurance coverage, high out-of-pocket costs, and the patient’s means to afford obesity drugs may also discourage physicians from prescribing obesity drugs. The officials and studies also noted that physicians might have gaps in knowledge about obesity drugs. For example, officials from one medical association noted that lack of education is a barrier to physicians in prescribing obesity drugs for patients who would be candidates for them, and officials from another medical association said that many clinicians are not aware that there are FDA-approved drugs for obesity, and therefore they do not think about prescribing them. One study we reviewed found that, of the 111 primary care providers responding to a survey, most reported limited experience with obesity drugs as a barrier to prescribing them. While guidelines on the use of obesity drugs suggest prescribing obesity drugs as an adjunct to other diet and lifestyle changes, or when other approaches have not resulted in clinically significant weight loss, physicians and other health care providers may not understand the recommendations outlined in the guidelines. 
For example, one study found that many of the health care providers responding to a survey reported responses inconsistent with the guideline-recommended thresholds to initiate and continue use of obesity drugs. Physicians and health care providers who do prescribe obesity drugs take several factors into consideration. Specifically, before prescribing an obesity drug, these providers consider the likely benefits of weight loss, the drug’s possible side effects, the patient’s current health issues and other medications, family medical history, and the cost of the drug, according to the National Institutes of Health. According to officials from an advocacy group, specific considerations include (1) the patient’s other health conditions that may increase the risk from using a particular obesity drug (contraindications); (2) the ability of an obesity drug to treat both the patient’s obesity and other health conditions; (3) the patient’s ability to afford a particular obesity drug, given their insurance coverage and other financial resources; (4) patient preference regarding the dosage and form of the drug; and (5) the average efficacy (weight loss) of an obesity drug. Further, when treating obesity, providers use the least invasive treatments, such as lifestyle-based therapies first, then escalate to obesity drugs if noninvasive treatments prove ineffective, according to officials from the same advocacy group. Some Limited Data Are Available on Individuals Who Have Used Obesity Drugs Some limited data are available on individuals who have used obesity drugs, including data on the distribution of BMI, the use of obesity drugs in conjunction with other items or services, whether these individuals adhered to using the prescribed obesity drug or maintained their weight loss over time, and the impact that using obesity drugs has on other medical services directly related to obesity. The following is a summary of available information on specific aspects of individuals who have used obesity drugs. Distribution of BMI across individuals who have used obesity drugs. CDC’s nationally representative estimates for 2013 through 2016 found that the BMI of adults who reported that they used obesity drugs ranged from 21 to 64, with a median BMI of 34. However, these data are limited because they do not indicate how long the individual used the drugs before their BMI was measured. Use of obesity drugs in conjunction with other items or services. Two studies we reviewed examined the use of obesity drugs in conjunction with other items or services. These studies found that participants who used an obesity drug in conjunction with other services, such as behavioral counseling, lost more weight than those who did not take the drug with the other services. For example, in one 2019 study, participants who received intensive behavioral therapy combined with an obesity drug, liraglutide, had nearly double the weight loss (an average of about 12 percent of their body weight) compared to the participants who received only intensive behavioral therapy (an average of about 6 percent of their body weight). In addition, the 15 clinical trials for the brand-name obesity drugs that we reviewed generally found that a significantly higher percentage of participants who used the obesity drug combined with other items or services (such as a low-calorie diet or increased physical activity) achieved 5 percent or more weight loss compared to participants who used a placebo with the other items or services. 
One clinical trial that used an intensive behavior modification program (28 group sessions) found higher average weight loss (9 percent loss of initial body weight) for participants who used the obesity drug (bupropion/naltrexone) than for the placebo group. This clinical trial also found that the placebo group with the intensive behavior modification had higher weight loss than placebo groups in the other clinical trials, none of which used intensive behavioral therapy. Adherence to using the prescribed obesity drug. FDA's analysis of Sentinel System data of obesity drugs dispensed in 2008 through 2017 found that in the majority of patients using obesity drugs, cumulative treatment duration was 90 days or less. FDA analyzed data for 267,836 new users of obesity drugs and found that about 58 percent of patients who used any of the obesity drugs did so for 90 days or less; about 31 percent used any of the obesity drugs for 30 or fewer days. The average duration for the first use of any of the nine obesity drugs was 69 days. (See appendix V for more data from FDA's analysis.) FDA's findings are consistent with the findings of two of the three studies that we reviewed that measured adherence to using the prescribed obesity drug. These studies reported that use of obesity drugs dropped significantly after 30 days. For example, one 2018 study that reviewed 1 year of data on 26,522 patients who had new prescription drug claims for one of four obesity drugs (liraglutide, lorcaserin, bupropion/naltrexone, and phentermine/topiramate) found that adherence to using any of the four obesity drugs dropped markedly during the first month following the initial claim for the drug. In addition, while the 15 clinical trials we reviewed were not designed to measure adherence to taking obesity drugs, they provide some information on whether or not study participants adhered to using these drugs during the trials. Participant dropout rates for these clinical trials ranged from 14 percent to 66 percent for the obesity drug treatment and the placebo groups, which could indicate difficulty in adherence to the study regimen; however, participants using the placebo generally had higher dropout rates than those using the obesity drug. The reasons for discontinuation among study participants in the clinical trials included side effects, such as headaches and nausea; being unavailable for follow-up; and withdrawal of consent. Maintaining weight loss over time by individuals who have used obesity drugs. The recent systematic review conducted for the U.S. Preventive Services Task Force noted that data on long-term weight loss with obesity drugs are limited. The review found that individuals using obesity drugs were more likely to maintain their weight loss over 12 to 36 months compared with placebo, but noted that the evidence was limited by the small number of trials for each medication, poor follow-up with participants, and limited applicability (given that participants had to meet narrowly defined inclusion criteria), among other limitations. We also identified six studies—each of which reviewed one of the FDA-approved obesity drugs—that examined weight loss maintenance, generally after about 1 year. For example, a 2018 study for one obesity drug (lorcaserin) found that while the obesity drug initially improved upon weight loss achieved with weight loss maintenance counseling, this advantage was not maintained at 1 year.
That is, after 1 year, there was no significant difference in weight loss maintenance between the participants treated with the obesity drug along with counseling and those treated with placebo along with counseling. Another study that examined clinical trial data for one obesity drug (bupropion/naltrexone) concluded that participants who lost at least 5 percent of their body weight after 16 weeks were likely to maintain clinically significant weight loss (of at least 5 percent) after 1 year of treatment with the drug. The impact of using obesity drugs on medical services directly related to obesity. We did not identify any studies on the impact that the use of obesity drugs had on the utilization of medical services directly related to obesity. In terms of studies on the impact on health outcomes, the systematic review conducted for the U.S. Preventive Services Task Force concluded that health outcomes data for individuals receiving treatment with obesity drugs were limited. The review reported that clinical trials of obesity drugs for weight loss examined few outcomes beyond quality of life measures, and that none of the drug-based maintenance trials reported the effects of the obesity drug interventions on health outcomes. The review noted that the trials it included were of highly selected populations with multiple exclusions relevant to health outcomes (e.g., history of serious medical conditions). The review further noted that while it appears that weight loss interventions, including obesity drugs, can reduce diabetes incidence, larger studies with longer-term follow-up are required to understand the full benefits of these interventions on health outcomes and whether those effects are long-lasting. Health Insurance Coverage for Obesity Drugs Is Limited and Varied across Types of Insurance Health insurance coverage for obesity drugs is limited—that is, not all public and private health insurance plans covered obesity drugs, and those that did may have imposed additional requirements, such as a determination that these drugs are medically necessary. Medicare Part D plans may opt to cover obesity drugs, and state Medicaid programs or Medicaid managed care plans within states may choose either to cover obesity drugs or to exclude them from coverage. We found that both Medicare Part D and Medicaid reimbursed for a relatively small number of prescriptions for obesity drugs in 2016 and 2017. For private health insurance—which includes employer-sponsored health insurance, individually purchased health plans, and Federal Employees Health Benefits Program (FEHBP) plans—we found that coverage varied and, when obesity drugs were covered, the coverage could have additional requirements such as prior authorization or determination that a drug is medically necessary for the patient. Medicare. Under Medicare's prescription drug benefit, Medicare Part D plans may choose to cover obesity drugs—in these cases, obesity drugs are considered supplemental drugs under an enhanced alternative coverage plan. Medicare beneficiaries who select a Part D plan that offers supplemental benefits, which may include coverage of excluded drugs such as obesity drugs, must pay the full premium cost for those additional benefits (i.e., Medicare does not subsidize them). Medicare Part D plans can choose whether or not to offer enhanced alternative coverage, and not all Medicare Part D plans that provide enhanced alternative coverage cover obesity drugs as supplemental drugs.
For example: Roughly half of the Medicare beneficiaries covered by one large insurer's Medicare Part D plans in one state have coverage for obesity drugs as a supplemental drug under enhanced alternative coverage, according to officials from that insurer. Officials at another large insurer told us that their Medicare Part D plans have historically covered supplemental drugs based on consumer demand, and obesity drugs do not typically meet their threshold for offering supplemental coverage. The officials noted that their plans have limited funds to cover supplemental drugs and that consumer demand is typically highest for other types of drugs, such as drugs to treat erectile dysfunction. Enhanced Alternative Coverage and Supplemental Drugs under Medicare Enhanced alternative coverage is alternative prescription drug coverage under Medicare Part D with value exceeding that of Medicare Part D's defined standard coverage. Enhanced alternative coverage may include basic prescription coverage and supplemental benefits such as supplemental drugs. Supplemental drugs are drugs—including drugs for weight loss—that would be covered Part D drugs but for the fact that they are specifically excluded as Part D drugs under Medicare Part D's basic prescription drug coverage. Medicare Part D plans may offer these excluded drugs, such as obesity drugs, as a supplemental drug under enhanced alternative coverage. A Medicare Part D plan can choose which drugs it covers as a supplemental drug under enhanced alternative coverage—that is, not all plans cover the same supplemental drugs as part of enhanced alternative coverage. Data from the Centers for Medicare & Medicaid Services (CMS) on Medicare Part D reimbursement for obesity drugs provide some insight on coverage. For example, our analysis found that in 2017, 27 Medicare Part D plans reimbursed for obesity drugs under enhanced alternative coverage for 209 Medicare beneficiaries. (See table 2 for 2016 and 2017 data.) See appendix VI for more information. Medicaid. State Medicaid programs or Medicaid managed care plans within states may choose either to cover obesity drugs or to exclude them from coverage. Our analysis found that in 2017, Medicaid programs or Medicaid managed care plans in 41 states reimbursed pharmacies and other providers for at least one claim for an obesity drug, for a total of 30,800 prescriptions. (See table 3 for 2016 and 2017 data.) Medicaid managed care organizations may provide coverage of obesity drugs not covered by the state plan, according to CMS. See appendix VII for more information. Employer-sponsored and individually purchased health plans. Coverage of the nine obesity drugs varied in employer-sponsored and individually purchased health plans, according to the insurers and pharmacy benefit managers we interviewed. For example: Officials from one large insurer told us that coverage of obesity drugs is included in plans for about 90 percent of their members; only a small percentage of members do not have plans with this coverage. Officials from another large insurer surveyed that insurer's health plans in different geographic locations and found that, of those that responded, four of the six employer-sponsored and three of the six individually purchased health plans covered the nine obesity drugs. They said that many of the plans that covered obesity drugs in their employer-sponsored markets also covered these drugs in their individual market.
Officials at a large pharmacy benefit manager said employers that provide employer-sponsored health insurance can choose to customize their formulary and decide whether to include obesity drugs. They said their select and premium prescription drug formularies include obesity drugs, so companies that decide to offer those formularies would cover obesity drugs, but many companies choose to customize their formularies and may not include obesity drugs. Even if employer-sponsored and individually purchased health plans offer coverage of obesity drugs, these plans often put requirements in place to determine a beneficiary's eligibility for coverage of obesity drugs, according to officials from insurers and pharmacy benefit managers we interviewed. For example, plans may require beneficiaries to obtain prior authorization, require a determination of medical necessity of the drug for the patient, and review the drug's effectiveness prior to making a coverage decision. An official from one large insurer told us the insurer's drug formulary does not include obesity drugs because the clinical evidence indicates that other therapies are more effective for weight loss. However, this official also said that some of the insurer's plans would cover obesity drugs as a nonformulary option if a physician or other health care provider indicates that the obesity drug is medically necessary (e.g., after a patient has tried other treatment options, such as behavioral therapy). Further, if a patient is offered coverage of an obesity drug but fails to receive a clinical benefit within a specified time frame, insurers and pharmacy benefit managers told us the following: A patient and his or her physician may decide together whether the patient should continue or discontinue the obesity drug, and plans often defer to physicians to determine whether an obesity drug is medically necessary for a patient. Some plans may require additional information from a patient's physician every 6 to 12 months for reapproval of coverage of an obesity drug, such as reporting outcomes (e.g., weight loss) while using the drug. Plans could require prior authorization to continue using an obesity drug. An individual may be able to try a different obesity drug covered by the formulary. For the largest employer-sponsored health care program in the United States—FEHBP, managed by the Office of Personnel Management—we found that some FEHBP plans offered by large insurers excluded obesity drugs from coverage. We examined the formularies for 12 plans offered by three large FEHBP insurers and found that the formularies for two plans from one insurer indicated some type of coverage of obesity drugs in 2018. One plan offered coverage for 50 percent of the plan's allowed amount for weight management drugs, and the other plan offered coverage of two obesity drugs as tier 2 drugs, which have higher copayments than tier 1 drugs. For individually purchased health plans offered on health care exchanges, nine of the 34 states with federally facilitated exchanges had at least one plan in the silver tier of coverage that included some type of coverage for obesity drugs in 2018, according to a 2018 study. The study found that covered obesity drugs were generally the older drugs and that the newer drugs tended to be covered with higher copayments or were more likely to require prior authorization than other medications.
Two-Thirds of Obesity Drug Payments Were Made Out of Pocket; Adults Who Used Obesity Drugs Had Higher Average Estimated Medical Spending Out-of-pocket payments from the patient or patient's family made up two-thirds of the amounts paid for obesity drugs, according to nationally representative estimates for 2012 through 2016. These amounts could include insurance copayments and deductible amounts, and payments for obesity drugs not covered by insurance. Private health insurance paid about one quarter of the amount paid for obesity drugs, and Medicare and other public health insurance paid the remainder. Average annual medical spending and prescription drug spending were higher for adults who used any of the nine obesity drugs than for those who did not, according to these estimates. However, the differences in these estimates do not establish any causal relationship between using obesity drugs and having higher average annual medical or prescription drug spending. Two-Thirds of Obesity Drug Payments Were Paid Out of Pocket by Patients; Phentermine Was Most Purchased Out-of-pocket payments made up about two-thirds of total amounts paid for obesity drugs for U.S. adults and private health insurance paid a quarter, according to AHRQ's nationally representative estimates from MEPS data for 2012 through 2016. Medicare, Medicaid, and other public health insurance paid the remainder; however, estimates for each of these sources of payment are imprecise. (See fig. 2.) Consistent with studies on the use of obesity drugs, AHRQ's estimates also found that 80 percent of amounts paid for any of the nine obesity drugs was for one obesity drug, phentermine, which is available as a generic drug. We also examined available spending data from CMS on payments for obesity drugs and found the following: Medicare Part D prescription drug plans spent $19,714 for obesity drugs in 2016 and $140,296 in 2017, according to our analysis of CMS's Prescription Drug Event data. These amounts include Medicare Part D plan reimbursements for any of the nine obesity drugs under enhanced alternative coverage. CMS's data also showed that total beneficiary spending—that is, the total amount Medicare beneficiaries paid out of pocket as copayments or deductibles—for any of these prescriptions totaled $4,048 in 2016 and $5,376 in 2017. See appendix VI for more information. Total Medicaid state and federal spending—that is, reimbursement amounts for the nine obesity drugs—was at least $5,017,424 in 2016 and $7,453,442 in 2017, according to our analysis of available data from CMS's Medicaid State Drug Utilization data. These amounts do not include all Medicaid spending for obesity drugs under Medicaid managed care. For example, if a Medicaid program pays a managed care organization for drugs as part of its capitated payment for all Medicaid services, the organization is not reimbursed on a per-drug basis, and obesity drugs covered by Medicaid in that state would show up as a $0 reimbursement amount in CMS's Medicaid State Drug Utilization data. According to CMS data, Medicaid spending for obesity drugs was the greatest in California in 2016 and 2017. See appendix VII for more information. In addition, when the number of prescriptions dispensed is counted, FDA's estimates from 2017 IQVIA™ data—which are projected nationally from prescriptions dispensed in about 59,900 outpatient retail pharmacies—found that most prescriptions dispensed for obesity drugs were paid for by private insurance.
FDA's analysis found that almost 64 percent of prescriptions dispensed for any of the nine obesity drugs were paid for by private health insurance, and 35 percent of prescriptions dispensed were paid for in cash (i.e., out of pocket) by the patient or the patient's family in 2017. The remaining 1 percent of prescriptions dispensed for obesity drugs were paid for by Medicare Part D and Medicaid at an estimated 0.9 percent and 0.1 percent, respectively. Adults Age 18 to 64 Who Used Obesity Drugs Had Higher Average Medical and Prescription Drug Spending Than Those Who Did Not For all U.S. adults age 18 to 64, the estimated average annual medical and prescription drug spending per adult was higher for those who used an obesity drug than for those who did not use an obesity drug. Specifically, the estimated average annual medical expenditures were $7,575 per adult who used an obesity drug and $4,302 for those who did not, according to AHRQ's nationally representative estimates from MEPS data for 2012 through 2016. Further, the estimated average annual prescription drug expenditures per adult were $2,198 for those who used an obesity drug and $1,111 for those who did not. However, these data do not necessarily indicate that use of obesity drugs leads to higher average annual medical and prescription drug spending. For U.S. adults with obesity, there was not a significant difference between the estimated average annual medical and prescription drug expenditures per adult for those who used an obesity drug and those who did not use an obesity drug. This may be due to the small sample size of 279 adults with obesity who used an obesity drug in the MEPS data. Appendix VIII provides more information on AHRQ's estimated expenditures for obesity drugs and other medical and prescription drug spending. We did not identify any studies other than AHRQ's estimates from MEPS data that specifically addressed the medical spending for adults who used obesity drugs compared to those who did not. Agency Comments We provided a draft of this report to HHS for review and comment. HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. Appendix I: Objectives, Scope, and Methodology The Bipartisan Budget Act of 2018 included a provision for GAO to review the prevalence of obesity and the use of obesity drugs in the Medicare and non-Medicare populations, including spending for and coverage of these drugs. We examined (1) the prevalence of obesity among adults in the United States; (2) what is known about the use of obesity drugs and the individuals who use them; (3) what is known about health insurance coverage of obesity drugs; and (4) what is known about spending on obesity drugs and about medical spending for adults who used obesity drugs compared to those who did not.
To address our reporting objectives, we examined estimates from federal agencies within the Department of Health and Human Services (HHS), including the Centers for Disease Control and Prevention's (CDC) estimates from the National Health and Nutrition Examination Survey (NHANES), the Agency for Healthcare Research and Quality's (AHRQ) estimates from the Medical Expenditure Panel Survey (MEPS), and the Food and Drug Administration's (FDA) estimates from IQVIA and the Sentinel System. We also analyzed Medicare Part D Prescription Drug Event data and Medicaid State Drug Utilization data from the Centers for Medicare & Medicaid Services (CMS). For each data source, we examined the latest available data at the time of our review. In addition, we conducted a literature review; interviewed officials and reviewed documents from stakeholder organizations, federal agencies, insurers, and others; and examined relevant laws and regulations. National Health and Nutrition Examination Survey We examined CDC's nationally representative estimates from NHANES of the prevalence of obesity among U.S. adults and use of obesity drugs. NHANES is a cross-sectional survey designed to monitor the health and nutritional status of the civilian, noninstitutionalized U.S. population. The survey consists of interviews conducted in participants' homes and standardized physical examinations, including measured height and weight, conducted in mobile examination centers. CDC analyzed data from two 2-year cycles of NHANES (2013 through 2014 and 2015 through 2016) for the prevalence of obesity [defined as a body mass index (BMI) of 30 or higher] for all adults by age (18 and older, 18 through 64, and 65 and older), health insurance coverage, and class of obesity. The insurance categories were mutually exclusive: (1) Medicare, which includes all adults who reported having Medicare, regardless of whether they reported having another type of health insurance (e.g., private health insurance) in addition to Medicare; (2) private health insurance (excluding individuals with Medicare); (3) Medicaid/public health insurance (excluding Medicare); and (4) uninsured. We also examined CDC's estimates from NHANES on the prevalence of overweight (defined as a BMI of 25 to <30) among U.S. adults. In addition, we examined CDC's estimates from NHANES for 2013 through 2016 on adults who took prescription medications for weight loss. NHANES asks participants if they tried to lose weight, and, for those who did, if they took diet pills prescribed by a doctor. CDC's estimates included the lower and upper bounds of the 95 percent confidence intervals (the interval that would contain the actual population value for 95 percent of the samples NHANES could have drawn). Medical Expenditure Panel Survey We examined AHRQ's nationally representative estimates from MEPS data on the use of and payment sources for obesity drugs. MEPS collects nationally representative data on health care use, expenditures, sources of payment, and insurance coverage for the U.S. civilian, noninstitutionalized population. For this analysis, AHRQ estimated the distribution of payments for obesity drugs using MEPS pooled data for years 2012 through 2016. We also examined AHRQ's estimates from MEPS of annual expenditures for medical care and all prescription drugs—for those individuals who used obesity drugs and those who did not—and annual expenditures for obesity drugs. AHRQ's estimates included the lower and upper bounds of the 95 percent confidence intervals.
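To make the mutual exclusivity of the NHANES insurance categories concrete, the following is a minimal illustrative sketch of the assignment hierarchy described above: anyone reporting Medicare is counted as Medicare regardless of other reported coverage, then private insurance, then Medicaid or other public insurance, with uninsured as the residual. The function and its inputs are hypothetical stand-ins for NHANES response fields, not CDC's actual code.

```python
def insurance_category(reported_medicare: bool,
                       reported_private: bool,
                       reported_medicaid_or_public: bool) -> str:
    """Assign one mutually exclusive insurance category per the hierarchy
    described above: Medicare takes precedence over any other reported
    coverage; remaining adults are counted as private, then Medicaid/other
    public, then uninsured."""
    if reported_medicare:
        return "Medicare"
    if reported_private:
        return "Private health insurance (excluding Medicare)"
    if reported_medicaid_or_public:
        return "Medicaid/public health insurance (excluding Medicare)"
    return "Uninsured"


# Example: an adult reporting both Medicare and private coverage is counted
# as Medicare, so the four categories never overlap.
print(insurance_category(True, True, False))  # Medicare
```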
We examined FDA’s nationally projected data on the prescriptions dispensed for obesity drugs from outpatient retail pharmacies using 2017 IQVIA™ National Prescription Audit Extended Insights and IQVIA™ Total Patient Tracker. IQVIA™ data are proprietary and cover prescriptions dispensed at approximately 59,900 U.S. outpatient retail pharmacies. FDA analyzed IQVIA data and provided aggregated results for the nationally estimated number of prescriptions dispensed for the nine obesity drugs from U.S. outpatient retail pharmacies, by payment method. These patterns may not apply to other settings of care (e.g., mail-order or specialty pharmacies or clinics). In addition, the analysis captures data when a prescription was dispensed; it does not indicate that the patient took the obesity drug, and it does not indicate if the drug was prescribed off label for something other than weight loss. Sentinel System We examined FDA’s national estimates of prescriptions for obesity drugs dispensed by outpatient pharmacies for new users of obesity drugs (by number of days supplied and by age and gender of patient) from the agency’s Sentinel System. FDA’s Sentinel System uses prescription drug dispensing data from populations with federal or commercial insurance to characterize drug utilization of a large U.S. population with private and public health insurance. FDA examined drug dispensing data from January 1, 2008, through December 31, 2017, from 17 of 18 Sentinel data partners, including Medicare, which contributed fee-for-service enrollee data. FDA analyzed dispensings for 267,836 new users of the nine prescription obesity drugs. FDA estimated the duration of the first treatment episode (in days) for patients’ prescription dispensings for any of the nine obesity drugs using a 14-day episode gap—that is, if there were more than 14 days between exhausting the previous dispensing’s days supplied for that prescription and refilling the prescription, then FDA counted it as a new treatment episode. FDA estimated cumulative treatment duration by summing days’ supply of all dispensings of an obesity drug during a patient’s presence in the database, without regard to time between dispensings (a simplified sketch of both calculations appears below). Medicare Part D Prescription Drug Event Data For information on the number of claims for obesity drugs that were reimbursed, the number of plans that provided reimbursement, and the amount reimbursed for obesity drugs under the Medicare prescription drug program known as Medicare Part D, we analyzed Medicare Prescription Drug Event data from CMS for 2016 and 2017. We analyzed Medicare Part D plan reimbursements (payments to pharmacies) and beneficiary spending (the total amount Medicare beneficiaries paid out of pocket as copayments or deductibles) for the nine obesity drugs for claims that CMS’s data coded as reimbursed as a supplemental drug under enhanced alternative coverage. We excluded 1,787 claims in 2016 and 1,775 claims in 2017 for one obesity drug, orlistat (Xenical), that were listed in CMS’s data as covered under Medicare Part D (and were not coded as a supplemental drug under enhanced alternative coverage). According to CMS officials, orlistat has off-label indications, including diabetes and hyperlipidemia; when orlistat is used for these indications, the drug would be covered under Medicare Part D, and the Medicare Part D plan is responsible for ensuring it is dispensed appropriately per Medicare Part D policy.
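As a concrete illustration of the Sentinel episode logic described above, here is a minimal Python sketch. The record format, field names, and fill dates are hypothetical; FDA's actual implementation may differ.

# Minimal sketch of the 14-day episode gap logic described above.
# Each dispensing is (fill_day, days_supplied), sorted by fill day;
# values and layout are hypothetical, not FDA's implementation.
def first_episode_duration(dispensings, gap=14):
    """Return the duration (in days) of the first treatment episode."""
    start_day = dispensings[0][0]
    supply_end = dispensings[0][0] + dispensings[0][1]
    for fill_day, days_supplied in dispensings[1:]:
        if fill_day - supply_end > gap:   # more than 14 days after supply ran out
            break                         # a new treatment episode begins here
        supply_end = max(supply_end, fill_day + days_supplied)
    return supply_end - start_day

def cumulative_duration(dispensings):
    """Sum days supplied across all dispensings, ignoring gaps."""
    return sum(days_supplied for _, days_supplied in dispensings)

fills = [(0, 30), (32, 30), (120, 30)]   # hypothetical fills; day 120 starts a new episode
print(first_episode_duration(fills))     # 62
print(cumulative_duration(fills))        # 90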
We also excluded 25 claims in 2016 and 26 claims in 2017 for prescription obesity drugs listed as over-the-counter in the prescription drug event data because, according to CMS, these appear to be outliers. Because our analysis was limited to those instances in which a Medicare Part D plan reimbursed for an obesity drug as a supplemental drug under enhanced alternative coverage, the number of Medicare Part D plans that provided coverage for obesity drugs could be higher. For example, some plans may have covered obesity drugs, but none of the beneficiaries enrolled in these plans filled a prescription for such a drug. Medicaid State Drug Utilization Data For information on obesity drugs reimbursed by state Medicaid programs or Medicaid managed care programs within those states, we analyzed CMS’s Medicaid State Drug Utilization data for 2016 and 2017. We analyzed the data to estimate the number of prescriptions reimbursed and total Medicaid state and federal spending—that is, the Medicaid amount reimbursed (state and federal reimbursement, including dispensing fees)—for the nine obesity drugs (a simplified sketch of this kind of claim filtering and aggregation appears below). These amounts do not include all Medicaid spending for obesity drugs because managed care organizations can be paid for the drugs as part of their capitated payment for all Medicaid services; in that case, the organizations are not reimbursed on a per-drug basis, and their payments are not recorded in CMS’s Medicaid State Drug Utilization data. Because our analysis was limited to those instances in which Medicaid reimbursed for an obesity drug, the number of states in which state Medicaid programs or Medicaid managed care plans provided coverage for obesity drugs could be higher. For example, a state could have provided coverage for obesity drugs, but no beneficiaries in that state filled a prescription for an obesity drug. Interviews with Officials in Stakeholder Organizations, Federal Agencies, Insurers, and Others We obtained information and reviewed studies from officials from eight stakeholder organizations (representing medical associations and advocacy groups for obesity research and treatment) on the use of obesity drugs and guidelines for using obesity drugs, and we obtained these officials’ perspectives on what physicians and other health care providers take into consideration when prescribing these drugs, among other things. These stakeholders were selected because of their medical or scientific expertise, relevant publications, or familiarity with the treatment of obesity and obesity drugs. We also reviewed data and documents and interviewed officials from HHS agencies: CDC, FDA, AHRQ, CMS, and the National Institutes of Health. In addition, we reviewed guidance documents and obtained information from the Office of Personnel Management, which administers the Federal Employees Health Benefits program (FEHBP). FEHBP is the largest employer-sponsored health insurance program in the United States, providing health insurance coverage to about 8 million federal employees, retirees, and their dependents in 2016 through contracts with private health insurance plans. We obtained information about the health insurance coverage of obesity drugs from officials from the three largest pharmacy benefit managers, four large insurers, and two organizations knowledgeable about prescription drug benefits for employer-sponsored health plans. We also reviewed drug formularies for selected private health insurance plans, including FEHBP plans, to determine if any of the nine obesity drugs were included.
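Returning to the claims analyses described earlier in this appendix, the Medicare Part D and Medicaid work involved excluding certain claims and aggregating the remainder. A minimal sketch, assuming a simplified claim layout of our own invention rather than the actual CMS file formats:

# Minimal sketch of the claim exclusion and aggregation steps described
# above. The claim layout and values are hypothetical; the actual CMS
# files are far more detailed.
from collections import defaultdict

claims = [
    {"state": "CA", "drug": "phentermine", "reimbursed": 18.50, "otc": False},
    {"state": "CA", "drug": "orlistat",    "reimbursed": 610.00, "otc": False},
    {"state": "NY", "drug": "phentermine", "reimbursed": 12.25, "otc": True},
]

# Exclude claims flagged as over-the-counter (treated as outliers above).
included = [c for c in claims if not c["otc"]]

# Aggregate prescription counts and reimbursement by state and drug.
totals = defaultdict(lambda: {"count": 0, "reimbursed": 0.0})
for c in included:
    key = (c["state"], c["drug"])
    totals[key]["count"] += 1
    totals[key]["reimbursed"] += c["reimbursed"]

for (state, drug), t in sorted(totals.items()):
    print(f'{state} {drug}: {t["count"]} prescriptions, ${t["reimbursed"]:.2f}')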
Literature Review We conducted a literature review of relevant peer-reviewed studies published from January 2012 through January 2019. We identified studies through a search of bibliographic databases, including ProQuest, Scopus, MEDLINE, and International Pharmaceutical Abstracts, using terms such as “obesity,” “weight loss,” and “prescriptions.” Of the 765 citations we identified, we reviewed 220 full studies, which we examined for information related to the use of obesity drugs and individuals who use them, coverage of obesity drugs, and spending for obesity drugs for individuals who used them compared to those who did not. We determined 19 studies were relevant to the use of obesity drugs and 1 study was relevant to coverage of obesity drugs. Our literature review focused on studies with a U.S.-based, adult population (age 18 and older); we excluded studies related to childhood obesity and studies on animals. We also examined available information on the clinical trials conducted prior to FDA approval of the prescription obesity drugs for the U.S. market, including 64 studies from our literature review that summarized one or more of the clinical trials. We also identified 17 additional studies in our literature review that provided relevant background information. Additionally, we reviewed five studies provided by stakeholder organizations (in addition to the studies we had identified in our literature review) that we determined were relevant to our research objectives, as well as guidelines for the use of obesity drugs in obesity treatment. To determine the reliability of the data we used for all four objectives—CDC’s estimates from NHANES, AHRQ’s estimates from MEPS, FDA’s data from IQVIA and the Sentinel System, and CMS’s Medicare Part D Prescription Drug Event data and Medicaid State Drug Utilization data—we reviewed documentation on data collection processes and discussed limitations of the data with the relevant federal agency officials. In addition, we conducted data reliability checks on the data, when appropriate. We determined the data used in this report were sufficiently reliable for our purposes. We conducted this performance audit from April 2018 to August 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: List of FDA-Approved Prescription Obesity Drugs Table 4 provides additional information on the nine prescription drugs approved by the Food and Drug Administration (FDA) to treat obesity that we included in our review. Appendix III: Prevalence of Obesity and Overweight among U.S. Adults This appendix presents national estimates of the prevalence of obesity among U.S. adults age 18 and older, based on the Centers for Disease Control and Prevention’s (CDC) estimates from the National Health and Nutrition Examination Survey (NHANES) for 2013 through 2016. It presents the estimates and the ranges for the 95 percent confidence intervals for prevalence of obesity by age and class of obesity (see table 5), and by insurance coverage and class of obesity (see table 6). It also presents national estimates of the prevalence of overweight (defined as a body mass index of 25 to <30) among U.S.
adults, by age and insurance coverage (see table 7). Appendix IV: List of Selected Studies Reviewed Table 8 is a list of selected studies, categorized by specific topic area, that we reviewed that pertain to our research objectives, including information related to the use of obesity drugs and individuals who use them, physician considerations in prescribing obesity drugs, and health insurance coverage of obesity drugs. We identified these studies either through our literature review of peer-reviewed studies published from January 2012 through January 2019 or from one of the stakeholder organizations we contacted. Appendix V: Estimates of New Adult Users of Obesity Drugs, 2008-2017 This appendix presents estimates of prescriptions dispensed for new adult users of obesity drugs by duration of use and by age and gender, using data from the Food and Drug Administration’s (FDA) Sentinel System from 2008 through 2017. Of the 267,836 new users of obesity drugs included in this analysis, the first treatment episode did not exceed 30 days in about 54 percent of patients and exceeded 90 days in about 22 percent of patients. Cumulatively, about 42 percent of patients who used any of the obesity drugs did so for more than 90 days across treatment episodes. (See table 9.) Overall, most new users of obesity drugs were female (82.2 percent) and under age 65 (91.7 percent). (See table 10.) Phentermine and bupropion/naltrexone (Contrave) were the most commonly used obesity drugs in FDA’s Sentinel System analysis. Appendix VI: Reimbursement for Obesity Drugs in Medicare Part D Enhanced Alternative Coverage, 2016 and 2017 This appendix presents information on Medicare Part D plan reimbursement for obesity drugs under enhanced alternative coverage from our analysis of Centers for Medicare & Medicaid Services’ (CMS) Prescription Drug Event data. Medicare Part D plans can choose whether or not to offer enhanced alternative coverage, and not all Medicare Part D plans that provide enhanced alternative coverage cover obesity drugs as supplemental drugs. As of February 2017, 1,949 Medicare Part D plans provided enhanced alternative coverage to 18.9 million Medicare beneficiaries, according to the Medicare Payment Advisory Commission. Additionally, in 2015, total Medicare Part D spending for prescription drugs was about $137 billion—this represents payments from all payers including beneficiaries (cost sharing), and excluding rebates and discounts from pharmacies and manufacturers that are not reflected in prices at the pharmacies. Tables 11 and 12 show the number of claims reimbursed, the number of plans that provided reimbursement, and the amount reimbursed for obesity drugs under Medicare Part D enhanced alternative coverage for 2016 and 2017, respectively. Appendix VII: Reimbursement for Obesity Drugs in Medicaid, 2016 and 2017 This appendix presents information on Medicaid reimbursements for obesity drugs under state Medicaid programs or Medicaid managed care programs within those states from our analysis of Centers for Medicare & Medicaid Services’ (CMS) Medicaid State Drug Utilization data. State Medicaid programs or Medicaid managed care programs reimbursed for at least one obesity drug prescription in 42 states in 2016 and 41 states in 2017. The amount that Medicaid reimbursed and the total number of prescriptions for obesity drugs reimbursed by Medicaid in 2016 and 2017 are shown by state (tables 13 and 14), and by obesity drug (tables 15 and 16). 
Over half of the prescriptions for obesity drugs reimbursed under Medicaid in 2016 and 2017 were for the generic obesity drug, phentermine. Appendix VIII: Estimates of Medical and Prescription Drug Expenditures for Adults Who Used and Did Not Use Obesity Drugs This appendix presents nationally representative estimates of U.S. adults’ average annual expenditures (spending) for medical care, for all prescription drugs, and for obesity drugs from the Agency for Healthcare Research and Quality (AHRQ) based on data from the Medical Expenditure Panel Survey (MEPS) for 2012 through 2016. Table 17 shows the estimated average annual expenditures for all prescription drugs and table 18 shows the estimated average annual medical expenditures, including prescription drugs, per adult who used and per adult who did not use any obesity drugs. For adults age 18 to 64, the differences in the estimated average annual expenditures for all medical care and for all prescription drugs per adult who used and who did not use any of the nine obesity drugs in our review were statistically significant. However, the differences in these estimates do not indicate that there was a causal relationship between using obesity drugs and having higher average annual medical or prescription drug expenditures. Table 19 shows the estimated average annual expenditures per adult for obesity drugs. Appendix IX: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Kim Yamane (Assistant Director), Lisa A. Lusk (Analyst-in-Charge), George Bogart, Zhi Boon, Kaitlin Dunn, Laurie Pachter, and Merrile Sing made key contributions to this report. Also contributing to this report were Alexander Cattran, Leia Dickerson, Diona Martyn, Christina Ritchie, and Ethiene Salgado-Rodriguez.
Why GAO Did This Study Obesity has been associated with an increased risk of developing conditions such as heart disease, stroke, diabetes, and certain types of cancer. Treatment options for individuals with obesity include lifestyle therapy, such as diet, exercise, and behavioral counseling; obesity drugs; surgery; or a combination of these efforts. The Bipartisan Budget Act of 2018 (P.L. 115-123) included a provision for GAO to review the prevalence of obesity and the use and insurance coverage of obesity drugs. This report examines the prevalence of obesity in the United States, and what is known about the use and health insurance coverage of obesity drugs, among other objectives. GAO examined data from agencies within the Department of Health and Human Services (HHS) on the prevalence of obesity (using estimates for 2013 through 2016) and the use, spending, and coverage of obesity drugs; conducted a literature review of relevant studies published from January 2012 through January 2019 in peer-reviewed and other publications; reviewed drug formularies for selected health plans; and reviewed documents and interviewed officials from federal agencies and stakeholder organizations (including medical associations, advocacy groups, pharmacy benefit managers, and insurers). HHS provided technical comments on a draft of this report, which were incorporated as appropriate. What GAO Found The prevalence of obesity—that is, body weight higher than what is considered a healthy weight for a given height—was about 38 percent among all U.S. adults, according to the latest available national estimates at the time of GAO's analysis. This prevalence was similar for adults with different types of health insurance. Treatment for adults with obesity may include one or more of nine prescription drugs that the Food and Drug Administration has approved for weight management (i.e., obesity drugs), though relatively few adults have used these drugs. Of an estimated 71.6 million U.S. adults with obesity, an estimated 660,000 per year, on average, used an obesity drug from 2012 through 2016, according to national estimates. Among adults who reported trying to lose weight, about 3 percent reported that they took prescription medication for weight loss from 2013 through 2016, according to national estimates. Coverage of obesity drugs varied across different types of health insurance, including Medicare and Medicaid. Plans cited factors such as low consumer demand and strong evidence supporting other treatments in their coverage decisions. GAO's analysis of Centers for Medicare & Medicaid Services' data indicates that some Medicare prescription drug plans and state Medicaid programs reimbursed for some obesity drugs in 2016 and 2017. Coverage for private health insurance plans also varied, and plans may require the patient to obtain prior authorization for the drugs to be covered, according to officials from insurers and pharmacy benefit managers GAO interviewed. For example, officials from one insurer said that some of their plans only cover obesity drugs after a patient has tried other treatment options such as behavioral counseling.
Background As shown in table 1, the cost of counting the nation’s population has been escalating with each decade. The 2010 Census was the most expensive in U.S. history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion 2000 Census (in 2020 dollars). According to the Bureau, the total cost of the 2020 Census was estimated at $12.3 billion in October 2015; by October 2017, that estimate had grown to approximately $15.6 billion, an increase of roughly $3 billion. Additionally, Bureau officials told us that while the estimated cost of the census had increased to $15.6 billion, the Bureau was nevertheless managing the 2020 Census to a lower cost of $14.1 billion. Bureau officials explained that the $14.1 billion includes all program costs and contingency funds to cover risks and general estimating uncertainty. The remaining $1.5 billion estimated cost is additional contingency for “unknown unknowns”—that is, low probability events that could cause massive disruptions—and several what-if scenarios such as an increase in the wage rate or additional supervisors needed to manage field operations. Moreover, as shown in figure 1, the average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 constant dollars). At the same time, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010. Declining mail response rates have led to higher costs because the Bureau sends temporary workers to each non-responding household to obtain census data. Achieving a complete and accurate census has become an increasingly daunting task, in part, because the population is growing larger, more diverse, and more reluctant to participate in the enumeration. In many ways, the Bureau has had to invest substantially more resources each decade to conduct the enumeration. In addition to these external societal challenges, the Bureau also faces a number of internal management challenges that affect its capacity and readiness to conduct a cost-effective enumeration. Some of these issues—such as acquiring and developing IT systems and preparing reliable cost estimates—are long-standing in nature. At the same time, as the Bureau looks toward 2020, it also faces newly emerging and evolving uncertainties. For example, on March 26, 2018, the Secretary of Commerce announced his decision to add a question to the decennial census on citizenship status. On January 15, 2019, the U.S. District Court for the Southern District of New York ruled on one of a number of legal challenges to the Secretary’s decision. That ruling is being appealed, thus leaving the use of the question uncertain. The U.S. Supreme Court is scheduled to begin hearing arguments in April 2019 regarding the addition of the citizenship question to the census form. In our prior work we have noted the risks associated with late changes of any nature to the design of the census if the Bureau is unable to fully test those changes under operational conditions. The Bureau also faced budgetary uncertainties that, according to the Bureau, led to the curtailment of testing in 2017 and 2018. However, the Consolidated Appropriations Act, 2018 appropriated for the Periodic Censuses and Programs account $2.544 billion, which more than doubles the Bureau’s request in the President’s Fiscal Year 2018 Budget of $1.251 billion.
According to the explanatory statement accompanying the act, the appropriation, which is available through fiscal year 2020, is provided to ensure the Bureau has the necessary resources to immediately address any issues discovered during operational testing, and to provide a smoother transition between fiscal year 2018 and fiscal year 2019. The availability of those resources enabled the Bureau to continue preparations for the 2020 Census during the 35 days when appropriations lapsed for the Bureau. Moreover, the Consolidated Appropriations Act, 2019 appropriated for the Periodic Censuses and Programs account $3.551 billion. According to Bureau officials, this level of funding for fiscal year 2019 is sufficient to carry out 2020 Census activities as planned. Importantly, the census is conducted against a backdrop of immutable deadlines. In order to meet legally mandated reporting requirements, census activities need to take place at specific times and in the proper sequence. Thus, it is absolutely critical for the Bureau to stay on schedule. Figure 2 shows some dates for selected decennial events. The Bureau Has Begun Opening Offices and Hiring Temporary Staff The Bureau has begun to open its area census offices (ACO) for the 2020 Census. It has signed leases for all 248 ACOs, of which 39 will be open for the address canvassing operation set to begin in August 2019, in which staff verify the locations of selected housing units. The remaining 209 offices will begin opening this fall. In 2010 the Bureau opened 494 census offices. The Bureau has been able to reduce its infrastructure because it is relying on automation to assign work and to record payroll. Therefore, there is less paper—field assignments, maps, and daily payroll forms—to process manually. For the 2020 Census, the Bureau is refining its recruiting and hiring goals, but tentatively plans to recruit approximately 2.24 million applicants and hire nearly 500,000 temporary field staff from that applicant pool for two key operations: address canvassing and nonresponse follow-up, in which staff visit households that do not return census forms to collect data in person. In 2010 the Bureau recruited 3.8 million applicants and hired 628,000 temporary workers to conduct the address canvassing and nonresponse follow-up field operations. According to Bureau officials, the Bureau has reduced the number of temporary staff it needs to hire because automation has made field operations more efficient and there is less paper. As of April 15, 2019, for its early operations efforts, which include hiring listers for address canvassing, the Bureau has processed approximately 264,000 applicants, which represents 128.4 percent of its 205,000 recruiting goal (see the sketch below). The Bureau is also in the process of hiring approximately 1,500 partnership specialists needed by June 2019 to help increase awareness of and participation in the 2020 Census among minority communities and hard-to-reach populations. As of April 17, 2019, the Bureau has hired 467 partnership specialists, and another 329 applicants are waiting to have their background checks completed. Bureau officials also stated that the current economic environment (i.e., the low unemployment rate compared to the economic environment of the 2010 Census) has not yet impacted their ability to recruit staff. The Bureau will continue to monitor the impact of low unemployment on its ability to recruit and hire at the local and regional levels.
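The percent-of-goal figures above reduce to simple arithmetic. A minimal sketch follows; because the applicant count is rounded as reported, the first result differs slightly from the Bureau's 128.4 percent.

# Percent-of-goal arithmetic behind the recruiting figures cited above.
def percent_of_goal(actual, goal):
    return 100 * actual / goal

# Early-operations recruiting: ~264,000 applicants against a 205,000 goal.
print(f"{percent_of_goal(264_000, 205_000):.1f}% of recruiting goal")  # ~128.8%

# Partnership specialists: 467 hired plus 329 awaiting background checks,
# against roughly 1,500 needed by June 2019.
print(f"{percent_of_goal(467 + 329, 1_500):.1f}% hired or in the pipeline")  # ~53.1%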
The Bureau Plans to Rely Heavily on IT for the 2020 Census For the 2020 Census, the Bureau is significantly changing how it intends to conduct the census, in part by re-engineering key census-taking methods and infrastructure, and making use of new IT applications and systems. For example, the Bureau plans to offer an option for households to respond to the survey via the internet and enable field-based enumerators to use applications on mobile devices to collect survey data from households. To do this, the Bureau plans to use 52 new and legacy IT systems, and the infrastructure supporting them, to conduct the 2020 Census. A majority of these 52 systems have been tested during operational tests in 2017 and 2018. For example, the Bureau conducted its 2018 End-to-End test, which included 44 of the 52 systems and was intended to test all key systems and operations in a census-like environment to ensure readiness for the 2020 Census. Nevertheless, additional IT development and testing work needs to take place before the 2020 Census. Specifically, officials from the Bureau’s Decennial Directorate said they expect that the systems will need to undergo further development and testing due to, among other things, the need to add functionality that was not part of the End-to-End test, scale system performance to support the number of respondents expected during the 2020 Census, and address system defects identified during the 2018 End-to-End test. To prepare the systems and technology for the 2020 Census, the Bureau is also relying on significant contractor support. For example, it is relying on contractors to develop a number of systems and components of the IT infrastructure, including the IT platform that is intended to be used to collect data from households responding via the internet and telephone, and for non-response follow-up activities. Contractors are also deploying the IT and telecommunications hardware in the field offices and providing device-as-a-service capabilities by procuring the mobile devices and cellular service to be used for non-response follow-up. In addition to the development of technology, the Bureau is relying on a technical integration contractor to integrate all of the key systems and infrastructure. The contractor’s work is expected to include, among other things, evaluating the systems and infrastructure and acquiring the infrastructure (e.g., cloud or data center) to meet the Bureau’s scalability and performance needs; integrating all of the systems; and assisting with technical, performance and scalability, and operational testing activities. 2020 Census Identified by GAO as a High-Risk Area In February 2017, we added the 2020 Decennial Census as a high-risk area needing attention from Congress and the executive branch. This was due to significant risks related to, among other things, innovations never used in prior enumerations, the acquisition and development of IT systems, and expected escalating costs. Among other things, we reported that the commitment of top leadership was needed to ensure the Bureau’s management, culture, and business practices align with a cost-effective enumeration. We also stressed that the Bureau needed to rigorously test census-taking activities; ensure that scheduling adheres to best practices; improve its ability to manage, develop, and secure its IT systems; and have better oversight and control over its cost estimation process.
Our experience has shown that the key elements needed to make progress toward being removed from the High-Risk List are top-level attention by the administration and agency leaders grounded in the five criteria for removal, as well as any needed congressional action. The five criteria for removal that we identified in November 2000 are as follows:
Leadership Commitment. The agency has demonstrated strong commitment and top leadership support.
Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s).
Action Plan. A corrective action plan exists that defines the root causes and solutions, and that provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended.
Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.
Demonstrated Progress. The agency has demonstrated progress in implementing corrective measures and in resolving the high-risk area.
These five criteria form a road map for efforts to improve, and ultimately address, high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. As we reported in the March 2019 high-risk report, the Bureau’s efforts to address the risks and challenges for the 2020 Census had fully met one of the five criteria for removal from the High-Risk List—leadership commitment—and partially met the other four, as shown in figure 3. Additional details about the status of the Bureau’s efforts to address this high-risk area are discussed later in this statement. The 2020 Census Remains High Risk Due to Challenges Facing the Enumeration The 2020 Census is on our list of high-risk programs because, among other things, (1) innovations never used in prior enumerations are not expected to be fully tested, (2) the Bureau continues to face challenges in implementing IT systems, (3) the Bureau faces significant cybersecurity risks to its systems and data, and (4) the Bureau’s cost estimate for the 2020 Census was unreliable. If not sufficiently addressed, these risks could adversely impact the cost and quality of the enumeration. Moreover, the risks are compounded by other factors that contribute to the challenge of conducting a successful census, such as the nation’s increasingly diverse population and concerns over personal privacy. Key Risk #1: The Bureau Has Redesigned the Census with the Intent to Control Costs, but Has Scaled Back Critical Tests The basic design of the enumeration—mail out and mail back of the census questionnaire with in-person follow-up for non-respondents—has been in use since 1970. However, a lesson learned from the 2010 Census and earlier enumerations is that this traditional design is no longer capable of cost-effectively counting the population. In response to its own assessments, our recommendations, and studies by other organizations, the Bureau has fundamentally re-examined its approach for conducting the 2020 Census. Specifically, its plan for 2020 includes four broad innovation areas: re-engineering field operations, using administrative records, verifying addresses in-office, and developing an internet self-response option (see table 2).
If they function as planned, the Bureau initially estimated that these innovations could result in savings of over $5 billion (in 2020 constant dollars) when compared to its estimates of the cost for conducting the census with traditional methods. However, in June 2016, we reported that the Bureau’s initial life-cycle cost estimate developed in October 2015 was not reliable and did not adequately account for risk. As discussed earlier in this statement, the Bureau has since updated its $12.3 billion estimate to a life-cycle cost of $15.6 billion, which would result in smaller potential savings from the innovative design than the Bureau originally estimated. According to the Bureau, the cost estimate was increased to ensure that quality was fully addressed. While the planned innovations could help control costs, they also introduce new risks, in part, because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. Our prior work has shown the importance of the Bureau conducting a robust testing program, including the 2018 End-to-End test. Rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. To address some of these challenges, we have made numerous recommendations aimed at improving reengineered field operations, using administrative records, verifying the accuracy of the address list, and securing census responses via the internet. The Bureau has held a series of operational tests since 2012, but, according to the Bureau, it scaled back its most recent field tests because of funding uncertainties. For example, the Bureau canceled the field components of the 2017 Census Test, including non-response follow-up, a key census operation. In November 2016, we reported that the cancellation of the 2017 Census Test was a lost opportunity to test, refine, and integrate operations and systems, and that it put more pressure on the 2018 End-to-End test to demonstrate that enumeration activities will function under census-like conditions as needed for 2020. However, in May 2017, the Bureau scaled back the operational scope of the 2018 End-to-End test; of the three planned test sites, only the Rhode Island site would fully implement the test. The Washington and West Virginia sites would test just one field operation. In addition, due to budgetary concerns, the Bureau decided to remove three coverage measurement operations (and the technology that supports them) from the scope of the test. However, removal of the coverage measurement operations did not affect testing of the delivery of apportionment or redistricting data. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, in part because the 2018 End-to-End test was the last opportunity to demonstrate census technology and procedures across a range of geographic locations, housing types, and demographic groups under decennial-like conditions prior to the 2020 Census. To manage risk to the census, the Bureau has developed hundreds of mitigation and contingency plans.
To maximize readiness for the 2020 Census, it will also be important for the Bureau to prioritize among its mitigation and contingency strategies those that will deliver the most cost-effective outcomes for the census. We reported on the 2018 End-to-End test in December 2018 and noted that the Bureau had made progress addressing prior test implementation issues but still faced challenges. As the Bureau studies the results of its testing to inform the 2020 Census, it will be important that it address key program management issues that arose during implementation of the test. Namely, by not aligning the skills, responsibilities, and information flows for the first-line supervisors during field data collection, the Bureau limited that position’s role in supporting enumerators within the re-engineered field operation. The Bureau also lacked mid-operation training or guidance, which, if implemented in a targeted, localized manner, could have further helped enumerators navigate procedural modifications and any commonly encountered problems when enumerating. It will be important for the Bureau to prioritize its mitigation strategies for these implementation issues so that it can maximize readiness for the 2020 Census. Key Risk #2: The Bureau Faces Challenges in Implementing IT Systems We have previously reported that the Bureau faces challenges in managing and overseeing IT programs, systems, and contractors supporting the 2020 Census. Specifically, we have noted challenges in the Bureau’s efforts to manage, among other things, the schedules and contracts for its systems. As a result of these challenges, the Bureau is at risk of being unable to fully implement the systems necessary to support the 2020 Census and conduct a cost-effective enumeration. The Bureau Has Made Initial Progress against Its Revised Development and Testing Schedule, but Risks Missing Near-term Milestones To help improve its implementation of IT for the 2020 Census, the Bureau recently revised its systems development and testing schedule. Specifically, in October 2018, the Bureau organized the development and testing schedule for its 52 systems into 16 operational deliveries. Each of the 16 operational deliveries has milestone dates for, among other things, development, performance and scalability testing, and system deployment. According to Bureau officials in the Decennial Directorate, the schedule was revised, in part, due to schedule management challenges experienced, and lessons learned, while completing development and testing during the 2018 End-to-End test. The Bureau has made initial progress in executing work against its revised schedule. For example, the Bureau completed development for the systems in the first operational delivery—for 2020 Census early operations preparations—in July 2018, and deployed these systems into production in October 2018. However, our current work has determined that the Bureau is at risk of not meeting several near-term systems testing milestones. As of April 2019, six systems that are expected to be used in a total of two operational deliveries are at risk of not meeting milestone dates that would signal that the systems have completed development and are ready for testing. These six systems are needed for, among other things, field assignment management and worker performance tracking during address canvassing, data collection for operations, business and support automation, and customer support during self-response.
According to Bureau documentation, these systems were at risk due, in part, to the lack of finalized system requirements and specifications. Figure 4 presents an overview of the status for all 16 operational deliveries, as of April 2019. The at-risk systems previously discussed add uncertainty to a highly compressed time frame over the next 4 months. Importantly, between April and August 2019, the Bureau is expected to begin integration testing for the systems in seven operational deliveries, including internet self-response and non-response follow-up. Officials from the Bureau’s integration contractor noted concern that the current schedule leaves little room for any delays in completing the remaining development and testing activities. In addition to managing the compressed testing time frames, the Bureau also has to quickly finalize plans related to its IT infrastructure. For example, in March 2019, the Bureau’s technical integration contractor stated that it needed the Bureau to obtain approval from federal partners for its Trusted Internet Connection or finalize alternative plans in order to complete performance and scalability testing in a timely manner. As of mid-April 2019, the Bureau stated that it was still awaiting final approval. Given that these plans may impact systems being tested this summer or deployed into production for the address canvassing operation in August 2019, it is important that the Bureau quickly address this matter. Our past reporting noted that the Bureau faced significant challenges in managing its schedule for system development and testing that occurred in 2017 and 2018. We reported that while the Bureau had continued to make progress in developing and testing IT systems for the 2020 Census, it had experienced delays in developing systems to support the 2018 End-to-End test. These delays compressed the time available for system and integration testing and for security assessments. In addition, several systems experienced problems during the test. We noted then, and reaffirm now, that continued schedule management challenges may compress the time available for the remaining system and integration testing and increase the risk that systems may not function or be as secure as intended. The Bureau has acknowledged that it faces risks to the implementation of its systems and technology. As of March 2019, the Bureau had identified about 330 active risks for the 2020 Census program through its risk management process, including 20 high risks that may have substantial technical and schedule impacts if realized. Taken together, these risks represent a cross-section of issues, such as the effects of late changes to technical requirements, the need to ensure adequate time for system development and performance and scalability testing, contracting issues, privacy risks, and skilled staffing shortages. Going forward, it will be important that the Bureau effectively manage these risks to better ensure that it meets near-term milestones for system development and testing, and is ready for the major operations of the 2020 Census. Key Risk #3: The Bureau Faces Significant Cybersecurity Risks to Its Systems and Data The risks to IT systems supporting the federal government and its functions, including conducting the 2020 Census, are increasing as security threats continue to evolve and become more sophisticated.
These risks include insider threats from witting or unwitting employees, escalating and emerging threats from around the globe, and the emergence of new and more destructive attacks. Underscoring the importance of this issue, we have designated information security as a government-wide high-risk area since 1997 and, in our most recent biennial report to Congress, ensuring the cybersecurity of the nation was one of nine high-risk areas that we reported as needing especially focused executive and congressional attention. Our prior and ongoing work has identified significant challenges that the Bureau faces in securing systems and data for the 2020 Census. Specifically, the Bureau has faced challenges related to completing security assessments, addressing security weaknesses, resolving cybersecurity recommendations from the Department of Homeland Security (DHS), and addressing numerous other cybersecurity concerns (such as phishing). The Bureau Has Made Progress in Completing Security Assessments, but Critical Work Remains Federal law specifies requirements for protecting federal information and information systems, such as those systems to be used in the 2020 Census. Specifically, the Federal Information Security Management Act of 2002 and the Federal Information Security Modernization Act of 2014 (FISMA) require executive branch agencies to develop, document, and implement an agency-wide program to provide security for the information and information systems that support operations and assets of the agency. In accordance with FISMA, National Institute of Standards and Technology (NIST) guidance, and Office of Management and Budget (OMB) guidance, the Bureau’s Office of the Chief Information Officer (CIO) established a risk management framework. This framework requires system developers to ensure that each of the Bureau’s systems undergoes a full security assessment, and that system developers remediate critical deficiencies. According to the Bureau’s risk management framework, the systems expected to be used to conduct the 2020 Census will need to have complete security documentation (such as system security plans) and an approved authorization to operate prior to their use. Currently, according to the Bureau’s Office of the CIO:
Fourteen of the 52 systems have authorization to operate, and will not need to be reauthorized before they are used in the 2020 Census.
Thirty-two of the 52 systems have authorization to operate, and may need to be reauthorized before they are used in the 2020 Census.
Six of the 52 systems do not have authorization to operate, and will need to be authorized before they are used in the 2020 Census.
Figure 5 summarizes the authorization to operate status for the systems being used in the 2020 Census, as reported by the Bureau in April 2019. As we have previously reported, while large-scale technological changes (such as internet self-response) increase the likelihood of efficiency and effectiveness gains, they also introduce many cybersecurity challenges. The 2020 Census also involves collecting personally identifiable information (PII) on over a hundred million households across the country, which further increases the need to properly secure these systems. Thus, it will be important that the Bureau provide adequate time to perform these security assessments, complete them in a timely manner, and ensure that risks are at an acceptable level before the systems are deployed.
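The authorization tallies above can be expressed as a simple status summary. The sketch below uses hypothetical status labels of our own; it simply reproduces the 14/32/6 breakdown reported by the Bureau's Office of the CIO.

# Sketch of tallying authorization-to-operate (ATO) status across systems.
# Status labels are hypothetical stand-ins for the Bureau's 52 systems.
from collections import Counter

systems = (
    ["ato_no_reauth_needed"] * 14 +
    ["ato_may_need_reauth"] * 32 +
    ["no_ato"] * 6
)

tally = Counter(systems)
print(tally)  # Counter({'ato_may_need_reauth': 32, 'ato_no_reauth_needed': 14, 'no_ato': 6})

needs_work = tally["ato_may_need_reauth"] + tally["no_ato"]
print(f"{needs_work} of {sum(tally.values())} systems may need (re)authorization")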
We have ongoing work examining how the Bureau plans to address both internal and external cyber threats, including its efforts to complete system security assessments and resolve identified weaknesses. The Bureau Has Identified a Significant Number of Corrective Actions to Address Security Weaknesses, but Has Not Always Been Timely in Completing Them FISMA requires that agency-wide information security programs include a process for planning, implementing, evaluating, and documenting remedial actions (i.e., corrective actions) to address any deficiencies in the information security policies, procedures, and practices of the agency. Agencies must establish procedures to reasonably ensure that all information security control weaknesses, regardless of how or by whom they are identified, are addressed through the agency’s remediation processes. For each identified control weakness, the agency is required to develop and implement a plan of actions and milestones (POA&M) based on findings from security control assessments, security impact analyses, continuous monitoring of activities, audit reports, and other sources. Additionally, the Bureau’s framework requires that security assessment findings needing remediation be tracked as POA&Ms. These POA&Ms are expected to provide a description of the vulnerabilities identified during the security assessment that resulted from a control weakness. As of March 2019, the Bureau had over 500 open POA&Ms for issues identified during security assessment activities, including ongoing continuous monitoring. Of these open POA&Ms, 247 (or about 48 percent) were considered “high-risk” or “very high-risk.” While the Bureau established POA&Ms for addressing these identified security control weaknesses, it did not always complete remedial actions in accordance with its established deadlines. For example, of the 247 open “high-risk” or “very high-risk” POA&Ms we reviewed through March 2019, the Bureau identified 115 as being delayed. Further, 70 of the 115 had missed their scheduled completion dates by 60 or more days. In addition, the number of open “high-risk” or “very high-risk” POA&Ms that the Bureau identified as delayed has substantially increased since June 2018, as shown in figure 6. According to the Bureau, these POA&Ms were identified as delayed due to technical challenges or resource constraints in remediating and closing them. However, without resolving identified vulnerabilities in a timely manner, the Bureau faces an increased risk, as continuing opportunities exist for unauthorized individuals to exploit these weaknesses and gain access to sensitive information and systems. The Bureau Has Begun Implementing DHS’s Cybersecurity Recommendations, but Has Not Established a Formal Process to Address Them The Bureau is working with federal and industry partners, including DHS, to support the 2020 Census cybersecurity efforts. Specifically, the Bureau is working with DHS to ensure a scalable and secure network connection for 2020 Census respondents (e.g., virtual Trusted Internet Connection with the cloud), to improve its cybersecurity posture (e.g., improve risk management processes and procedures), and to strengthen its response to potential cyber threats (e.g., federal cyber incident coordination). Federal law describes practices for strengthening cybersecurity by documenting or tracking corrective actions.
As previously mentioned, FISMA requires executive branch agencies to establish a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in their information security policies, procedures, and practices. GAO’s internal control standards also state that agencies should establish effective internal control monitoring that includes a process to promptly resolve the findings of audits and other reviews. Specifically, agencies should document and complete corrective actions to remediate identified deficiencies on a timely basis. This would include correcting identified deficiencies or demonstrating that the findings and recommendations do not warrant agency action. Since January 2017, DHS has been providing cybersecurity assistance (including issuing recommendations) to the Bureau in preparation for the 2020 Census, and the Bureau has reported making progress in addressing those recommendations. Specifically, DHS has been providing cybersecurity assistance to the Bureau in five areas:
management coordination and executive support;
cybersecurity threat intelligence and information sharing enhancement through, among other things, a DHS cyber threat briefing to the Bureau’s leadership;
network and infrastructure security and resilience, including National Cybersecurity Protection System (also called EINSTEIN) support;
incident response and management readiness through a Federal Incident Response Evaluation assessment; and
risk management and vulnerability assessments on specific targets provided by the Bureau.
In the last 2 years, as a result of these activities, DHS has provided 17 recommendations for the Bureau to strengthen its cybersecurity efforts. Among other things, the recommendations pertained to strengthening incident management capabilities, penetration testing and web application assessments of select systems, and phishing assessments to gain access to sensitive PII. Due to the sensitive nature of the recommendations, we are not identifying the specific recommendations or specific findings associated with them in this statement. As of February 2019, the Bureau had fully completed actions to address three recommendations, needed to further improve on actions taken for one recommendation it indicated had been completed, and needed to complete actions in progress for the remaining 13 recommendations (as summarized in table 3). However, the Bureau had not established a formal process for documenting, tracking, and completing corrective actions for all the recommendations provided by DHS. To the Bureau’s credit, it had incorporated the corrective actions associated with the three completed recommendations into its formal process used for tracking POA&Ms, which includes identifying remediation activities, resources required, milestones, and completion dates. The Bureau did not incorporate the remaining 14 recommendations into the POA&M process. Instead, in November 2018, the Bureau created an informal document to track the 17 DHS recommendations, but this document does not consistently include details such as the resources required, expected completion date, or whether the recommendations do not warrant agency action.
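A formal tracking process of the kind FISMA's POA&M practice contemplates would capture, for each finding, fields such as the remediation steps, resources required, and scheduled completion date, and would surface delayed items (such as the POA&Ms discussed above that missed their dates by 60 or more days). A minimal sketch with hypothetical records:

# Minimal sketch of a formal corrective-action (POA&M-style) tracker with
# the fields discussed above. Records and dates are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    finding: str
    resources_required: str
    scheduled_completion: date
    completed: bool = False

    def days_overdue(self, today: date) -> int:
        if self.completed or today <= self.scheduled_completion:
            return 0
        return (today - self.scheduled_completion).days

actions = [
    CorrectiveAction("Strengthen incident management", "2 FTEs", date(2019, 1, 15)),
    CorrectiveAction("Remediate web application finding", "contractor", date(2019, 3, 1), completed=True),
]

today = date(2019, 3, 31)
for a in actions:
    if a.days_overdue(today) >= 60:  # flag items delayed by 60 or more days
        print(f"{a.finding}: {a.days_overdue(today)} days overdue")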
Until the Bureau implements a formal process for tracking and implementing appropriate corrective actions to remediate identified cybersecurity weaknesses from DHS, and addresses the identified deficiencies, it faces an increased likelihood that these weaknesses will go uncorrected and may be exploited to cause harm to the agency’s 2020 Census IT systems and gain access to sensitive respondent data. Implementing a formal process would also help to ensure that DHS’s efforts result in improvements to the Bureau’s cybersecurity posture. The Bureau Faces Several Other Cybersecurity Challenges in Implementing the 2020 Census The Bureau faces other significant cybersecurity challenges in addition to those previously discussed. More specifically, we previously reported that the extensive use of IT systems to support the 2020 Census redesign may help increase efficiency, but that this redesign introduces critical cybersecurity challenges. These challenges include those related to the following:
Phishing. We have previously reported that advanced persistent threats may be targeted against social media websites used by the federal government. In addition, attackers may use social media to collect information and launch attacks against federal information systems through social engineering, such as phishing. Phishing is a digital form of social engineering that uses authentic-looking, but fake, emails, websites, or instant messages to get users to download malware, open malicious attachments, or open links that direct them to a website that requests information or executes malicious code. Phishing attacks could target respondents, as well as Bureau employees and contractors. The 2020 Census will be the first one in which respondents will be heavily encouraged to respond via the internet. This will likely increase the risk that cyber criminals will use phishing in an attempt to steal personal information.
Disinformation from social media. We previously reported that one of the Bureau’s key innovations for the 2020 Census is the large-scale implementation of an internet self-response option. The Bureau is encouraging the public to use the internet self-response option through expanded use of social media. However, the public perception of the Bureau’s ability to adequately safeguard the privacy and confidentiality of the 2020 Census internet self-responses could be influenced by disinformation spread through social media. According to the Bureau, if a substantial segment of the public is not convinced that the Bureau can safeguard public response data against data breaches and unauthorized use, then response rates may be lower than projected, leading to an increase in cases for follow-up and subsequent cost increases.
Ensuring that individuals gain only limited and appropriate access to 2020 Census data. The Bureau plans to enable a public-facing website and Bureau-issued mobile devices to collect PII (e.g., name, address, and date of birth) from the nation’s entire population—estimated to be over 300 million. In addition, the Bureau is planning to obtain and store administrative records containing PII from other government agencies to help augment information that enumerators did not collect. The number of reported security incidents involving PII at federal agencies has increased dramatically in recent years.
Because of these challenges, we have recommended, among other things, that federal agencies improve their response to information security incidents and data breaches involving PII, and consistently develop and implement privacy policies and procedures. Accordingly, it will be important for the Bureau to ensure that only respondents and Bureau officials are able to gain access to this information, and enumerators and other employees only have access to the information needed to perform their jobs.
Ensuring adequate control in a cloud environment. The Bureau has decided to use cloud solutions as a key component of the 2020 Census IT infrastructure. We have previously reported that cloud computing has both positive and negative information security implications and, thus, federal agencies should develop service-level agreements with cloud providers. These agreements should specify, among other things, the security performance requirements—including data reliability, preservation, privacy, and access rights—that the service provider is to meet. Without these safeguards, computer systems and networks, as well as the critical operations and key infrastructures they support, may be lost; information—including sensitive personal information—may be compromised; and the agency’s operations could be disrupted.
Ensuring contingency and incident response plans are in place to encompass all of the IT systems to be used to support the 2020 Census. Because of the brief time frame for collecting data during the 2020 Census, it is especially important that systems are available for respondents to ensure a high response rate. Contingency planning and incident response help ensure that, if normal operations are interrupted, network managers will be able to detect, mitigate, and recover from a service disruption while preserving access to vital information. Implementing important security controls, including policies, procedures, and techniques for contingency planning and incident response, helps to ensure the confidentiality, integrity, and availability of information and systems, even during disruptions of service. Without contingency and incident response plans, system availability might be affected, resulting in a lower response rate.
The Bureau’s CIO has acknowledged these cybersecurity challenges and is working to address them, according to Bureau documentation. In addition, we have ongoing work looking at many of these challenges, including the Bureau’s plans to protect PII, use a cloud-based infrastructure, and recover from security incidents and other disasters. Key Risk #4: The Bureau Will Need to Control Any Further Cost Growth and Develop Cost Estimates That Reflect Best Practices Since 2015, the Bureau has made progress in improving its ability to develop a reliable cost estimate. We have reported on the reliability of the $12.3 billion life-cycle cost estimate released in October 2015 and the $15.6 billion revised cost estimate released in October 2017. In 2016 we reported that the October 2015 version of the Bureau’s life-cycle cost estimate for the 2020 Census was not reliable. Specifically, we found that the 2020 Census life-cycle cost estimate partially met two of the characteristics of a reliable cost estimate (comprehensive and accurate) and minimally met the other two (well-documented and credible). We recommended that the Bureau take specific steps to ensure its cost estimate meets the characteristics of a high-quality estimate.
The Bureau agreed and has taken action to improve the reliability of the cost estimate. In August 2018 we reported that, while improvements had been made, the Bureau's October 2017 cost estimate for the 2020 Census did not fully reflect all the characteristics of a reliable estimate. (See figure 7.) In order for a cost estimate to be deemed reliable, as described in GAO's Cost Estimating and Assessment Guide, and thus to effectively inform 2020 Census annual budgetary figures, the cost estimate must meet or substantially meet the following four characteristics: Well-Documented. Cost estimates are considered valid if they are well-documented to the point they can be easily repeated or updated and can be traced to original sources through auditing, according to best practices. Accurate. Accurate estimates are unbiased and contain few mathematical mistakes. Credible. Credible cost estimates must clearly identify limitations due to uncertainty or bias surrounding the data or assumptions, according to best practices. Comprehensive. To be comprehensive, an estimate should have enough detail to ensure that cost elements are neither omitted nor double-counted, and all cost-influencing assumptions are detailed in the estimate's documentation, among other things, according to best practices. The 2017 cost estimate only partially met the characteristic of being well-documented. In general, some documentation was missing, inconsistent, or difficult to understand. Specifically, we found that source data did not always support the information described in the basis of estimate document or could not be found in the files provided for two of the Bureau's largest field operations: Address Canvassing and Non-Response Follow-Up. We also found that some of the cost elements did not trace clearly to supporting spreadsheets and assumption documents. Failure to document an estimate in enough detail makes it more difficult to replicate calculations, or to detect possible errors in the estimate; reduces transparency of the estimation process; and can undermine the ability to use the information to improve future cost estimates or even to reconcile the estimate with another independent cost estimate. The Bureau told us it would continue to make improvements to ensure the estimate is well-documented. Increased Costs Are Driven by an Assumed Decrease in Self-Response Rates and Increases in Contingency Funds and IT Cost Categories The 2017 life-cycle cost estimate includes significantly higher costs than those included in the 2015 estimate. The largest increases occurred in the Response, Managerial Contingency, and Census/Survey Engineering categories. For example, increased costs of $1.3 billion in the response category (costs related to collecting, maintaining, and processing survey response data) were in part due to reduced assumptions for self-response rates, leading to increases in the amount of data collected in the field, which is more costly to the Bureau. Contingency allocations increased overall from $1.35 billion in 2015 to $2.6 billion in 2017, as the Bureau gained a greater understanding of risks facing the 2020 Census. Increases of $838 million in the Census/Survey Engineering category were due mainly to the cost of an IT contract for integrating decennial survey systems that was not included in the 2015 cost estimate. Bureau officials attribute a decrease of $551 million in estimated costs for Program Management to changes in the categorization of costs associated with risks.
Specifically, in the 2017 version of the estimate, estimated costs related to program risks were allocated to their corresponding work breakdown structure (WBS) element. Figure 8 shows the change in cost by WBS category for 2015 and 2017. More generally, factors that contributed to cost fluctuations between the 2015 and 2017 cost estimates include: Changes in assumptions. Among other changes, a decrease in the assumed rate for self-response from 63.5 percent in 2015 to 60.5 percent in 2017 increased the cost of collecting responses from nonresponding housing units. Improved ability to anticipate and quantify risk. In general, contingency allocations designed to address the effects of potential risks increased overall from $1.3 billion in 2015 to $2.6 billion in 2017. An overall increase in IT costs. IT cost increases, totaling $1.59 billion, represented almost 50 percent of the total cost increase from 2015 to 2017. More defined contract requirements. Bureau documents described an overall improvement in the Bureau's ability to define and specify contract requirements. This resulted in updated estimates for several contracts, including for the Census Questionnaire Assistance contract. However, while the Bureau has been able to better quantify risk, in August 2018 we also reported that the Secretary of Commerce included a contingency amount of about $1.2 billion in the 2017 cost estimate to account for what the Bureau refers to as "unknown unknowns." According to Bureau documentation, these include such risks as natural disasters or cyber attacks. The Bureau provides a description of how the risk contingency for "unknown unknowns" is calculated; however, this description does not clearly link calculated amounts to the risks themselves. Thus, only $14.4 billion of the Bureau's $15.6 billion cost estimate is justified. According to Bureau officials, the cost estimate remains at $15.6 billion, but they are managing the 2020 Census at a lower level of funding—$14.1 billion—and, at this time, do not plan to request funding for the $1.2 billion contingency fund for unknown unknowns or $369 million in funding for selected discrete program risks for what-if scenarios such as an increase in the wage rate or additional supervisors needed to manage field operations. Instead of requesting funding for these contingencies upfront, the Bureau plans to work with OMB and Commerce to request additional funds, if the need arises. According to Bureau officials, they anticipate that the remaining $1.1 billion in contingency funding included in the $14.1 billion will be sufficient to carry out the 2020 Census. In June 2016 we recommended the Bureau improve control over how risk and uncertainty are accounted for. This prior recommendation remains valid, given that the life-cycle cost estimate still includes the $1.2 billion unjustified contingency fund for "unknown unknowns." Moreover, given the cost growth between 2015 and 2017, it will be important for the Bureau to monitor costs in real time, as well as document, explain, and review variances between planned and actual costs. In August 2018 we reported that the Bureau had not been tracking variances between estimated life-cycle costs and actual expenses. Tools to track variance enable management to measure progress against planned outcomes and will help inform the 2030 Census cost estimate. Bureau officials stated that they already have systems in place that can be adapted for tracking estimated and actual costs.
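To make concrete the kind of variance tracking discussed above, the minimal sketch below compares estimated and actual costs by WBS element and flags large differences. It is an illustration only: the function, threshold, category names, and dollar figures are our own assumptions, not the Bureau's actual tracking system or data.

```python
# Minimal sketch of cost-variance tracking by work breakdown structure (WBS)
# element. Categories and dollar figures (in $ millions) are hypothetical.

def variance_report(estimated, actual, threshold_pct=10.0):
    """Compare estimated and actual costs by WBS element, flagging
    variances that exceed the given percentage threshold."""
    report = []
    for element, est in estimated.items():
        act = actual.get(element, 0.0)
        variance = act - est
        pct = (variance / est * 100.0) if est else float("inf")
        report.append({
            "element": element,
            "estimated": est,
            "actual": act,
            "pct": pct,
            "flagged": abs(pct) >= threshold_pct,
        })
    return report

# Hypothetical figures loosely patterned on the WBS categories discussed above.
estimated = {"Response": 5200.0, "Census/Survey Engineering": 1700.0}
actual = {"Response": 5900.0, "Census/Survey Engineering": 1750.0}

for row in variance_report(estimated, actual):
    flag = "FLAG" if row["flagged"] else "ok"
    print(f'{row["element"]}: est ${row["estimated"]:.0f}M, '
          f'actual ${row["actual"]:.0f}M, variance {row["pct"]:+.1f}% [{flag}]')
```

A tool along these lines would let managers document and explain variances as they arise, rather than reconstructing them after the fact.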
We will continue to monitor the status of the tracking system. According to Bureau officials, the Bureau plans to release an updated version of the 2020 Census life-cycle estimate in the spring of 2019. To ensure that future updates to the life-cycle cost estimate reflect best practices, it will be important for the Bureau to implement our recommendation related to the cost estimate. Continued Management Attention Needed to Keep Preparations on Track and Help Ensure a Cost-Effective Enumeration 2020 Challenges Are Symptomatic of Deeper Long-Term Organizational Issues The difficulties facing the Bureau's preparation for the decennial census in such areas as planning and testing; managing and overseeing IT programs, systems, and contractors supporting the enumeration; developing reliable cost estimates; prioritizing decisions; and managing schedules are symptomatic of deeper organizational issues. Following the 2010 Census, a key lesson learned for 2020 that we identified was ensuring that the Bureau's organizational culture and structure, as well as its approach to strategic planning, human capital management, internal collaboration, knowledge sharing, capital decision-making, risk and change management, and other internal functions are aligned toward delivering more cost-effective outcomes. The Bureau has made improvements over the last decade, and continued progress will depend in part on sustaining efforts to strengthen risk management activities, enhancing systems testing, bringing in experienced personnel to key positions, implementing our recommendations, and meeting regularly with officials from its parent agency, Commerce. Going forward, we have reported that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency officials to (1) leadership commitment, (2) ensuring capacity, (3) developing a corrective action plan, (4) regular monitoring, and (5) demonstrated progress. Although important steps have been taken in at least some of these areas, overall, far more work is needed. We discuss three of the five areas below. The Secretary of Commerce has successfully demonstrated leadership commitment. For example, the Bureau and Commerce have strengthened this area with executive-level oversight of the 2020 Census by holding regular meetings on the status of IT systems and other risk areas. In addition, in 2017 Commerce designated a team to assist senior Bureau management with cost estimation challenges. Moreover, on January 2, 2019, a new Director of the Census Bureau took office, filling a position that had been vacant since June 2017. With regard to capacity, the Bureau improved the cost estimation process for the decennial census when it established guidance that includes: roles and responsibilities for oversight and approval of cost estimation processes, procedures requiring a detailed description of the steps taken to produce a high-quality cost estimate, and a process for updating the cost estimate and associated documents over the life of a project. However, the Bureau continues to experience skills gaps in the government program management office overseeing the $886 million contract for integrating the IT systems needed to conduct the 2020 Census. Specifically, as of February 2019, 15 of 44 positions in this office were vacant. For the monitoring element, we found that, to track the performance of decennial census operations, the Bureau relied on reports to track progress against pre-set goals for a test conducted in 2018.
According to the Bureau, these same reports will be used in 2020 to track progress. However, the Bureau’s schedule for developing IT systems during the 2018 End-to-End test experienced delays that compressed the time available for system testing, integration testing, and security assessments. These schedule delays contributed to systems experiencing problems after deployment, as well as cybersecurity challenges. In the months ahead, we will continue to monitor the Bureau’s progress in addressing each of the five elements essential for reducing the risk to a cost-effective enumeration. Further Actions Needed on Our Recommendations Over the past several years we have issued numerous reports that underscored the fact that, if the Bureau was to successfully meet its cost savings goal for the 2020 Census, the agency needed to take significant actions to improve its research, testing, planning, scheduling, cost estimation, system development, and IT security practices. As of April 2019, we have made 97 recommendations related to the 2020 Census. The Bureau has implemented 72 of these recommendations, 24 remain open, and one recommendation was closed as not implemented. Of the 24 open recommendations, 11 were directed at improving the implementation of the innovations for the 2020 Census. Commerce generally agreed with our recommendations and is taking steps to implement them. Moreover, in April 2018 we designated 15 recommendations as “priority.” Priority recommendations are those recommendations that we believe warrant priority attention from heads of key departments and agencies. Eight of these 15 priority recommendations have been closed as implemented over the past year. On July 19, 2018, in response to our April 2018 letter calling his attention to our priority recommendations, the Commerce Secretary concurred that there was still much work to be done, and that the number of our priority recommendations concerning the 2020 Census was reflective of Commerce’s focus on ensuring a successful census in 2020. On April 23, 2019, we sent an updated priority recommendation letter to the Commerce Secretary that included five new recommendations from our recent work and also reflected the department’s progress on implementing past recommendations. We believe that attention to these recommendations is essential for a cost-effective enumeration. The recommendations included implementing reliable cost estimation and scheduling practices in order to establish better control over program costs, as well as taking steps to better position the Bureau to develop an internet response option for the 2020 Census. In addition to our recommendations, to better position the Bureau for a more cost-effective enumeration, on March 18, 2019, we met with OMB, Commerce, and Bureau officials to discuss the Bureau’s progress in reducing the risks facing the census. We also meet regularly with Bureau officials and managers to discuss the progress and status of open recommendations related to the 2020 Census, which has resulted in Bureau actions in recent months leading to closure of some recommendations. We are encouraged by this commitment by Commerce and the Bureau in addressing our recommendations. Implementing our recommendations in a complete and timely manner is important because it could improve the management of the 2020 Census and help to mitigate continued risks. 
Conclusions In conclusion, while the Bureau has made progress in revamping its approach to the census, it faces considerable challenges and uncertainties in implementing key cost-saving innovations and ensuring they function under operational conditions; managing the development and testing of its IT systems; ensuring the cybersecurity of its systems and data; and developing a quality cost estimate for the 2020 Census and preventing further cost increases. For these reasons, the 2020 Census is a GAO high-risk area. Regarding cybersecurity, the Bureau's involvement of DHS to improve its cybersecurity posture, including cyber threat briefings and vulnerability assessments, is a positive step forward. However, the Bureau's corrective actions to address its high-risk and very high-risk security weaknesses are frequently delayed—often for months—which increases the risk that these weaknesses could be exploited to cause harm to the agency's systems. In addition, the Bureau's process for addressing DHS's cybersecurity recommendations has shortcomings, which increases the risk that the underlying deficiencies identified by DHS may be exploited to gain access to the Bureau's systems and sensitive data. Going forward, continued management attention and oversight will be vital for ensuring that risks are managed, preparations stay on track, and the Bureau is held accountable for implementing the enumeration, as planned. Without timely and appropriate actions, the challenges previously discussed could adversely affect the cost, accuracy, schedule, and security of the enumeration. We will continue to assess the Bureau's efforts and look forward to keeping Congress informed of the Bureau's progress. Recommendations for Executive Action We are making the following two recommendations to Commerce: The Secretary of Commerce should direct the Director of the Census Bureau to direct the Census Bureau's CIO to take steps to ensure that identified corrective actions for cybersecurity weaknesses are implemented within prescribed time frames. (Recommendation 1) The Secretary of Commerce should direct the Director of the Census Bureau to direct the Bureau's CIO to implement a formal process for tracking and executing appropriate corrective actions to remediate cybersecurity weaknesses identified by DHS, and expeditiously address the identified deficiencies. (Recommendation 2) Chairman Serrano, Ranking Member Aderholt, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have. GAO Contacts and Staff Acknowledgments If you have any questions about this statement, please contact Robert Goldenkoff at (202) 512-2757 or by email at goldenkoffr@gao.gov or Nick Marinos at (202) 512-9342 or by email at marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this testimony include Jon Ticehurst (Assistant Director); Ty Mitchell (Assistant Director); Lisa Pearson (Assistant Director); Andrea Starosciak (Analyst in Charge); Christopher Businsky; Rebecca Eyler; Scott Pettis; Lindsey Pilver; Kate Sharkey; Kevin R. Smith; Umesh Thakkar; and Tim Wexler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Bureau, a component of the Department of Commerce (Commerce), is responsible for conducting a complete and accurate decennial census of the U.S. population. The decennial census is mandated by the Constitution and provides vital data for the nation. A complete count of the nation's population is an enormous undertaking as the Bureau seeks to control the cost of the census, implement operational innovations, and use new and modified IT systems. In recent years, GAO has identified challenges that raise serious concerns about the Bureau's ability to conduct a cost-effective count. For these reasons, GAO added the 2020 Census to its High-Risk List in February 2017. GAO was asked to testify about the reasons the 2020 Census remains on the High-Risk List and the steps the Bureau needs to take to mitigate risks to a successful census. To do so, GAO summarized its prior work regarding the Bureau's planning efforts for the 2020 Census. GAO also included preliminary observations from its ongoing work examining the IT systems readiness and cybersecurity for the 2020 Census. This information is related to, among other things, the Bureau's progress in developing and testing key systems and the status of cybersecurity risks. What GAO Found The 2020 Decennial Census is on GAO's list of high-risk programs primarily because the Census Bureau (Bureau) (1) is using innovations that are not expected to be fully tested, (2) continues to face challenges in implementing information technology (IT) systems, and (3) faces significant cybersecurity risks to its systems and data. Although the Bureau has taken initial steps to address risk, additional actions are needed as these risks could adversely impact the cost, quality, schedule, and security of the enumeration. Innovations: The Bureau is planning several innovations for the 2020 Census, including allowing the public to respond using the internet. These innovations show promise for controlling costs, but they also introduce new risks, in part, because they have not been used extensively, if at all, in earlier enumerations. As a result, testing is essential to ensure that key IT systems and operations will function as planned. However, citing budgetary uncertainties, the Bureau scaled back operational tests in 2017 and 2018, missing an opportunity to fully demonstrate that the innovations and IT systems will function as intended during the 2020 Census. To manage risk to the census, the Bureau has developed hundreds of mitigation and contingency plans. To maximize readiness for the 2020 Census, it will also be important for the Bureau to prioritize among its mitigation and contingency strategies those that will deliver the most cost-effective outcomes for the census. Implementing IT systems: The Bureau plans to rely heavily on IT for the 2020 Census, including a total of 52 new and legacy IT systems and the infrastructure supporting them. To help improve its implementation of IT, in October 2018, the Bureau revised its systems development and testing schedule to reflect, among other things, lessons learned during its 2018 operational test. However, GAO's ongoing work has determined that the Bureau is at risk of not meeting near-term IT system development and testing schedule milestones for two upcoming 2020 Census operational deliveries, including address canvassing (i.e., verification of the location of selected housing units).
These schedule management challenges may compress the time available for the remaining system development and testing, and increase the risk that systems will not function as intended. It will be important that the Bureau effectively manages IT implementation risk to ensure that it meets near-term milestones for system development and testing, and that it is ready for the major operations of the 2020 Census. Cybersecurity: The Bureau has established a risk management framework that requires it to conduct a full security assessment for each system expected to be used for the 2020 Census and, if deficiencies are identified, to determine the corrective actions needed to remediate those deficiencies. As of March 2019, the Bureau had over 500 corrective actions from its security assessments that needed to be addressed, including nearly 250 that were considered "high-risk" or "very high-risk." However, of these 250 corrective actions, the Bureau identified 115 as being delayed. Further, 70 of the 115 were delayed by 60 or more days. According to the Bureau, these corrective actions were delayed due to technical challenges or resource constraints. Resolving identified vulnerabilities within the Bureau's established time frames can help reduce the risk that unauthorized individuals may exploit weaknesses to gain access to sensitive information and systems. To its credit, the Bureau is also working with the Department of Homeland Security (DHS) to support its 2020 Census cybersecurity efforts. For example, DHS is helping the Bureau ensure a scalable and secure network connection for 2020 Census respondents and strengthen its response to potential cyber threats. During the last 2 years, as a result of these activities, the Bureau has received 17 recommendations from DHS to improve its cybersecurity posture. However, the Bureau lacks a formal process for tracking and completing corrective actions for these recommendations, which would help to ensure that DHS's efforts result in improvements to the Bureau's cybersecurity posture. In addition to addressing risks that could affect innovations and the security of the enumeration, the Bureau has the opportunity to improve its cost estimating process for the 2020 Census, and ultimately the reliability of the estimate itself, by reflecting best practices. In October 2017, the 2020 Census life-cycle cost estimate was updated and is now projected to be $15.6 billion, a more than $3 billion (27 percent) increase over its earlier estimate. GAO reported in August 2018 that although the Bureau had taken steps to improve its cost estimation process for 2020, it needed to implement a system to track and report variances between actual and estimated cost elements. According to Bureau officials, they plan to release an updated version of the 2020 Census life-cycle estimate in the spring of 2019. To ensure that future updates to the life-cycle cost estimate reflect best practices, it will be important for the Bureau to implement GAO's recommendation related to the cost estimate. Over the past decade, GAO has made 97 recommendations specific to the 2020 Census to help address these risks and other concerns. Commerce has generally agreed with these recommendations and has taken action to address many of them. However, as of April 2019, 24 of the recommendations had not been fully implemented. Of the 24 open recommendations, 11 were directed at improving the implementation of the innovations for the 2020 Census.
To ensure a cost-effective enumeration, it will be important for the Bureau to address these recommendations. What GAO Recommends GAO is making two recommendations to Commerce to direct the Bureau to (1) better ensure that cybersecurity weaknesses are addressed within prescribed time frames, and (2) improve its process for addressing cybersecurity weaknesses identified by DHS.
Background A variety of federal laws, regulations, and policies establish requirements and guidance for EPA to follow when appointing members to serve on advisory committees. For example, one purpose of FACA is to ensure that uniform procedures govern the establishment and operation of advisory committees. Also under FACA, an agency establishing an advisory committee must, among other things, require the committee's membership to be balanced in terms of the points of view represented and the functions to be performed by the committee. In addition, federal ethics regulations establish when and how federal officials should review financial disclosure forms to identify and prevent conflicts of interest prohibited by federal law for any prospective committee members required to file these forms in connection with their appointments to advisory committees. GSA has provided additional guidance regarding the implementation of ethics requirements under FACA. Various EPA offices and officials are responsible for helping the agency follow these requirements. For example, EPA's Federal Advisory Committee Management Division—which has overall responsibility for committee management and ensuring that EPA's advisory committees comply with FACA—developed the Federal Advisory Committee Handbook to clarify roles and responsibilities for complying with relevant requirements. The handbook was written primarily for EPA employees assigned as designated federal officers for committees. These officers are responsible for the day-to-day management of advisory committees and play a central role in identifying and recommending candidates who can help the committees meet their goals. EPA employees assigned as designated federal officers also are responsible for maintaining committee records. According to EPA's Federal Advisory Committee Handbook, one of the primary reasons that Congress passed FACA was to ensure public access to the records and documents of advisory committees, and that this fosters greater transparency and accountability of agencies' use of advisory committees. EPA's Ethics Office is responsible for helping the agency follow federal ethics requirements. Housed within the agency's Office of General Counsel in headquarters, the Ethics Office oversees all aspects of the agency's ethics program, including financial disclosure reporting. The Designated Agency Ethics Official coordinates and manages the program. The Designated Agency Ethics Official delegates authority to more than 100 deputy ethics officials located throughout the agency—including in headquarters and regional offices—to carry out most elements of EPA's ethics program. For example, deputy ethics officials are to review financial disclosure reports for prospective committee members to identify and prevent conflicts of interest. Deputy assistant administrators, deputy regional administrators, office directors, and other EPA managers may be appointed to serve as deputy ethics officials for their offices as ancillary duties to their other responsibilities. EPA's Advisory Committees and Committee Members EPA can establish two kinds of advisory committees—non-discretionary and discretionary committees. The agency establishes non-discretionary committees when required to by statute or directed to by the President. For example, the Clean Air Act requires EPA to establish an advisory committee to, among other things, help EPA review standards for national ambient air quality every 5 years.
EPA also can establish discretionary committees at the Administrator's direction if, for example, these committees provide an important and unique perspective on EPA programs or operations. An example of a discretionary committee is the Pesticide Program Dialogue Committee, which was formed to help EPA perform its duties under the Federal Insecticide, Fungicide, and Rodenticide Act and related laws. See appendix II for a list of EPA's 22 advisory committees as of March 31, 2018. EPA must approve the establishment of any subcommittees formed to assist committees with their work. EPA also can appoint different types of members to its advisory committees, depending on the needs of its committees and other considerations. For instance, EPA may appoint a committee member as a federal government employee under an appropriate hiring authority. If EPA expects a federal employee to serve no more than 130 days in any 365-day period, guidance from the U.S. Office of Government Ethics (OGE), which oversees the executive branch's ethics program, states that the employee should be designated as a special government employee (SGE). If EPA decides not to appoint the committee member as a federal employee, that committee member would be a non-employee representative. EPA decides whether to appoint committee members as federal employees. To help federal agencies such as EPA determine whether to designate committee members as SGEs or representatives, OGE has developed guidance on factors to consider when agencies make these determinations. For example, OGE guidance states that SGEs are expected to provide independent expert advice and provide their best judgment free from conflicts of interest. They are generally subject to federal ethics regulations placed on other federal employees—including the requirement to file financial disclosure forms. In addition, OGE guidance states that representatives serve as the voice of groups or entities with a financial or other stake in a particular matter before an advisory committee. Federal ethics regulations generally do not apply to representative members on FACA committees. GSA's FACA Database GSA has certain government-wide responsibilities for implementing FACA, including maintaining the government-wide FACA database that tracks certain characteristics of advisory committees. Specifically, FACA requires GSA to comprehensively review the activities and responsibilities of each advisory committee annually, including the committees for which EPA officials are responsible. In turn, GSA requires federal agencies responsible for advisory committees to enter data about those committees into the database. GSA and the responsible agency (e.g., EPA) review the data on a fiscal year basis for accuracy and completeness. These reviews are typically completed by February or March of the following year. GSA's database is accessible by the general public. It includes data on committee members and committee activities from more than 50 agencies going back to 1997. The information on EPA committees includes: whether a committee member is designated as an SGE or representative; the occupation or affiliation of a committee member; state or other geographic information associated with a committee member's occupation or affiliation; the appointment's start and end date for each committee member; and the dates that committees held meetings.
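To illustrate the member-level information just described, the sketch below models a FACA database record as a simple data structure. This is a hypothetical rendering for illustration only; the class and field names are our own assumptions, not GSA's actual schema.

```python
# Illustrative sketch of the member-level fields GSA's FACA database tracks,
# as described above. Names are assumptions for illustration, not GSA's schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CommitteeMember:
    name: str
    designation: str                 # "SGE" or "representative"
    occupation_or_affiliation: str
    state_or_region: Optional[str]   # geographic info tied to the affiliation
    appointment_start: date
    appointment_end: Optional[date]  # None if the appointment is ongoing

member = CommitteeMember(
    name="Jane Doe",
    designation="SGE",
    occupation_or_affiliation="University professor",
    state_or_region="NC",
    appointment_start=date(2017, 10, 1),
    appointment_end=date(2020, 9, 30),
)
```

Records of this shape are sufficient to compute the characteristics analyzed later in this report, such as committee composition, regional affiliation, and membership turnover.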
EPA’s Established Process for Appointing Members to Serve on Advisory Committees Includes Soliciting Nominations, Evaluating Candidates, and Obtaining Approvals Based on our review of EPA’s Federal Advisory Committee Handbook, the agency’s established process for appointing advisory committee members includes three main phases. These phases are soliciting nominations, evaluating candidates, and obtaining approvals from relevant EPA offices, such as the Federal Advisory Committee Management Division, before the Administrator or Deputy Administrator makes final appointment decisions. As shown in figure 1, each of the three main phases in EPA’s process involves several smaller steps. Unless noted otherwise, explanations of these steps can be found in the handbook, which documents the agency’s established process. Soliciting Nominations Soliciting nominations involves six basic steps, which are carried out by a committee’s designated federal officer. The steps are as follows: Develop selection criteria. This step involves identifying the specific perspectives or points of view that should be represented by members on the committee, such as specific scientific perspectives or understandings of environmental justice. This step applies to both discretionary and non-discretionary committees. In addition, federal laws establish membership requirements for the agency’s non- discretionary committees that designated federal officers must consider when developing selection criteria. For example, the Clean Air Act requires EPA to appoint seven members—including at least one member of the National Academy of Sciences, one physician, and one person representing state air-pollution control agencies—to an independent scientific advisory committee, known as CASAC. The selection criteria developed in this step should be reflected in the notice soliciting nominations. Develop an outreach plan. This plan should: (1) describe in detail how committees intend to solicit a diverse set of nominees and (2) discuss the specific forms of solicitation. For example, one outreach plan we reviewed specified that EPA staff would solicit nominations from the American Academy of Pediatrics, American Chemical Society, and other organizations that can help EPA review the quality, relevance, and performance of its research programs. Develop membership balance plans for discretionary committees. GSA guidance states that membership balance plans for discretionary committees should describe the process used to ensure that committee membership is balanced in terms of the points of view represented and functions to be performed by the committee. For example, one membership balance plan we reviewed stated that EPA staff would consider candidates from farm worker organizations; pesticide industry and trade associations; state, local and tribal governments; and public health and other organizations. According to that membership balance plan, EPA staff also would consider prospective committee members’ geographic location to help achieve balanced membership. Solicit nominations. During this step, the designated federal officer can solicit nominations via Federal Register notices and other means, such as emails to professional associations and specific EPA email distribution lists. In response to these notices, organizations can nominate individuals, or individuals can nominate themselves or other individuals. Contact nominees after receiving nominations. 
During this step, the designated federal officer confirms nominees' qualifications and experience as well as their interest in and availability to serve on the committee. Assess the diversity of the pool of nominees and conduct additional outreach, if needed, to increase the diversity of the pool. EPA's Federal Advisory Committee Handbook provides illustrative examples of how to follow this step. In one example, the handbook explains that a committee needs a representative from local government. For the past several years, the position has been filled by someone from an affluent suburban county. To increase diversity, the handbook recommends that the designated federal officer broaden outreach to other parts of the country, especially local governments that serve low-income, rural, urban, medically underserved, or vulnerable populations. Evaluating Candidates Evaluating candidates similarly involves several steps. The committee's designated federal officer is primarily responsible for taking these steps for his or her assigned committee. In addition, a deputy ethics official is to review financial disclosure forms for any prospective members who are required to file these forms. In general, the steps for evaluating candidates are as follows: Evaluate candidates against selection criteria. During this step, the designated federal officer identifies the specific point of view that each candidate would bring to the committee—as well as each candidate's ability to meet the selection criteria—after interviewing candidates and reviewing their curriculum vitae, publications, and other relevant information. EPA's Federal Advisory Committee Handbook notes that having the best people who represent key interests and balanced viewpoints enables the committee to provide EPA with recommendations that the agency can rely on as collective advice representing diverse stakeholder views. Identifying the best candidates may involve reviewing many more nominees than can be appointed. For example, EPA received approximately 100 nominations for 18 positions on the Science Advisory Committee on Chemicals in fiscal year 2017. Prepare a draft membership grid document with staff-recommended candidates and alternates. After evaluating individual candidates, the handbook directs the designated federal officer to recommend at least one primary and alternate candidate for each point of view and consolidate his or her short-list of recommended candidates into a draft membership grid document. The handbook indicates that this is a key step in the agency's appointment process. It is intended to help designated federal officers identify gaps as they seek to meet FACA requirements for balanced committee membership. The handbook also directs the designated federal officer to submit the draft membership grid to EPA's Federal Advisory Committee Management Division, EPA's Office of General Counsel, and the Assistant Administrator for review and approval before submitting final recommendations to the Administrator. Therefore, the draft membership grid, which documents EPA staff's rationale for recommending specific candidates, is intended to serve as the basis for discussions with EPA management as final decisions about the committee's composition are made, according to EPA's Federal Advisory Committee Handbook.
Recommending at least one alternate for each point of view is intended to provide the EPA Administrator or Deputy Administrator—who officially selects committee members based on staff recommendations—with flexibility in appointing members, according to the handbook. Review financial disclosure forms for conformance with applicable conflict-of-interest statutes, regulations issued by OGE, including any supplemental agency requirements, and other federal ethics rules, which state, among other things, that: SGEs appointed to serve on federal advisory committees generally must file financial disclosure forms within 30 days of assuming their new positions and either before providing advice to the agency or before the first committee meeting if they are eligible to file confidentially. The designated ethics official from each executive branch agency generally is to review financial disclosure reports within 60 days after receiving them and is to certify by signature and date that the filer is in compliance with federal ethics rules, and this official generally may delegate this responsibility. Obtaining Approvals Obtaining approvals involves several steps and numerous EPA officials. The steps for obtaining approvals generally are as follows: EPA's Federal Advisory Committee Management Division reviews the proposed membership for balance. EPA guidance states that designated federal officers are to obtain written concurrence from the division before preparing the final membership package for the Administrator to sign. EPA's Office of General Counsel conducts a legal review of the proposed membership. EPA guidance states that designated federal officers are to obtain written concurrence from the Office of General Counsel prior to appointment. Assistant Administrator or Regional Administrator approves the list of recommended candidates that will be presented to the Administrator's office. Administrator or Deputy Administrator makes final appointment decisions and signs appointment letters. EPA Generally Followed Its Established Process but Did Not Follow a Key Step for Appointing 20 Members to Two Committees or Ensure Certain Members Met Federal Ethics Requirements From fiscal year 2017 through the first two quarters of fiscal year 2018, EPA generally followed its established process for most advisory committees; however, in fiscal year 2018, EPA did not follow a key step in its process for appointing 20 committee members to the Science Advisory Board (SAB) and CASAC. SAB is the agency's largest committee, and CASAC is responsible for, among other things, reviewing national ambient air-quality standards. In addition, when reviewing the step in EPA's appointment process related specifically to financial disclosure reporting, we found that EPA did not consistently ensure that SGEs appointed to advisory committees met federal financial disclosure requirements. EPA Followed Most Steps but Did Not Follow a Key Step As Described in its Established Process for Appointing 20 Members to 2 Advisory Committees Our review of agency documents that supported appointment decisions for the 17 committees that appointed or reappointed committee members from fiscal year 2017 through the first two quarters of fiscal year 2018 found that EPA generally followed its process for most committees. All 14 of the discretionary committees that appointed or reappointed members during this time period developed membership balance plans, as required by GSA's FACA regulations.
In addition, 15 committees followed the step in EPA's appointment process related to draft membership grid documents. That is, 20 of the 22 appointment packets we reviewed had draft membership grid documents reflecting EPA staff input on the best qualified and most appropriate candidates for achieving balanced committee membership. Additionally, 21 of the 22 appointment packets we reviewed contained documentation showing that EPA's Office of General Counsel reviewed the proposed membership prior to appointment, as recommended by EPA's Federal Advisory Committee Handbook. Figure 2 shows EPA's established process and the steps we reviewed. For additional information about the extent to which EPA followed its process for appointing committee members, see appendix III. However, EPA did not follow a key step in its established process for appointing 20 members in fiscal year 2018 to the SAB and CASAC, which advise the agency on environmental regulatory matters, among other things. Specifically, the fiscal year 2018 appointment packets for the SAB and CASAC did not include draft membership grid documents reflecting EPA staff rationales for recommending the candidates EPA's staff deem best qualified and most appropriate for achieving balanced committee membership. EPA officials told us in March 2019 that they did not prepare draft membership grids, as recommended by EPA's Federal Advisory Committee Handbook, because EPA management requested a series of briefings instead. EPA officials also told us that during these briefings, EPA staff presented options for management to consider that reflected staff evaluations and summaries of public comments on candidates. EPA management then decided whom to appoint after reviewing the entire list of personnel nominated for membership—not a short-list of staff-recommended candidates, as called for by EPA's handbook. EPA documents indicate, and officials told us, that EPA followed its established process when appointing committee members to SAB and CASAC during previous appointment cycles. Specifically, documents from SAB's and CASAC's fiscal year 2017 appointment cycles indicate that both committees prepared draft membership grids in fiscal year 2017 in accordance with EPA's established process. In addition, SAB and CASAC staff we interviewed told us that the process they used for filling vacancies prior to the fiscal year 2018 appointments involved vetting candidates before documenting in draft membership grids the candidates they deemed best qualified and most appropriate for achieving balanced committees. EPA officials stated that the briefing process they used in fiscal year 2018 was considered better than the use of draft membership grids, as it allowed EPA management to have in-depth discussions with SAB staff, resulting in better knowledge and a greater understanding of the SAB's and CASAC's membership needs. In written comments on the draft report, EPA stated that the vetting of candidates for SAB and CASAC occurred in a different manner than in previous years, with a process more robust than membership grids. In addition, EPA stated that the public comment process was more robust, going beyond what was prescribed in the traditional membership process. There may be benefits to such discussions and solicitation of input.
However, under EPA’s established process, agency staff are to document in draft membership grids and include in appointment packets their rationales for recommending the candidates they deem best qualified and most appropriate for achieving balanced committees. EPA developed guidance to implement FACA, one purpose of which is to encourage the establishment of uniform committee appointment and administration procedures. In written comments on the draft report, EPA noted that agency staff documented evaluations of advisory committee candidates in briefing documents. However, EPA did not provide these documents along with its comments. Moreover, neither these evaluations nor summaries of public comments were included in the packets that EPA’s Federal Advisory Committee Handbook indicates are to contain committee appointment information, impeding EPA’s ability to ensure that it consistently meets—across all of its advisory committees—FACA’s purpose of encouraging uniform committee appointment procedures. In addition, Federal Standards for Internal Control call for management to design control activities to achieve objectives and respond to risks, such as by clearly documenting all transactions and other significant events in a manner that allows the documentation to be readily available for examination. By directing officials responsible for appointing committee members to follow a key step in EPA’s appointment process—developing draft membership grids to document staff rationales for proposed membership—the agency would also have better assurance that it could show how it made appointment decisions to achieve the best qualified and most appropriate candidates for balanced membership. EPA Did Not Consistently Ensure That Committee Members Met Federal Ethics Requirements When reviewing the steps in EPA’s appointment process related specifically to financial disclosure reporting, we found that from fiscal year 2017 through the first two quarters of fiscal year 2018, EPA did not consistently ensure that 74 SGEs appointed or reappointed to serve on EPA advisory committees met federal financial-disclosure requirements. Of the 74 disclosure forms we reviewed, an ethics official signed and dated that the filer was in compliance with federal ethics rules for 77 percent, or 57 of the forms. However, for about 23 percent, or 17 of the 74 financial disclosure forms we reviewed, an ethics official had not signed and dated that the filer was in compliance with federal ethics rules. In addition, for about 57 percent, or 42 of the 74 forms we reviewed, we were unable to determine whether an ethics official had reviewed the financial disclosure forms within 60 days after they were filed because the forms did not indicate when EPA had received them. Table 1 illustrates the extent to which EPA took steps to ensure compliance with federal financial-disclosure-reporting requirements relevant to SGEs during this time period. In 2017, OGE found similar weaknesses in EPA’s ethics program. For example, when OGE reviewed a sample of EPA advisory committees’ ethics documents from 2015, it found that none of the financial disclosure forms for one committee had been reviewed—or signed and dated—by an ethics official to indicate that filers were in conformance with federal ethics rules. For two other committees, OGE found that EPA had not received in 2015 certain financial-disclosure forms that were due that year. 
We also found that EPA's Ethics Office had not periodically evaluated, through audits or spot-checks, the quality of financial disclosure reviews conducted by its deputy ethics officials for SGEs appointed to advisory committees, as part of the periodic review of its ethics program called for by OGE regulations. An official we interviewed from EPA's Ethics Office told us that the office did not have the staffing levels necessary to audit or spot-check financial disclosure reviews for SGEs. In addition, in a June 2018 correspondence to OGE about OGE's review of EPA's ethics program, EPA's Designated Agency Ethics Official stated that EPA's Ethics Office had fewer than three full-time equivalent positions at times during 2017. The correspondence also stated that the agency's Office of General Counsel is committed to doubling the Ethics Office's staffing levels in the future to increase oversight of its deputy ethics officials. Federal regulations and guidance specify that EPA has certain oversight responsibilities for its programs—including its ethics program. For example, OGE regulations: state that designated agency ethics officials, acting directly or through other officials, are responsible for carrying out effective financial disclosure programs by, among other things, using information in financial disclosure reports to prevent and resolve potential conflicts of interest; specify actions the official must take if the reviewing official concludes that information disclosed in the report may reveal a violation of applicable laws and regulations; and state that designated agency ethics officials are responsible for periodically evaluating their agencies' ethics programs. Standards for Internal Control in the Federal Government also states that management should design control activities to achieve objectives and respond to risks, such as by comparing actual performance to planned or expected results and analyzing significant differences. Because EPA had not periodically evaluated, through audits or spot-checks, the quality of financial disclosure reviews for SGEs appointed to advisory committees, the agency was not well positioned to compare the program's actual performance with planned results or address instances of noncompliance with federal ethics requirements. Until EPA's Ethics Office, as part of its periodic review of its ethics program, evaluates—for example, through audits or spot-checks—the quality of financial disclosure reviews conducted for SGEs appointed to EPA advisory committees, it will not have reasonable assurance that it is addressing noncompliance with federal ethics requirements and preventing conflicts of interest among SGEs appointed to EPA advisory committees. EPA officials acknowledged that taking this additional oversight measure could enhance the agency's ethics program. Selected Characteristics of Four EPA Advisory Committees Changed Notably after January 2017, but There Were No Notable Changes for 14 Committees Of the four characteristics we reviewed—committee composition, regional affiliation, membership turnover, and number of committee meetings—one or more of the first three characteristics changed notably for four of 18 of EPA's advisory committees after January 2017. There were no notable changes in the four characteristics we reviewed for the other 14 committees for which we reviewed at least one of the characteristics.
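As context for the findings that follow, the sketch below implements our reading of the notable-change test used in this analysis: a shift in a characteristic after January 2017 is flagged as notable when it differs from the shift over the comparable period after January 2009 by at least 20 percentage points (the criterion is described in the next section and in appendix I). The function names are our own, and the sketch is illustrative, not GAO's actual analysis code.

```python
# Minimal sketch of the notable-change test described in this section.
# Each argument is a (count, total) pair of committee members with the
# characteristic of interest at a point in time.

def pct(part, whole):
    """Share of members, in percent."""
    return 100.0 * part / whole

def notable_change(before_2017, after_2017, before_2009, after_2009,
                   threshold=20.0):
    """Return True if the 2017-period shift differs from the 2009-period
    shift by at least the threshold, in percentage points."""
    shift_2017 = pct(*after_2017) - pct(*before_2017)
    shift_2009 = pct(*after_2009) - pct(*before_2009)
    return abs(shift_2017 - shift_2009) >= threshold

# SAB academic affiliation, using the figures reported below:
# 36 of 47 members (Jan. 2017) vs. 22 of 44 (Mar. 2018), and
# 33 of 40 (Jan. 2009) vs. 32 of 39 (Mar. 2010).
print(notable_change((36, 47), (22, 44), (33, 40), (32, 39)))  # True
```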
The Committee Composition, Regional Affiliation, or Membership Turnover of Four Committees Changed Notably after January 2017 The committee composition, regional affiliation, or membership turnover of four of EPA's advisory committees changed notably after January 2017 compared to the period after January 2009. There was no notable change in the fourth characteristic we reviewed—that is, the number of meetings committees held. We identified a change as notable if the change in the characteristic after January 2017 differed from the change over the comparable period after January 2009 by at least 20 percentage points. See appendix I for additional information about our methodology. Committee Composition There was a notable decrease in the percentage of members affiliated with academic institutions on the SAB and EPA Board of Scientific Counselors (BOSC) committees after January 2017 compared to the period after January 2009. Our analysis shows that the percentage of committee members with an academic affiliation serving on the SAB decreased by 27 percentage points, or from 77 percent (36 of 47 members) on January 19, 2017, to 50 percent (22 of 44 members) about 15 months later on March 31, 2018. There was little change in the period after January 2009, when the percentage of academic members serving on the SAB remained stable at 83 percent (33 of 40 members) on January 19, 2009, and 82 percent (32 of 39 members) about 15 months later on March 31, 2010. Regarding 2013, academic members serving on the SAB decreased from 82 percent (40 of 49 members) on January 20, 2013, to 73 percent (37 of 51 members) about 15 months later. In addition to academic members, other members serving on the SAB (1) are affiliated with government (federal, local, state, or tribal) or with industry or non-government organizations (NGO); (2) are consultants; or (3) are others we could not assign to one of the above categories. See figure 3. BOSC also experienced a notable decrease in the percentage of members with an academic affiliation serving on the committee after January 2017 compared to the period after January 2009. Our analysis shows that the percentage of committee members with an academic affiliation serving on BOSC decreased by 45 percentage points, or from 65 percent (11 of 17 members) on January 19, 2017, to 20 percent (3 of 15 members) about 15 months later on March 31, 2018. There was little change in the percentage of academic members serving on BOSC after either January 2009 or January 2013. The percentage of members with an academic affiliation serving on BOSC was 55 percent (6 of 11 members) on January 19, 2009, and 56 percent (5 of 9 members) about 15 months later on March 31, 2010. Seven of 12 members were affiliated with academic institutions on January 20, 2013, and 5 of 9 members were similarly affiliated about 15 months later. See table 2. The regional affiliation of SAB committee members also changed notably after January 2017 compared to the period after January 2009. Our analysis shows that members affiliated with the southern region—which spans from Texas to Delaware—increased by about 25 percentage points, or from 28 percent (13 of 47 members) on January 19, 2017, to 52 percent (23 of 44 members) about 15 months later on March 31, 2018. There was little change in the period after January 2009, when the percentage of members affiliated with the southern region increased from 30 percent (12 of 40 members) on January 19, 2009, to 33 percent (13 of 39 members) about 15 months later on March 31, 2010.
Regarding 2013, members affiliated with the southern region decreased from 33 percent (16 of 49 members) on January 20, 2013, to 27 percent (14 of 51 members) about 15 months later. Figure 4 shows the regional affiliation of SAB members using U.S. Census regions after January 2017 and January 2009. There was also a notable change in the number of members who left three committees after January 2017 compared to the number of members who left those committees after January 2009. Our analysis shows that of the members serving on January 19, 2017, 71 percent (12 of 17 members) of BOSC, 62 percent (23 of 37 members) of the Clean Air Act Advisory Committee, and 63 percent (25 of 40 members) of the Pesticide Program Dialogue Committee were no longer serving about 15 months later on March 31, 2018. There was little change in the period after January 2009, when 18 percent (2 of 11 members) of the members of BOSC and 3 percent (one of 35 members) of the members serving on the Clean Air Act Advisory Committee on January 19, 2009, were no longer serving on the committees about 15 months later on March 31, 2010. All of the members serving on the Pesticide Program Dialogue Committee (34 members) on January 19, 2009, were also serving about 15 months later on March 31, 2010. Regarding 2013, 25 percent (3 of 12 members) serving on BOSC on January 20, 2013, were not serving about 15 months later. All members serving on the other two committees on January 20, 2013, were also serving about 15 months later. For Most Advisory Committees We Reviewed, the Characteristics Did Not Change Notably After January 2017 In most instances, the four characteristics that we analyzed—committee composition, regional affiliation, membership turnover, and number of committee meetings held—did not change notably for the committees we reviewed from January 2017 to about 15 months later compared to the same time frame after January 2009. In many of these instances, the characteristics we analyzed had changed, but these changes were not large enough to be considered notable based on the approach we used to identify notable changes. Committee Composition Other than the SAB and BOSC, there were no notable changes after January 2017 in the composition of the five committees for which we analyzed this characteristic. We analyzed the committee composition of the three other committees combined because they did not have enough members to make individual analysis meaningful. Our analysis shows that the largest change after January 2017 that we did not identify as notable also occurred with BOSC. The percentage of members serving on BOSC with a government affiliation increased by 22 percentage points, or from 18 percent (3 of 17 members) on January 19, 2017, to 40 percent (6 of 15 members) about 15 months later on March 31, 2018. This compares to 2009, when the percentage of members serving on BOSC with a government affiliation remained at zero percent on January 19, 2009 (11 members), and about 15 months later on March 31, 2010 (9 members). Regional Affiliation Other than the SAB, there were no notable changes after January 2017 in the regional affiliation of members of the 10 committees for which we analyzed this characteristic. In addition to the SAB, we analyzed the regional affiliation of three other committees individually and the remaining six committees combined. The largest change in regional affiliation after January 2017 that we did not identify as notable also occurred with the SAB.
Members affiliated with the northeast region decreased by about 14 percentage points, or from 28 percent (13 of 47 members) on January 19, 2017, to 14 percent (6 of 44 members) about 15 months later on March 31, 2018. This compares to 2009, when the percentage of members affiliated with the northeast region stayed about the same, changing from 20 percent (8 of 40 members) on January 19, 2009, to 18 percent (7 of 39 members) about 15 months later on March 31, 2010. Membership Turnover Other than BOSC, the Clean Air Act Advisory Committee, and the Pesticide Program Dialogue Committee, there were no notable changes after January 2017 to membership turnover for the 14 committees for which we analyzed this characteristic. In addition to these three committees, we analyzed the membership turnover of six other committees individually and the remaining five committees combined. Our analysis shows that the largest change in membership turnover after January 2017 that we did not identify as notable occurred with the SAB. Of the members serving on this committee on January 19, 2017, 45 percent (21 of 47 members) were no longer serving about 15 months later on March 31, 2018. This compares to 2009, when 35 percent (14 of 40 members) serving on January 19, 2009, were not serving about 15 months later on March 31, 2010. Number of Committee Meetings Held There was no notable change in the percentage decrease of meetings held before and after January 2017 compared to a similar time frame before and after January 2009. We analyzed the number of meetings held by 18 committees. Our analysis shows that for the 18 committees combined, the number of meetings decreased by 40 percent (from 90 to 54 meetings) from the approximately 15-month period before January 2017 to the approximately 15-month period after January 2017. This compares to a 27 percent decrease in meetings (from 164 to 120 meetings) from the approximately 15-month period before January 2009 to the approximately 15-month period after January 2009. Overall, there was a decrease in the number of meetings from before January 2009 to after January 2017. The number of meetings held by the 18 committees combined decreased 67 percent (from 164 to 54 meetings) from the approximately 15-month period before January 2009 to the approximately 15-month period after January 2017. Figure 5 illustrates the decrease in the number of meetings held during this time frame. The figure shows the number of meetings held by SAB separately because of the relatively large number of meetings it held compared with the other committees. Conclusions EPA's federal advisory committees play an important role in advising the agency. EPA generally followed its established process for 15 of the 17 advisory committees that appointed or reappointed committee members during the time period we reviewed. However, EPA did not follow a key step in its process for appointing 20 members to two committees that advise the agency on environmental regulatory matters, among other things. The agency did not prepare draft membership grids with staff rationales for proposed membership, the documents intended to reflect EPA staff input on the best qualified and most appropriate candidates for achieving balanced committee membership, before appointing these members. EPA officials told us in March 2019 that they did not prepare draft membership grids, as recommended by EPA's Federal Advisory Committee Handbook, because EPA management requested a series of briefings instead.
There may be benefits to following different procedures; however, under EPA's established process, agency staff are to document in draft membership grids and include in appointment packets their rationales for recommending the candidates they deem best qualified and most appropriate for achieving balanced committees. By directing officials responsible for appointing committee members to prepare draft membership grids and include them in appointment packets for all committees, the agency would have better assurance that it could show how it made appointment decisions to achieve the best qualified and most appropriate candidates for balanced committee membership. EPA also did not consistently ensure that committee members appointed as SGEs met federal ethics requirements, and as part of its periodic review of its ethics program, EPA did not evaluate, through audits or spot-checks, the quality of financial disclosure reviews conducted by deputy ethics officials for these committee members. Until EPA's Ethics Office periodically evaluates—for example, through audits or spot-checks—the quality of financial disclosure reviews conducted for SGEs appointed to EPA advisory committees, it will not have reasonable assurance that it will address noncompliance with federal ethics requirements and prevent conflicts of interest among SGEs appointed to EPA advisory committees. Recommendations for Executive Action We are making the following two recommendations to EPA: The EPA Administrator should direct EPA officials responsible for appointing advisory committee members to follow a key step in its appointment process—developing and including draft membership grids in appointment packets with staff rationales for proposed membership—for all committees. (Recommendation 1) EPA's Designated Agency Ethics Official should direct EPA's Ethics Office, as part of its periodic review of EPA's ethics program, to evaluate—for example, through audits or spot-checks—the quality of financial disclosure reviews for special government employees appointed to EPA advisory committees. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to EPA for review and comment. In its written comments, reproduced in appendix IV, EPA disagreed with a key finding related to the first recommendation, with how we conducted some of our data analyses, and with some of the data points we presented. EPA agreed with the findings and conclusions related to the second recommendation. EPA also provided other comments, which we incorporated as appropriate. EPA stated that it believed a key finding related to the draft report's first recommendation—that EPA follow, for all committees, the key step in its appointment process related to developing draft membership grids—was in error and should be removed from the final version of the report. EPA also stated that it followed all membership steps outlined in agency guidance with the exception of two committees, SAB and CASAC, which substituted the development of a membership grid with what the agency states was a more rigorous examination of the candidates (a series of briefings with senior management discussing the strengths and weaknesses of potential candidates). EPA stated that this is within the discretion of the EPA Administrator and that the vetting of candidates for SAB and CASAC occurred in a different manner than in previous years, with a process more robust than membership grids.
In addition, EPA stated that the public comment process was more robust, going beyond what was prescribed in the traditional membership process. According to EPA, for SAB and CASAC, the public was offered additional opportunity to provide input on all nominated candidates under consideration. We agree that conducting such briefings is within the discretion of the EPA Administrator, and we did not assess the outcomes of the membership appointment process. However, it remains that for SAB and CASAC, EPA did not follow a key step in its established appointment process—as documented in its agency-wide handbook—in which agency staff are to document in draft membership grids their rationales for recommending the candidates they deem best qualified and most appropriate for achieving balanced committees. While there may be benefits to following any number of alternative processes for appointing committee members, as EPA stated in its Federal Advisory Committee Handbook, EPA developed the handbook to help agency officials comply with FACA requirements. For these two advisory committees, EPA did not follow its established committee appointment process, impeding EPA's ability to ensure that it consistently meets—across all of its advisory committees—FACA's purpose of encouraging uniform committee appointment procedures. Furthermore, EPA did not provide documentation of the "more rigorous examination" of candidates it conducted in briefings. In its written comments, EPA stated that the SAB Staff Office documented staff evaluations in briefing documents and that we did not request such documents. However, we requested all appointment packets for the 17 committees that appointed or reappointed committee members from fiscal year 2017 through the first two quarters of fiscal year 2018. These appointment packets were to contain the documents used by EPA management to make appointment and reappointment decisions. EPA did not include the briefing documents in its packets for the SAB or CASAC, nor did the agency provide any such documentation in subsequent discussions about the extent to which the agency followed its established process. Our most recent meeting with EPA took place on March 19, 2019. As appropriate, we modified the report to further clarify our specific finding. Moreover, EPA disagreed with how we conducted some of our data analyses and with some of the data points we presented. We took numerous steps to ensure the accuracy of the data points presented in this report. In some instances, we identified missing or inconsistent data and shared this information with EPA officials. EPA provided some corrected data for members with missing or inconsistent appointment-date data from October 1, 2015, to March 31, 2018. We also took other steps, such as asking EPA staff to confirm that the data had been updated in the FACA database, discussing the data with individual EPA staff members, conducting logic tests and spot-checking the data to identify errors and inconsistencies, and providing EPA with an opportunity to review and correct in writing the data presented prior to preparing our draft report. Also, in its written comments, EPA stated that we did not review data for BOSC subcommittees. Our methodology focused on the composition of committees and not their subcommittees.
We continue to believe that the methodology we employed to analyze data was appropriate. We outline our rationale in appendix I, which includes the steps we took to ensure data reliability. For these reasons, we do not plan to make any further changes based on the additional data EPA provided. Lastly, EPA did not dispute our findings and conclusions related to the second recommendation that the agency evaluate, for example, through audits or spot-checks, the quality of financial disclosure reviews for special government employees appointed to EPA advisory committees. EPA noted that at the time of our audit, its Ethics Office was understaffed. In its written comments, EPA said that it has now resolved these staffing issues and is engaged in a full and thorough review of all employees' ethics forms (including those of special government employees serving on federal advisory committees) to ensure they meet all ethics requirements. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of the U.S. Environmental Protection Agency, the Administrator of the U.S. General Services Administration, and the Director of the U.S. Office of Government Ethics. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology To describe the U.S. Environmental Protection Agency's (EPA) established process for appointing members to serve on EPA advisory committees, we identified and reviewed the federal laws, regulations, and policies that are relevant to EPA's process for appointing advisory committee members. To ensure that we correctly identified all relevant laws, regulations, and guidance, we consulted with (1) the Committee Management Secretariat at the U.S. General Services Administration (GSA), which issues regulations and guidance for Federal Advisory Committee Act (FACA) committees government-wide; (2) the U.S. Office of Government Ethics, which develops ethics-related regulations for executive branch employees; and (3) EPA. Examples of EPA guidance that we reviewed include EPA's Federal Advisory Committee Handbook, Strengthening and Improving Membership on EPA Federal Advisory Committees, and EPA Ethics Advisory 2008-02. To evaluate the extent to which EPA followed its established process for appointing members from fiscal year 2017 through the first two quarters of fiscal year 2018, we reviewed pertinent documentation from the 17 committees that appointed or reappointed advisory committee members during this time frame. The remaining committees did not appoint any committee members during the time frame we reviewed. For the above-mentioned 17 committees, we reviewed all advisory committee appointment packets—each of which can contain appointment documents for numerous appointees or reappointees—produced during this time.
We also reviewed the first section (Section 1: Identifying Information and Record of Agency Review) of the Confidential Financial Disclosure Form for EPA Special Government Employees (EPA Form 3110-48) for 74 individuals who were required to submit them to EPA to determine if they met federal financial-disclosure-reporting requirements. We reviewed all 74 of the forms provided by the 8 committees that appointed or reappointed special government employees (SGEs) to serve on a committee from fiscal year 2017 through the first two quarters of fiscal year 2018. Additionally, we interviewed EPA officials involved with appointing committee members to understand the steps these officials took. We then compared the steps they described taking with selected steps in EPA's established process for appointing members to evaluate the extent to which the agency followed its process. We focused on steps in the appointment process that were to be documented in the appointment packets, which EPA used to support appointment decisions. Specifically, we reviewed those aspects of the process for which EPA had documentary evidence, and we evaluated the implementation of ethics oversight requirements that are relevant to EPA's committee-member appointment process. To determine whether the agency followed selected steps in its established process, two senior analysts reviewed the appointment packets. Specifically, one senior analyst conducted the primary analysis for about half of the 22 appointment packets we received, while the other conducted the primary analysis for the remaining packets. Afterwards, each analyst reviewed the other's conclusions and noted agreement or disagreement based on the evidence provided. In some cases, discussion was necessary to resolve differences of opinion between the two analysts. Those discussions were documented. If additional documentation was necessary to resolve differences of opinion, we obtained additional information from the agency. The two analysts reached agreement on all of the packets. To describe how, if at all, selected characteristics of EPA's advisory committees changed after January 2017, we analyzed information from the FACA database, a publicly available database maintained by GSA. The database contains information about FACA advisory committees that agencies, including EPA, are required to provide. The initial scope of our review was the 22 committees in existence on March 31, 2018. Of these 22 committees, we excluded from all of our analyses the four committees that were established after November 2007 because this is the earliest date of one of our analyses. We also excluded four other committees from the three analyses that rely on member appointment start and end dates (committee composition, membership turnover, and regional affiliation) because of missing or inconsistent data. Additionally, we excluded some other committees from some of our analyses because of other types of data reliability issues or because of the nature of the characteristic. To assess the reliability of the committee data, we reviewed database technical documentation and interviewed GSA and EPA officials to identify any potential issues with our planned analysis of the data, among other things, and determined that overall the data were sufficiently reliable for conducting analysis to describe changes in selected member and committee characteristics for our selected time periods.
We discuss additional steps we took to assess the reliability of the data and data reliability issues with the FACA database at the end of this appendix. Additionally, appendix II identifies which committees we excluded from which analyses and the reasons why. Primarily using information available in the FACA database, we compared changes in four committee characteristics across committees and changes in presidential administrations. Specifically, we measured the characteristics before and after January 20, 2017, and compared them to similar periods before and after January 20, 2009. We also compared the characteristics to those before and after January 21, 2013, to provide context to our findings and identify any patterns over time in the data. The four characteristics we measured and compared across committees and changes in presidential administrations were committee composition, regional affiliation, membership turnover, and number of committee meetings. For the first two characteristics, we compared across committees the percentage of members in the characteristics' categories on either January 19, 2017, or January 19, 2009, to a day about 15 months later (March 31, 2018, or March 31, 2010, respectively). For membership turnover, we compared across committees the percentage of members on either January 19, 2017, or January 19, 2009, who left a committee by about 15 months later (March 31, 2018, or March 31, 2010, respectively). We chose March 31, 2018, to allow for a period of time after January 2017 for changes to occur in committee characteristics, and the fiscal year 2018 data file we received from GSA was updated as of March 31, 2018. For the fourth characteristic, we compared across committees the number of meetings held in the 15 months before January 20, 2009, and January 20, 2017, to a similar period after those dates (November 12, 2007, to March 31, 2010, or November 12, 2015, to March 31, 2018). To identify changes to a characteristic that were notable, we used the following methodology. First, we identified any changes after January 2017 that were large relative to other changes to that characteristic after January 2017. If we identified a relatively large change, we then compared it to changes to the characteristic after January 2009 to assess whether it was large relative to those changes. If it was, we would identify the change as notable. The committees we analyzed individually had at least 10 members (or 10 meetings) in the relevant time periods being measured, with the exception of two committees, which had nine members on March 31, 2010. We analyzed the other committees combined since relatively small changes in counts would have a relatively large impact on percentages. Committee Composition We measured the committee composition of 5 of 18 committees. We excluded 4 of the 18 committees because of data reliability issues and 9 committees because they were not staffed primarily with SGEs. We limited the committee composition analysis to SGEs because SGEs are expected to provide their best judgment free from conflicts of interest, rather than represent a particular viewpoint. We analyzed two of the five committees individually and the other three committees combined. To measure the composition of the five committees, we first categorized each member's occupation from the "occupation/affiliation" field in the FACA database into one of six categories. The categories were academic; consultant; government; industry; non-government organization (NGO); or other.
To assign the categories, one GAO analyst reviewed the occupation/affiliation data for each member and assigned one of five categories (academic, consultant, government, industry, or NGO) to each member. In instances where it was unclear what category to assign, the analyst conducted online searches regarding the occupation/affiliation information to identify the type of entity and assign a category. We assigned the category "other" in 30 instances where the member was affiliated with more than one of the other categories or with none of them (for example, retired), or where the FACA database did not provide sufficient information to assign one of the other categories. A second analyst reviewed the reasonableness of the categories assigned by the first analyst—including the additional research. The two analysts reached consensus on the categories for each member. We then applied the methodology described above to identify notable changes in committee composition after January 2017. Regional Affiliation We measured the regional affiliation of 10 of 18 committees. We excluded 8 committees because of data reliability issues. We analyzed 4 of the 10 committees individually and the other 6 committees combined. To measure the regional affiliation of the 10 committees, we assigned one of four U.S. Census regions (as defined by the U.S. Census Bureau) to each committee member based on data in the "occupation/affiliation" field in the FACA database for that member—in most instances, state information is included in this field. We then applied the methodology described above to identify notable changes in regional affiliation in the period after January 2017. The regions were Midwestern, Northeastern, Southern, and Western. Membership Turnover We measured membership turnover in 14 of 18 committees. We excluded 4 committees because of data reliability issues. We analyzed 9 of the committees individually and the other 5 committees combined. To measure membership turnover of the 14 committees, we used date fields indicating when committee members began and ended their terms to determine the percentages of members on a committee on January 19, 2017, and January 19, 2009, who were not members about 15 months later. We then applied the methodology described above to identify notable changes in membership turnover after January 2017. Number of Committee Meetings We measured the change in the number of meetings for 18 committees. We analyzed two of the committees individually and the other 16 committees combined. To measure this characteristic, we used data on the date that meetings were held (we used the date that the meeting began if it was a multi-day meeting). We then applied the methodology described above to identify notable changes in the number of meetings after January 2017. Data Reliability and Analysis Preparation We assessed the reliability of the data provided to us by GSA and took certain steps to prepare the data for analysis. GSA provided us with data files downloaded to Excel from its FACA database from October 1, 2005, to March 31, 2018, for our analysis. GSA maintains the FACA database on a fiscal year basis. During the fiscal year, staff in each agency, including EPA, are to enter data to reflect any changes about the agency's FACA committees. At the end of each fiscal year, GSA is to perform, in conjunction with each agency, an annual comprehensive review of the data entered into the database by the agency for that fiscal year.
According to GSA officials, these reviews constitute the agency's main process for ensuring the reliability of the database. Once the review is complete, the data are locked down, meaning they can no longer be changed. We received data through the 2017 fiscal year after GSA completed the 2017 review. Because this latest GSA review covered data only through the end of fiscal year 2017 and we wanted to include data into 2018, we requested that EPA update the database to March 31, 2018, for each committee for certain data fields relevant to our analyses. We asked that for each committee, the EPA staff member responsible for entering a committee's data in the FACA database provide confirmation to us that the data had been updated through March 31, 2018. After we received confirmation that data for the 22 committees in existence on March 31, 2018, had been updated, GSA staff provided us the data update for EPA committees from October 1, 2017, through March 31, 2018. To further assess the reliability of these data, we reviewed the database's technical documentation and interviewed GSA and EPA officials to identify any potential issues with our planned analysis of the data. We conducted logic tests and spot-checked the data to identify errors and inconsistencies. For example, we scanned committee members' names to identify potential duplicates of the same person in the same committee and made corrections where appropriate. If a person served on more than one committee, we included that person separately for each committee on which he or she served. For each member, we also checked the appointment start and end dates indicated in each fiscal year for inconsistencies across fiscal years. In some instances, we identified missing or inconsistent data in these dates and shared this information with EPA officials. EPA was able to provide some corrected data for members with missing or inconsistent appointment-date data from October 1, 2015, to March 31, 2018. We excluded from our analyses four committees for which over 30 percent of members had appointment date issues we were not able to resolve, as well as individual members with unresolved date issues for the committees we included in the analysis. We also checked the 2018 data that GSA provided to us against the data posted to EPA's website. We determined that overall the data were sufficiently reliable for conducting analysis to describe changes in selected member and committee characteristics for our selected time periods. Finally, we took steps to structure the data provided by GSA in the format needed for our analyses. Specifically, because GSA maintains its data on a fiscal year basis, the data we received from GSA contained a separate row in the database for each committee member for each fiscal year that he or she was a member. To facilitate our analyses, we transposed the dataset so there was one row for each member (for each committee, if a member was in more than one committee) that contained the data from all of the fiscal year records for that member. We conducted this performance audit from October 2017 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
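The data preparation and comparison steps described in this appendix can be illustrated with a short sketch. The following Python sketch uses toy data and hypothetical column names rather than the actual FACA database fields: it collapses the fiscal-year rows into one row per member per committee, computes membership turnover for the post-January 2017 and post-January 2009 windows, and applies the 20-percentage-point threshold used to flag notable changes. It is illustrative only and is not the code used in this review.

import pandas as pd

# Toy stand-in for the FACA extract: one row per member per fiscal year.
# Column names here are hypothetical, not the actual database fields.
rows = pd.DataFrame(
    [
        ("SAB", "Member A", 2016, "2014-10-01", "2017-09-30"),
        ("SAB", "Member A", 2017, "2014-10-01", "2017-09-30"),
        ("SAB", "Member B", 2017, "2016-01-15", "2019-01-14"),
        ("SAB", "Member B", 2018, "2016-01-15", "2019-01-14"),
    ],
    columns=["committee", "member", "fiscal_year", "start", "end"],
)
rows["start"] = pd.to_datetime(rows["start"])
rows["end"] = pd.to_datetime(rows["end"])

# Collapse the fiscal-year rows so there is one row per member per
# committee, mirroring the transposition described above.
members = rows.groupby(["committee", "member"], as_index=False).agg(
    start=("start", "min"), end=("end", "max")
)

def serving(df, day):
    """Return the members whose appointments span the given day."""
    day = pd.Timestamp(day)
    return df[(df["start"] <= day) & (df["end"] >= day)]

def turnover(df, day_before, day_after):
    """Percentage of members serving on day_before gone by day_after."""
    before = serving(df, day_before)
    if before.empty:
        return float("nan")
    still = set(serving(df, day_after)["member"])
    return 100.0 * (~before["member"].isin(still)).sum() / len(before)

# Compare the change after January 2017 with the change after January
# 2009; a difference of at least 20 percentage points was the threshold
# for a notable change.
change_2017 = turnover(members, "2017-01-19", "2018-03-31")
change_2009 = turnover(members, "2009-01-19", "2010-03-31")
print(change_2017, change_2009, abs(change_2017 - change_2009) >= 20)

The same before-and-after comparison extends to the other characteristics by swapping the measured quantity, for example, the share of members in each occupation category for committee composition or the count of meetings in each window for the meetings analysis.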
Environmental Protection Agency’s Advisory Committees Table 3 provides information about each of the 22 advisory committees managed by the U.S. Environmental Protection Agency (EPA) as of March 31, 2018. For each of these committees, the table also identifies whether we included it in one or more of our analyses. If we excluded a committee from certain analyses, we also explain why. Appendix III: Advisory-Committee Appointment Packets for Which the U.S. Environmental Protection Agency Followed the Steps Evaluated by GAO Table 4 summarizes the number of advisory-committee appointment packets for which the U.S. Environmental Protection Agency (EPA) did or did not follow the steps we evaluated for appointing members to serve on EPA advisory committees. Appendix IV: Comments from the U.S. Environmental Protection Agency Appendix V: GAO Contact and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the individuals named above, Joseph Thompson (Assistant Director), John Delicath, Charles Egan, Chad Gorman, Richard Johnson, Yvonne Jones, Mary Koenen, James Lager, Amber Sinclair, and Kiki Theodoropoulos made important contributions to this report.
Why GAO Did This Study Federal advisory committees provide advice to federal agencies on many topics. As of March 31, 2018, EPA managed 22 such committees. They advise the agency on such issues as developing regulations and managing research programs. Questions have been raised about EPA's process for appointing committee members after recent policy changes affecting who serves on the advisory committees. GAO was asked to review issues related to how EPA appoints advisory committee members. This report examines: (1) EPA's process for appointing advisory committee members, (2) the extent to which EPA followed its process for selecting members from October 2016 through March 2018, and (3) how, if at all, selected characteristics of EPA advisory committees changed after January 2017. GAO reviewed relevant federal laws, regulations, and guidance; reviewed documents from committees that appointed members over this period; analyzed information from GSA's FACA database; and interviewed agency officials. What GAO Found Based on GAO's review of U.S. Environmental Protection Agency's (EPA) guidance, the agency's established process for appointing advisory committee members involves three main phases: soliciting nominations, evaluating candidates, and obtaining approvals. Each phase involves several steps. For example, a key step for evaluating candidates involves EPA staff's preparing documents that reflect staff recommendations on the best qualified and most appropriate candidates for achieving balanced committee membership, according to EPA guidance. EPA generally followed its established process for most of its 22 advisory committees; however, in fiscal year 2018, EPA did not follow a key step for appointing 20 committee members to two committees GAO reviewed: the EPA Science Advisory Board and Clean Air Scientific Advisory Committee, which advise the agency on environmental regulatory matters, among other things. The 2018 appointment packets for these two committees did not contain documents reflecting EPA staff rationales for proposed membership, as called for by EPA's established process. EPA developed guidance to implement the Federal Advisory Committee Act (FACA). By directing officials responsible for appointing committee members to follow a key step in its process to document staff rationales for proposed membership, the agency would have better assurance that it will (1) consistently meet FACA's purpose of encouraging uniform appointment procedures and (2) show how it made appointment decisions to achieve the best qualified and most appropriate candidates for balanced committee membership. EPA also did not consistently ensure that members appointed as special government employees (SGEs)—who are expected to provide their best judgment free from conflicts of interest and are required by federal regulations to disclose their financial interests—met federal ethics requirements. For about 23 percent (17 of the 74) of the financial disclosure forms GAO reviewed, an ethics official had not signed and dated the form to indicate that the SGE filing it was in compliance with federal ethics rules. EPA also did not periodically review its ethics program, as called for by federal regulations, such as through audits or spot-checks, to evaluate the quality of financial disclosure reviews for SGEs.
Until EPA's Ethics Office evaluates the quality of financial disclosure reviews of SGEs as part of its periodic review of its ethics program, it will not have reasonable assurance that it will address noncompliance with federal ethics requirements and prevent conflicts of interest on its advisory committees. Based on GAO's review of the U.S. General Services Administration's (GSA) FACA database, there were notable changes to selected characteristics of EPA advisory committees (i.e., at least a 20 percentage point difference in the change to a characteristic after January 2017 compared to the period after January 2009). Of the four characteristics GAO reviewed—committee composition, regional affiliation, membership turnover, and number of meetings committees held—one or more of the first three changed notably for four of 18 EPA advisory committees after January 2017. What GAO Recommends GAO is recommending that EPA direct (1) officials responsible for appointing committee members to follow a key step in its appointment process to document staff rationales for proposed membership and (2) EPA's Ethics Office to evaluate the quality of financial disclosure reviews of SGEs appointed to advisory committees. EPA disagreed with the first and agreed with the second recommendation. GAO continues to believe that both are valid, as discussed in the report.
Background Wireless broadband connects users to the Internet using spectrum to transmit data between the customer's location and the service provider's facility, and can be transmitted using fixed wireless and mobile technologies, as shown in figure 1. Fixed wireless broadband technologies establish an Internet connection between fixed points—such as from a radio or antenna that may be mounted on a tower, to a stationary wireless device located at a home—and generally require a direct line of sight. Mobile wireless broadband technologies also establish an Internet connection that requires the installation of antennas, but this technology provides connectivity to customers wherever they are covered by service, including while on the move, such as with a cell phone. Spectrum is the resource that makes wireless broadband connections possible. Spectrum frequency bands each have different characteristics that result in different levels of ability to cover distances, penetrate physical objects, and carry large amounts of information. Examples of some of the frequency bands that can be used by commercial and nonfederal entities for broadband services are shown in figure 2. The frequency bands that can be used for broadband services are either licensed or unlicensed. For licensed spectrum, FCC can assign licenses through auctions, in which prospective users bid for the exclusive rights to transmit on a specific frequency band within geographic areas. Having exclusive rights ensures there will be no interference from other spectrum users in that band. License holders may, with FCC's approval, sell or lease their license, in whole or in part, to another provider, a process that is known as a secondary market transaction. FCC has assigned licenses administratively in two frequency bands that can be used for broadband services. FCC also authorizes the use of unlicensed spectrum, where an unlimited number of users can share frequencies without a license; devices that rely on unlicensed spectrum include wireless microphones, baby monitors, and garage door openers. In contrast to users of licensed spectrum, unlicensed users have no regulatory protection from interference by other licensed or unlicensed users in the bands. In March 2010, FCC issued the National Broadband Plan that included a centralized vision for achieving affordability and maximizing use of high-speed Internet. The plan made recommendations to FCC, including that FCC should take into account the unique spectrum needs of tribal communities when implementing spectrum policies and evaluate its policies and rules to address obstacles to spectrum access by tribal communities. With regard to tribal lands, the plan recommended that FCC increase its commitment to government-to-government consultation with tribal leaders and consider increasing tribal representation in telecommunications planning. FCC established the Office of Native Affairs and Policy in July 2010 to promote the deployment and adoption of communication services and technologies in all native communities, by, among other things, ensuring consultation with tribal governments pursuant to FCC policy. Few Tribal Entities Had Obtained Licensed Spectrum and Face Barriers Doing So For our November 2018 report, we identified 18 tribal entities from FCC's license data that held active spectrum licenses in bands that can be used to provide broadband services as of September 2018. Of those 18, 4 obtained the spectrum through a secondary market transaction and 2 from an FCC spectrum auction.
We interviewed 16 tribal entities that were using wireless technologies at the time to provide service, and 14 told us that they were accessing unlicensed spectrum to do so. While representatives from most of the 16 tribal entities reported some advantages of unlicensed spectrum, such as that it is available at no cost, they also discussed their experiences with the limitations of unlicensed spectrum, including issues with interference and speed or capacity. Some of the stakeholders we contacted and FCC have highlighted the importance of exclusive-use licensed spectrum for tribal entities. For example, FCC's Office of Native Affairs and Policy reported in 2012 that unlicensed spectrum is not an option across all tribal lands and that tribal access to robust licensed spectrum is a critical need. In addition, representatives from the stakeholders we interviewed told us that there are non-technological benefits for tribal entities to obtain greater access to licensed spectrum, including enhanced ability to deliver additional Internet services, enhanced ability to sell or lease spectrum for profit, and additional opportunities to obtain federal funding that requires entities to hold or have access to licensed spectrum. Furthermore, two tribal stakeholders and representatives from several tribal entities told us that having access to licensed spectrum would enable tribes to exercise their rights to sovereignty and self-determination. For example, representatives from four of the tribal entities told us that having access to licensed spectrum would ensure that spectrum is being used in a way that aligns with tribal goals and community needs, further supporting their rights to self-determination. In our November 2018 report, we described barriers tribal entities reported facing in accessing licensed spectrum. First, representatives from tribal entities we contacted said that obtaining a spectrum license through an auction was too expensive for many tribal entities. Indeed, over 60 percent (983 of 1,611) of the winning bids from a 2015 spectrum auction were more than $1 million. Representatives from some tribal entities told us that they were unable to obtain financing to participate in auctions because tribal governments cannot use tribal lands as collateral to obtain loans, and that participating in spectrum auctions requires auction-specific expertise that tribal entities may not have. Second, tribal entities reported facing barriers obtaining spectrum through secondary market transactions. Most of the spectrum allocated for commercial use has already been assigned through spectrum auctions and other mechanisms to private providers that may not be providing service on tribal lands. As such, there may be tribal areas where providers hold licenses for bands but are not using the spectrum to provide Internet service. All three of the tribal associations we contacted confirmed that there were unused spectrum licenses over tribal lands, and representatives from a nationwide provider indicated that they only deploy services if there is a business case to support doing so. Accordingly, the secondary market is one of few avenues available to tribal entities that would like to access licensed spectrum. However, representatives from tribal entities we contacted told us it could be challenging to participate in the secondary market because there is a lack of willing sellers, license holders are not easily identified, and tribal entities may not be aware of how to pursue secondary market transactions.
For example, representatives from a tribal entity that had been successful in obtaining a license through the secondary market told us that an Indian-owned telecommunications consulting company was pivotal in identifying the license holder and facilitating the transaction, and without such assistance, the transaction would not have occurred. FCC Had Taken Some Actions to Increase Access, but Does Not Collect or Communicate Key Spectrum-Related Information to Tribal Entities At the time of our November 2018 report, we found that FCC had taken some actions to increase tribal access to spectrum. In particular: FCC issued a proposed rulemaking in March 2011 that sought comments on three proposals to create new spectrum access opportunities for tribal entities (see fig. 3). As of July 12, 2019, FCC had not adopted new rules or taken further action on the 2011 rulemaking. FCC issued a proposed rulemaking in May 2018 that sought comment on establishing a priority window for tribal nations located in rural areas to obtain a license in the Educational Broadband Service spectrum band (also known as the 2.5 GHz band). In the proposed rulemaking, FCC had found that significant portions of this band were not being used, primarily in rural areas. FCC had not finalized this rule at the time of our November 2018 report, but published a draft order in June 2019 that would establish a priority filing window so that tribal entities could get access to unassigned spectrum in the 2.5 GHz band on rural tribal lands prior to an FCC auction. FCC adopted this order on July 10, 2019. FCC’s Office of Native Affairs and Policy conducts training, consultation, and outreach to tribal entities on spectrum-related issues, such as communicating with tribal entities prior to FCC auctions or when FCC regulatory actions or policies would affect tribal governments and spectrum over their lands. FCC’s 2010 National Broadband Plan stated that ongoing measurement of spectrum utilization should be developed to better understand how spectrum resources are being used because some studies indicated that spectrum goes unused in many places much of the time. The plan stated that any spectrum utilization studies that FCC conducts should identify tribal lands as distinct entities. The plan also stated that FCC should make data available that would promote a robust secondary market for spectrum licenses, such as information on how and to whom spectrum is allocated on tribal lands. In FCC’s 2018 strategic plan, FCC stated that it will implement ongoing initiatives that will assist in spectrum policy planning and decision making, promote a robust secondary market in spectrum, and improve communications services in all areas of the United States, including tribal areas. Additionally, federal internal control standards state that agencies should use quality information, including information that is complete, to inform the decision-making processes and communicate with external entities. Tribal governments are an example of such external entities. However, in our 2018 report, we found that FCC had not consistently collected data related to tribal access to spectrum or communicated important information to tribes. In particular: FCC did not collect data on whether spectrum license-holders or auction applicants are tribal entities. Without this information, FCC did not have a comprehensive understanding of the extent that tribal entities are attempting to obtain or access licensed spectrum or have been successful at obtaining and accessing it. 
FCC did not analyze the extent that unused licensed spectrum exists over tribal lands, even though FCC had the information—broadband availability data from providers and information on geographic areas covered by spectrum licenses—needed for such an analysis. Although FCC officials told us evaluating the effectiveness of FCC's secondary market policies is a way to increase the use of unused spectrum, FCC's approach did not include an analysis of unused spectrum licenses on tribal lands. As a result, FCC's evaluations of the secondary market may not have accurately reflected how its policies affect tribal entities. Because the secondary market is one of few ways for tribal entities to access licensed spectrum, such an assessment would enable FCC to better promote a robust secondary market that provides opportunities for tribes to access spectrum. FCC did not communicate information to tribes that could benefit them in their efforts to obtain spectrum in the secondary market. As described earlier, the secondary market is a significant mechanism for tribal entities to obtain spectrum licenses, but representatives from the tribal entities we interviewed reported challenges related to participating in the secondary market, such as not knowing whom to contact should they wish to engage in a secondary market transaction to obtain a spectrum license. We concluded that FCC's efforts to promote and support tribal entities' access to spectrum had done little to increase tribal use of spectrum. In particular, FCC lacked information that could help inform its decision-making processes related to spectrum policy planning, which is intended to improve communications services in all areas of the United States, including tribal lands. By collecting data on the extent that tribal entities are obtaining and accessing spectrum, FCC could better understand tribal spectrum issues and use this information as it implements ongoing spectrum initiatives. Furthermore, the ability of tribal governments to make informed spectrum planning decisions and to participate in secondary market transactions is diminished without information from FCC on the spectrum transactions that occur over tribal lands. Providing this information directly to tribal entities could enable them to enter into leasing, partnership, or other arrangements to obtain spectrum. In our November 2018 report, we recommended that FCC (1) collect data on the extent that tribal entities are obtaining and accessing spectrum and use this information as FCC implements ongoing spectrum initiatives; (2) analyze data to better understand the extent that unused spectrum licenses exist over tribal lands, such as by analyzing the data for a sample of tribal lands, and as appropriate use this information to inform its oversight of the secondary market; and (3) make information on spectrum-license holders more accessible and easy to understand for interested parties, including tribal entities, to promote their ability to purchase or lease spectrum licenses from other providers. FCC agreed with these recommendations and described the actions it plans to take to implement them. For example, according to FCC, it will consider ways to collect data on the extent to which tribal entities are obtaining and accessing spectrum; analyze data from a sample of spectrum licenses on tribal lands to inform FCC's spectrum policies; and transition to a more user-friendly system for its licensing data.
Chairman Hoeven, Vice Chairman Udall, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Andrew Von Ah, Director, Physical Infrastructure Issues at (202) 512-2834 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Sally Moino and Anne Doré. Other staff who made contributions to the report cited in this testimony are identified in the source product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Broadband service on tribal lands continues to lag behind the rest of the country, especially on rural tribal lands. Broadband service can be delivered through wireless technologies using radio frequency spectrum. According to FCC, increasing tribal access to spectrum would help expand broadband service on tribal lands. This statement is based on GAO's November 2018 report (GAO-19-75) related to spectrum use for broadband services by tribal entities and selected updates. Specifically, it discusses (1) tribal entities' ability to obtain and access spectrum to provide broadband services and the reported barriers that may exist, and (2) the extent to which FCC promotes and supports tribal efforts to obtain and access spectrum. For that report, GAO interviewed 16 tribal entities that were using wireless technologies. Selected entities varied geographically, among other characteristics. GAO analyzed FCC's license and auction data as of September 6, 2018, reviewed FCC's rulemakings on spectrum for broadband services, and interviewed other tribal and industry stakeholders and FCC officials. The information obtained was not generalizable to all tribes or industry participants. As an update, GAO reviewed FCC's June 2019 draft order related to spectrum in the 2.5 GHz band. What GAO Found The tribal entities—tribal governments and tribally owned telecommunications providers—GAO contacted for its November 2018 report cited various barriers to obtaining spectrum licenses in bands that can be used to provide broadband services. Based on data from the Federal Communications Commission (FCC) as of September 2018, GAO identified 18 tribal entities that held active spectrum licenses in such bands. For example, of these 18 tribal entities, 4 obtained licenses through secondary market transactions—that is, they bought or leased the license from another provider—and 2 obtained a license through an FCC spectrum auction. The barriers tribal officials identified to obtaining licensed spectrum include high costs at auctions and, in the case of secondary market transactions, a lack of information on who holds licenses over tribal lands. Because most spectrum allocated for commercial use has already been assigned, the secondary market is one of the few avenues available to tribal entities that would like to access licensed spectrum. At the time of GAO's November 2018 report, FCC had taken some actions to increase tribal access to spectrum. For example, FCC issued proposed rulemakings in 2011 and 2018 that sought comment on tribal-specific proposals, such as establishing tribal-licensing priorities and initiating processes to transfer unused spectrum licenses to tribal entities. FCC had not finalized these rules at the time of GAO's report, but FCC published a draft order in June 2019 that would create a tribal-licensing priority window, whereby tribal entities would have an opportunity to obtain spectrum in the 2.5 gigahertz (GHz) band prior to the spectrum being auctioned. FCC adopted the order on July 10, 2019. FCC stated that it will implement spectrum initiatives and that it recognizes the importance of promoting a robust secondary market to improve communications throughout the United States, including tribal lands. However, GAO found that FCC had not consistently collected data related to tribal access to spectrum.
For example: FCC did not collect data on whether spectrum auction applicants are tribal entities and therefore did not have a comprehensive understanding of the extent that tribal entities are attempting to obtain licensed spectrum. FCC did not analyze the extent that unused licensed spectrum exists over tribal lands. Although FCC officials said evaluating the effectiveness of FCC's secondary market policies is a way to increase the use of unused spectrum, FCC's approach did not include an analysis of unused spectrum licenses on tribal lands. As a result, FCC's evaluations of the secondary market may not accurately reflect how its policies affect tribal entities. By collecting data on the extent that tribal entities are obtaining and accessing spectrum, FCC could better understand tribal spectrum issues and use this information as it implements ongoing spectrum initiatives. Further, given that the secondary market is one of few ways for tribal entities to access licensed spectrum to provide Internet service, FCC could promote a more robust secondary market by analyzing unused licensed spectrum over tribal lands and using that information to inform FCC's oversight responsibilities. What GAO Recommends In the November 2018 report, GAO made three recommendations to FCC, including that FCC should collect data on tribal access to spectrum and analyze unused licensed spectrum over tribal lands. FCC agreed with the recommendations and described actions to address them.
Background Employment-Related Identity Fraud Individual Taxpayer Identification Number (ITIN) An ITIN is a tax processing number issued by IRS to individuals who are required to have a U.S. taxpayer identification number but who do not have and are not eligible to obtain a Social Security number from the Social Security Administration. IRS issues ITINs to help individuals comply with the U.S. tax laws, and to provide a means to efficiently process and account for tax returns and payments for those not eligible for Social Security numbers. Taxpayers may first realize they are victims of employment-related identity fraud when IRS notifies them of discrepancies in the reporting of income earned using their names and SSNs. After filing deadlines have passed, IRS's Nonfiler and Automated Underreporter (AUR) programs use W-2 information to identify and follow up with taxpayers who appear to owe taxes but either have not filed returns (Nonfiler) or have filed returns but underreported earnings (AUR). Other taxpayers may become aware that their SSNs were used by other people when IRS sends them an Employment-Related Identity Theft (CP01E) notice. IRS sends these notices to taxpayers whose SSNs appear on W-2s that have been attached to tax returns (Forms 1040, U.S. Individual Income Tax Return) that were filed with Individual Taxpayer Identification Numbers (ITIN) (see sidebar). In these cases, IRS marks the taxpayer accounts with an employment-related identity theft indicator. Victims may also notice wages they did not earn appearing on their Social Security earnings record or may be alerted by SSA that their Supplemental Security Income benefits are being reduced or eliminated because of wages earned by someone else using their SSN. Information Exchanges Involved in Employment-Related Identity Fraud The following individuals and agencies are involved in verifying individuals' eligibility for employment, in processing wage information, or in monitoring identity fraud cases. Employer: Employers are required to complete the Form I-9, Employment Eligibility Verification for new hires. As part of completing the form, employers certify that they have examined documentation demonstrating that new hires are who they say they are and are eligible for employment, and that the documentation appears to be genuine. The employer is required to submit a W-2 to each employee, as well as to SSA, by January 31 each year. Employee: As part of obtaining employment, the employee provides the employer with documentation to authenticate his or her identity. It is at this point that the employee could provide someone else's SSN or other information. DHS: DHS manages E-Verify, a free, internet-based system that employers can use to verify employees' employment eligibility. SSA supports DHS in this effort. Federal agencies are required to use E-Verify for federal employees and contractors. Some states also require employers to use E-Verify to verify the eligibility of some or all employees or contractors. According to DHS, by the end of fiscal year 2019, more than 890,000 employers were enrolled in E-Verify. SSA: SSA receives W-2s from employers and uses this information to update earnings records and to make determinations about benefits. After receiving and processing W-2s, SSA sends the W-2 information to IRS as part of the Combined Annual Wage Reporting (CAWR) process.
SSA also maintains the Social Security Number Verification Service, a free SSN verification program that registered employers can use to verify that employee names and SSNs match SSA's records before they submit W-2s to SSA.

IRS: IRS uses W-2 information to verify tax return information, such as wages, withholdings, and Employer Identification Numbers (EIN), and to enforce tax law. IRS has legal authority to penalize employers $250 for each inaccurate W-2 they submit, up to a maximum of $3 million in total penalties per year. In 2013, the SSA OIG reported that IRS does not routinely penalize employers who consistently submit erroneous or inaccurate wage information.

Federal Trade Commission: It collects and reports to the public aggregated data from self-reported victims of identity fraud. Victims can visit www.IdentityTheft.gov to report identity theft and access resources.

A Million SSNs May Be at Risk of Employment-Related Identity Fraud and Tax Noncompliance, but the Extent of Such Fraud Is Unknown

Our analysis shows that millions of SSNs in NDNH data exhibited risk characteristics associated with employment-related identity fraud in tax year 2016. More than a million of those were also at risk of not meeting all IRS tax return requirements, such as reporting all associated W-2s. However, IRS did not identify all of those noncompliant returns. Further, employment-related identity fraud can diminish tax revenues. IRS's method for tracking employment-related identity fraud likely understates the extent of the problem.

More Than 2.9 Million SSNs in NDNH Data Had Risk Characteristics in Tax Year 2016

We identified more than 2.9 million SSNs that had risk characteristics associated with SSN misuse and evidence of employment activity, based on our analysis of NDNH verified quarterly wage records for 122.8 million individuals from August to October 2016. The risk characteristics included: individuals who had wages reported for three or more employers in the same quarter; individuals who were deceased; individuals under age 14; and individuals over age 84 (see table 1). We previously reported that the existence of three or more wage records in the same time frame for the same individual indicates possible SSN misuse, which could include employment-related identity fraud. We also previously reported, along with the Department of Justice and SSA OIG, that deceased persons, children, and elderly populations are at risk of identity theft (IDT). Fraudsters may target these groups because they believe there is a lower chance the SSNs are being used for legitimate employment.

Individuals with three or more employers within the same quarter. Our analysis of NDNH data identified millions of SSNs with three or more wage records from August to October 2016. Specifically, of the 122.8 million SSNs included in the data, we found 2.8 million with three or more wage records in the same quarter. Further, we found almost 10,000 of those SSNs had wages reported by 10 or more employers in the same quarter. It is not uncommon for individuals to have second jobs or to change employers. However, when wages are reported by three or more employers for the same calendar quarter, it can indicate potential misuse of an SSN (see table 2). As an illustrative example of potential SSN misuse, one SSN had wages reported by 15 employers from 14 different states for a 3-month period in 2016 (see figure 1). According to the wage data, on average, each of these employers was paying the employee approximately $26,900 a year.
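As an illustration of how such a check can be implemented (a minimal sketch, not GAO's actual analysis; the input file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical extract of NDNH verified quarterly wage records:
# one row per employer-reported wage record for August-October 2016.
wages = pd.read_csv("ndnh_2016q3_verified.csv", dtype=str)

# Count distinct employers reporting wages for each SSN in the quarter.
employers_per_ssn = (
    wages.groupby("ssn")["employer_ein"].nunique().rename("n_employers")
)

# Flag SSNs with wages reported by three or more employers in the same
# quarter -- a risk characteristic associated with possible SSN misuse.
at_risk = employers_per_ssn[employers_per_ssn >= 3]
print(f"{len(at_risk):,} SSNs had wages reported by 3 or more employers")

# Distribution of flagged SSNs by employer count (cf. table 2).
print(at_risk.value_counts().sort_index())
```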
Deceased individuals. We identified several thousand SSNs for deceased individuals included in the NDNH data. Specifically, the NDNH data for August to October 2016 showed 13,600 SSNs for individuals SSA identified as deceased prior to May 2016. Of these, 8,400 were reported to have died before 2014. In some cases, we found individuals who had been deceased for a decade.

Children. We identified tens of thousands of SSNs for children under the age of 14. Specifically, NDNH data included 33,856 SSNs of individuals who, according to SSA data, were under the age of 14 with earned income reported. One reason children can be at risk of long-term victimization of employment-related identity fraud is that years typically pass before children begin working or applying for financial credit, giving a fraudster ample opportunity to exploit their stolen identities. Still, there are legitimate circumstances for children to be earning wages, such as in the entertainment and advertising industries.

Elderly. We identified tens of thousands of wage records for elderly individuals. Specifically, the 2016 NDNH data included 65,823 SSNs with earned income reported for individuals whom SSA data identified as over age 84. The Federal Trade Commission reported that in 2016, approximately one-fifth of the IDT complaints it received involved people age 60 years or older. Further, the elderly have low participation rates in the workforce. The Bureau of Labor Statistics reported that, in 2016, the workforce participation rate for those ages 75 and above was 8.4 percent, compared to a rate of 62.8 percent for the overall workforce.

Over a Million SSNs with Risk Characteristics Were Also Associated with Tax Compliance Issues for 2016, Not All of Which Were Pursued by IRS

SSNs with risk characteristics were sometimes also associated with IRS returns that did not include required W-2 forms. Specifically, more than 1.3 million individuals—of the 2.9 million SSNs we determined to have risk characteristics associated with SSN misuse—had at least one wage record they did not report to IRS. Of these 1.3 million individuals, more than half failed to include at least one W-2 on their tax return, and slightly less than half (43 percent) did not include any W-2s in a tax return (see table 3).

IRS has enforcement tools that are intended to detect reporting deficiencies, but these tools did not always detect the reporting issues we identified. IRS can use the Automated Underreporter (AUR) and Nonfiler programs, as well as seven IDT-related indicators that mark a taxpayer's account or W-2 when IRS has determined that the SSN was compromised (see sidebar). We compared data from these enforcement tools and IDT indicators to the 1.3 million individuals identified above and found that IRS did not mark all accounts or W-2s.

Action Code 501: closed identity theft cases initiated by a taxpayer.
Action Code 506: closed identity theft cases initiated by IRS.
Action Code 524: deceased taxpayer. It prevents the use of a deceased taxpayer's identity on a federal income tax return.
Action Code 525: mismatch between the identity listed on the W-2 and on the tax return. These are cases where returns filed with an Individual Taxpayer Identification Number include a W-2 with an SSN belonging to another person.

Individuals with three or more W-2s for the same period. More than a million individuals with three or more wage records did not declare at least one W-2.
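Determining whether an individual declared all employer-reported W-2s amounts to comparing, per SSN, the number of W-2s employers filed with the number reported on the return. A minimal sketch, assuming hypothetical per-SSN counts rather than the restricted NDNH and IRS data:

```python
import pandas as pd

# Hypothetical per-SSN counts: W-2s filed by employers versus W-2s
# reported on the individual's 2016 return (0 means no return filed).
counts = pd.DataFrame(
    {"ssn": ["A", "B", "C", "D", "E"],
     "w2s_filed": [3, 3, 7, 7, 4],
     "w2s_reported": [3, 2, 7, 4, 0]}
)

counts["declared_all"] = counts["w2s_reported"] >= counts["w2s_filed"]

# Individuals with at least one undeclared W-2.
undeclared = counts[~counts["declared_all"]]
print(f"{len(undeclared)} individuals did not declare at least one W-2")

# Share declaring all W-2s, by number of W-2s filed (per figure 2, this
# fell from 68 percent at three W-2s to 29 percent at seven).
print(counts.groupby("w2s_filed")["declared_all"].mean())
```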
Additionally, we found that, in general, the more W-2s an individual had, the less likely it was that all of them would be reported to IRS (see figure 2). For instance, individuals with three W-2s declared all of them 68 percent of the time, while individuals with seven declared all of them 29 percent of the time. Using its enforcement tools, IRS identified some of these individuals with three or more W-2s. Of the 1.25 million individuals in our analysis with three or more wage records who did not include all W-2s in tax year 2016, about 600,000 had wages totaling more than $23,200, meaning that they were required to file a tax return. Of these, about 340,000 individuals had at least one of the seven IDT-related indicators or were pursued through AUR or Nonfiler. In addition, IRS pursued—with AUR or Nonfiler—about half of the nearly 100 individuals who had 50 or more W-2s reported by employers for 2016. In addition, approximately 9,000 individuals with wages totaling more than $23,200 who did not include all W-2s in tax year 2016 also lived in five or more states (see figure 3 for an illustrative example).

Deceased individuals. IRS did not apply IDT-related indicators to some of the accounts of deceased individuals we identified as having employer-reported wages not included on a tax return. Of the 11,573 deceased individuals with earned income reported, we identified 2,627 with wages of at least $23,200, a threshold requiring the filing of a tax return. Of these, 2,441 had at least one of the seven IDT-related indicators or were pursued under IRS's AUR or Nonfiler enforcement programs. However, there were still 186 individuals whom IRS did not identify.

Elderly. Of the 19,460 elderly individuals with earned income reported, we identified nearly 3,800 who earned enough to be required to file a tax return. Of these, about 1,700 had at least one of the seven IDT-related indicators on their accounts or were pursued under IRS's AUR or Nonfiler enforcement programs. However, there were still about 2,100 individuals whom IRS did not identify.

Children. For tax year 2016, individuals under age 14 were only required to file taxes if they earned more than $7,850. However, nearly 1,900 met this filing threshold and failed to include at least one W-2 on their tax returns. Of these, nearly 1,000 had at least one of the seven IDT-related indicators applied to their accounts by IRS or were pursued under IRS's AUR or Nonfiler enforcement programs. However, there were still about 900 individuals whom IRS did not identify.

In considering employment-related identity fraud, IRS focuses on only one of the seven IDT-related indicators. Specifically, IRS considers mismatches between the identity listed on the W-2 and the identity on the tax return as a type of employment-related identity fraud. IRS does not consider other characteristics, such as individuals with multiple wage records, in its checks for employment-related identity fraud. Doing so would require the development of new codes or the modification of existing ones. According to the Fraud Risk Framework, two leading practices for managing fraud risks include (1) identifying specific tools, methods, and sources for gathering information; and (2) designing and implementing control activities, such as data-analytics activities, to prevent and detect fraud.
IRS addressed these leading practices, in part, through the AUR program, Nonfiler program, and seven IDT-related indicators, but there were still individuals in the population we examined that IRS did not identify. By assessing and documenting the feasibility of incorporating additional checks—such as multiple wage records or wage records for children under 14—into its checks of employment-related identity fraud, IRS may be able to develop a method for identifying additional taxpayers at risk of this type of fraud.

Employment-Related Identity Fraud Can Reduce Tax Revenue

IRS officials stated that employment-related identity fraud has limited tax consequences, as employees will nonetheless pay required taxes—including federal, state, and payroll taxes—through payroll withholding even if the fraudster fails to file a tax return. However, we found that federal income tax withholding was lower for SSNs that did not declare all the W-2s than for SSNs with all W-2s reported (see table 4). Additionally, we found individuals who did not withhold any federal income taxes across all of their related W-2s in 2016. Specifically, 37,868 individuals had at least one W-2 not declared on a tax return and withheld no federal income tax over the course of the year. Together, these individuals earned approximately $340 million in 2016. Further, 18 W-2s that were not reported on a tax return showed wages earned of more than $100,000 yet had $0 of federal income tax withheld (see figure 4 for an example).

IRS's Code for Tracking Employment-Related Identity Fraud Likely Understates the Extent of the Problem

Of the indicators IRS uses to track IDT, the only action code that directly relates to employment is Action Code 525, "Employment-related Identity Theft." IRS applies the code to a taxpayer's account when IRS processes a return filed by an individual with an Individual Taxpayer Identification Number (ITIN), and the return includes a W-2 with an SSN that does not belong to the person identified on the ITIN return. IRS refers to this situation as an ITIN/SSN mismatch. In 2018, IRS marked 818,097 accounts with Action Code 525. IRS officials acknowledged that other forms of employment-related identity fraud, beyond those captured by Action Code 525, likely occur, but they said they do not systematically track these situations for several reasons. First, unless a taxpayer contacts IRS to say he or she did not earn the wages and disclaims them, the agency does not know whether a suspected case is employment-related identity fraud or someone who may not have included legitimate wages on his or her tax return. Second, IRS may be unable to distinguish between employment-related identity fraud and fabricated W-2s for jobs that were not worked (i.e., fake employees of a fake business). Third, while our analysis shows that employment-related identity fraud may be a more widespread problem than the ITIN/SSN mismatch that IRS currently tracks, IRS officials told us that other types of employment-related identity fraud would be identified and addressed through processes the agency applies broadly to all taxpayers, such as the AUR or Nonfiler programs. For example, according to IRS officials, if IRS receives a fraudulent W-2 from an employer using a legitimate taxpayer's SSN, AUR or the Nonfiler program will detect it as IRS matches W-2s with tax returns. However, our analysis of NDNH and IRS data described earlier in this report shows that there are potential cases that these IRS enforcement programs did not identify.
Standards for Internal Control in the Federal Government states that management should use quality information that is appropriate and complete to achieve the entity's objectives, and that it should communicate quality information externally. However, our analysis of SSNs at risk of employment-related identity fraud indicates that the count of cases that IRS identifies under Action Code 525 likely understates the universe of employment-related identity fraud. By modifying the title of its employment-related IDT action code to more accurately reflect the data covered by the code, IRS can ensure that the agency is appropriately conveying the risk this specific type of employment-related identity fraud poses both to victims and to tax administration, without suggesting its statistics cover other types of employment-related identity fraud.

SSA Is Taking Steps to Better Detect Inaccurate W-2s and Notify Potential Fraud Victims, but Faces Challenges Addressing Risks Associated with Some Victims

SSA Detects Inaccurate W-2s and Monitors the Effectiveness of W-2 Accuracy Checks

As illustrated in figure 5, SSA analyzes W-2s to detect inaccuracies. For W-2s determined to be accurate, SSA adds wages to the individual's record on the Master Earnings File, a database that SSA uses to determine an individual's eligibility for Social Security benefits and the amount of benefits paid. For W-2s determined to be inaccurate, SSA posts the wage information to the Earnings Suspense File. Inaccurate W-2s may be attributable to various reasons, including employment-related identity fraud or administrative errors. SSA receives hundreds of millions of W-2s each year. SSA analyzes incoming W-2s to detect inaccuracies and adds inaccurate W-2s to the Earnings Suspense File. Based on SSA data from tax year 2016, SSA added millions of W-2s to the Earnings Suspense File. Each day, SSA electronically forwards to IRS the W-2s it has analyzed, both accurate and inaccurate.

SSA monitors the effectiveness of its checks for inaccurate W-2s by testing its software before each filing season. SSA creates test data that have characteristics of inaccurate W-2s and processes these data through the annual wage reporting software to ensure automated checks identify potentially inaccurate W-2s according to SSA's criteria. SSA also has an electronic reporting system in place that SSA employees can use to identify and document problems for management throughout the year. SSA officials told us they have not identified any problems that have prevented checks from working as intended. This public report omits information that SSA has deemed sensitive related to (1) SSA's efforts to improve W-2 accuracy checks, and (2) SSA's challenges in addressing risks associated with employment-related identity fraud.

SSA Is Taking Steps to More Effectively Communicate Relevant Information to Both Victims and Employers

SSA is taking steps to more effectively communicate to both victims and employers information on potentially inaccurate W-2s, including potential employment-related identity fraud W-2s. When SSA detects a potentially inaccurate W-2, SSA may send a letter to the employer or employee listed on the W-2 that notifies them of the potential inaccuracy. SSA first sends letters to employers. Responses can help SSA resolve inaccuracies by identifying correct wage earners. Responses can also support SSA's efforts to provide taxpayers with correct benefits.
SSA sends different letters to employees and employers depending on the type of potential inaccuracy detected:

Mismatched name and SSN. In March 2019, SSA resumed sending Educational Correspondence (EDCOR) letters to employers who submitted W-2s electronically, notifying them of the number of W-2s they electronically submitted with mismatched names and SSNs. The letters request that employers use SSA's Business Services Online portal to view specific names and SSNs that did not match and provide necessary Form W-2C corrections. According to SSA, EDCOR letters are meant to educate employers about mismatches and help SSA post wages to correct earnings records. SSA officials told us that SSA had mailed about 577,000 EDCOR letters for electronically submitted W-2s as of June 2019 since resuming the process. Officials said the agency also began sending EDCOR letters for W-2s submitted on paper beginning in October 2019. SSA previously sent EDCOR notices from 1994 through 2007, but SSA stopped sending these notices in response to litigation surrounding a proposed DHS regulation that would have required employers to follow a prescribed course of action upon learning of an employee name or SSN mismatch. DHS rescinded its proposed rule in October 2009. SSA officials told us the agency decided to resume sending EDCOR notices in 2019 because employers are using Business Services Online to file more W-2s electronically. Therefore, employers may be more familiar with the system used to submit W-2C corrections. SSA has taken action to improve the effectiveness of EDCOR letters since the letters were discontinued in 2007. In 2008, the SSA OIG reported that EDCOR letters were not effective in either communicating wage-reporting problems to employers or identifying correct wage earners. For example, the OIG found that 74 percent of employers who reported W-2s with mismatched names and SSNs did not receive letters. Most employers that did not receive letters submitted 10 or fewer mismatched W-2s, whereas SSA only sent letters to employers that submitted more than 10 mismatched W-2s. SSA officials told us that EDCOR letters sent beginning in 2019 are sent to every employer who submits a W-2 with a mismatched name and SSN.

Disclaimed wages. When an individual disclaims wages, SSA staff have the option of sending a letter to the employer who paid the wages to attempt to identify the wage earner. In 2008, the SSA OIG found that SSA seldom sent letters to employers, and recommended that SSA consider generating a standard annual letter to each employer that submitted a W-2 that was later disclaimed. SSA officials told us that, as of May 2019, SSA staff in all SSA regional offices routinely send letters to employers notifying them of disclaimed wages. SSA officials reported the agency sent 20,945 letters in fiscal year 2018.

IRS Has Not Assessed Opportunities to Expand Detection and Deterrence Activities

IRS's Use of Nonfiler to Detect and Deter Employment-Related Identity Fraud Is Limited

IRS uses relevant information to detect inaccurate W-2s, including potentially fraudulent W-2s, and makes this information available to relevant enforcement programs, including Nonfiler, which IRS uses to follow up with individuals who appear to owe taxes but have not filed. IRS detects inaccurate W-2s using the results of SSA's annual wage reporting checks and its own efforts to reconcile and correct some inaccuracies.
As part of this process, IRS receives from SSA the Earnings Suspense File W-2s that have mismatched names and SSNs and attempts to locate the wage earner's correct name and SSN. IRS does so by identifying previously filed tax returns that list the same address as the mismatched W-2s. IRS then compares the names and SSNs listed on W-2s to those on the tax returns to identify accurate name and SSN combinations. Accurate and inaccurate W-2s are then made accessible to IRS enforcement programs, including Nonfiler. Nonfiler and other programs that support IRS's efforts to collect taxes owed from wage earners, including potential employment fraudsters, may also deter fraudulent activity by reducing the likelihood that fraudsters succeed in not paying taxes owed. In reviewing IRS actions that may help deter employment-related identity fraud, we found that Nonfiler uses W-2 information to identify and follow up with individuals who appear to owe taxes but did not file required returns. However, we also found that IRS's use of Nonfiler to collect taxes owed by potential employment fraudsters is limited. Nonfiler is capable of addressing cases involving certain types of employment-related identity fraudsters who appear to owe taxes—specifically fraudsters for whom IRS receives W-2s that have mismatched names and SSNs as well as SSNs associated with deceased persons or children. However, the agency has made limited use of Nonfiler to collect taxes owed on such cases and faces the following resource challenges in doing so:

Reduced staffing capacity. IRS determines the number of noncompliance cases pursued by its enforcement programs based on available resources. IRS's budget declined by about $2.1 billion (15.7 percent) from fiscal years 2011 through 2018 after adjusting for inflation, and corresponding staff reductions have been most significant within IRS enforcement programs, such as Nonfiler. In 2018, the Treasury Inspector General for Tax Administration (TIGTA) reported that resource constraints have left IRS with fewer resources to work cases involving individuals who do not respond to nonfiler notices. For example, TIGTA found that IRS created 430,000 new compliance cases in fiscal year 2017 involving individuals who did not respond to nonfiler notices, compared to 1.6 million in fiscal year 2013.

Competing priorities. IRS is focusing its resources on modernizing its information technology systems and implementing Public Law 115-97, commonly referred to as the Tax Cuts and Jobs Act. This law was enacted in December 2017 and included significant changes to corporate and individual tax law.

Costly follow-up contacts. According to IRS officials, collecting taxes owed by employment-related identity fraudsters typically requires IRS staff to make in-person contact with taxpayers by locating them at their places of work, which is resource intensive. According to IRS, in-person contact is typically required because employment fraudsters are unlikely to provide employers and IRS accurate address information on W-2s; therefore, IRS often lacks information needed to reach employment fraudsters through mailed Nonfiler notices.

IRS Has Not Assessed Opportunities to Expand Activities That May Deter Some Fraudsters Who Underwithhold

To help reduce the number of nonfilers and underreporters, IRS uses the Withholding Compliance Program (WHC) to pre-emptively identify taxpayers who appear to be substantially underwithholding taxes based on prior-year W-2 and other information.
Through this program, IRS issues "lock-in letters" to employers of individuals who appear to be underwithholding. Lock-in letters require employers to adjust employees' withholding amounts to rates specified by IRS rather than by the employees. IRS adjusts withholding rates based on the number of withholding allowances IRS determines the taxpayer is entitled to claim. Employees are also sent lock-in letters informing them of changes to their withholding rates. WHC may be a more cost-effective opportunity than Nonfiler for IRS to collect appropriate taxes from those employment-related identity fraudsters who do not otherwise file returns and pay taxes owed. First, WHC lock-in letters would be more likely to reach their intended recipients, making them potentially more effective in obtaining their intended responses. IRS sends lock-in letters to employers, and IRS officials said the agency typically has accurate address information for employers. IRS also sends notices to employees affected by lock-in letters, but these letters do not request or require taxpayer action. Second, businesses that employ employment-related identity fraudsters may be more likely to comply with lock-in letters than fraudsters would be to respond to Nonfiler notices. According to a 2018 TIGTA report, compliance with lock-in letters could be further improved if IRS took action against employers who do not comply with the letters by adjusting employees' withholdings accordingly. TIGTA recommended that IRS penalize employers who do not respond. IRS has agreed to consider penalties, and officials told us the agency is evaluating opportunities to do so. Third, we have previously reported that IRS is less likely to collect taxes owed the longer it takes IRS to contact taxpayers. Therefore, it is likely more effective for IRS to use WHC to address potential tax liabilities before they accrue, rather than use Nonfiler to assess and attempt to contact fraudsters and collect taxes owed months after filing deadlines have passed. According to IRS officials, WHC issues lock-in letters to address underwithholding by some employees who use matching names and SSNs; however, the program does not issue lock-in letters for cases involving W-2s with mismatched names and SSNs because of privacy concerns. IRS officials said the agency has an obligation to protect all taxpayers, including potential employment-related identity fraudsters. IRS officials told us that IRS previously sent lock-in letters for cases involving mismatched names and SSNs but stopped in 2012 because the agency wanted to avoid potentially disclosing an employment-related identity fraudster's identifying information, such as the names of their employers, to those individuals whose SSNs were used to commit employment fraud. However, IRS could also redact personally identifiable information in the lock-in letters, as it already does when mailing tax return transcripts. For example, in response to data privacy concerns, in September 2018 IRS began including just the first four characters of business names on tax return transcripts requested by taxpayers. This approach could also be used for sending lock-in letters to employees to reduce disclosures of personally identifiable information in instances where lock-in letters do not reach their intended recipients. IRS officials told us that WHC's limited resources prevent the program from addressing all underwithholding cases currently identified by the program.
Officials also said that, for that reason, expanding WHC to include cases with mismatched names and SSNs would not result in WHC selecting additional cases. However, by not including cases with mismatched names and SSNs, IRS may be missing an opportunity to identify and select a population of underwithholding cases that could lead to greater revenue collection, because some cases with mismatched names and SSNs may have greater underwithholding than the cases currently selected by WHC. If IRS were able to allocate more resources toward generating additional lock-in letters in the future, these potential benefits could also increase. In addition, WHC may be more affordable than other enforcement programs to administer on a case-by-case basis because, unlike enforcement cases initiated through Nonfiler, WHC does not result in IRS pursuing taxpayers through progressively more costly methods of contact to collect additional revenue. IRS officials acknowledged this possibility and told us the agency has not assessed the potential costs and benefits of expanding WHC to include cases with mismatched names and SSNs.

Internal control standards state that federal managers should use quality information to achieve their objectives, communicate relevant information throughout the agency, and both assess and address risks to their mission. Additionally, leading practices in managing fraud risks include considering the benefits and costs of controls for addressing fraud-related risks. Further, IRS's Strategic Plan has goals to use data analytics to inform decision making and protect the integrity of the tax system. Because IRS has not evaluated and documented the costs and benefits of expanding WHC to address risks posed by employment-related identity fraudsters, the agency cannot determine whether expanding WHC to include mismatch cases would enable it to collect additional revenue and deter employment fraud. Conducting such an assessment would allow IRS to make that determination.

IRS's Approach to Managing Impacts on Victims Creates an Enforcement Gap

To manage the impacts of employment-related identity fraud on victims, IRS limits the circumstances under which these victims may be selected by enforcement programs. In analyzing IRS data, we found about 3 million taxpayers who have either been identified as "employment-related identity theft" victims by IRS (Action Code 525) or who have identified themselves as victims to IRS (Action Code 501). Automated Underreporter (AUR) programming prevents these taxpayers from being selected due to wage discrepancies. Instead, AUR analyzes these taxpayers for reporting discrepancies for other income types, such as investment income. IRS officials told us excluding these taxpayers from AUR's W-2 checks helps IRS avoid burdening some victims who might otherwise be selected based on wages earned by a fraudster using the taxpayer's name and SSN. Selected victims would be required to follow up with IRS to avoid being assessed tax liabilities. Following up would be particularly burdensome for victims whose names and SSNs are used by fraudsters year after year.
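In simplified form, the exclusion works like selection logic of the following shape. This is a hypothetical sketch, not IRS's actual AUR code; the account fields are illustrative:

```python
IDT_EXCLUSION_CODES = {501, 525}  # victim-identified and IRS-identified IDT

def select_for_wage_check(account: dict) -> bool:
    """Hypothetical AUR-style screen: skip wage-discrepancy checks for
    accounts carrying an identity-theft action code."""
    if IDT_EXCLUSION_CODES & set(account["action_codes"]):
        # Analyzed only for other income types (e.g., investment income).
        return False
    # Otherwise, flag when employer-reported wages exceed return wages.
    return account["employer_reported_wages"] > account["return_reported_wages"]

# A victim who also underreported his or her own wages is never flagged:
account = {"action_codes": [501],
           "employer_reported_wages": 52_000,
           "return_reported_wages": 30_000}
print(select_for_wage_check(account))  # False -- the enforcement gap
```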
Taxpayers with IDT action codes on their accounts are eligible for analysis and selection by other enforcement programs based on discrepancies in W-2 reporting; however, these programs' low selection rates suggest that it is unlikely IRS will follow up with these victims and notify them of these discrepancies. For example, although Nonfiler analyzes these taxpayers for evidence of income indicating a filing requirement, TIGTA found that IRS notified just 25,105, or 14 percent, of the 179,878 nonfiler cases identified in fiscal year 2016 of these discrepancies. Likewise, although IDT victims may be selected for examination, IRS data show that the agency examined about 892,000, or 0.6 percent, of all individual income tax returns in fiscal year 2018, the most recent year for which data are available. IRS officials acknowledge that some of the approximately 3 million taxpayers with Action Codes 501 or 525 may underreport their own incomes, and excluding these taxpayers from AUR's W-2 discrepancy checks creates an enforcement gap, enabling some victims who actually underreported their own wages to avoid enforcement. IRS does not know how many of these taxpayers have underreported wage income. However, some IDT victims excluded from AUR's wage discrepancy checks may be incentivized to underreport wages and pay less tax than they owe if they learn IRS is unlikely to hold them accountable for paying those taxes. Individuals could learn about this enforcement gap, for example, if they accidentally failed to report wages from an employer and were not later contacted by IRS. In addition, other taxpayers may be incentivized to falsely claim they are IDT victims to take advantage of this enforcement gap. In its research into behavioral insights, IRS has found that taxpayers are more likely to be noncompliant when they perceive doing so can yield substantial benefits with minimal costs. We have also previously reported that the extent to which taxpayers misreport income closely aligns with IRS's ability to detect such noncompliance. In some instances, IRS has information needed to distinguish wages earned by legitimate taxpayers from those potentially earned by employment-related identity fraudsters using that same taxpayer's name and SSN. For example, IRS can reasonably conclude the legitimate taxpayer earned the wages if they are reported on a current- or prior-year return filed by the taxpayer, as this indicates the taxpayer attests to having worked for the employer who paid the wages. Because IRS excludes IDT victims from AUR's W-2 discrepancy checks, IRS may not identify or collect taxes owed by some who unintentionally underreport their wages (e.g., by forgetting to include a W-2 from a second employer). In addition, IRS is missing an opportunity to incentivize taxpayers to accurately report their income and avoid intentional underreporting. As previously stated, federal internal control standards call for managers to both use quality information and respond to risks. According to IRS officials, modifying AUR to effectively identify the underreporting of wages actually earned by identity theft victims would require IRS to not only adjust AUR to include wage discrepancy checks for these taxpayers but also to change how AUR identifies wage discrepancies. IRS officials told us that when AUR evaluates a taxpayer's wage information for discrepancies, the program evaluates taxpayers based on aggregated W-2 information.
AUR is not programmed to evaluate taxpayers by analyzing some of their W-2s but not others, such as potential employment fraud W-2s. IRS officials told us modifying AUR to include W-2 discrepancy checks of these taxpayers while excluding potentially fraudulent W-2s would not be a cost-effective use of IRS resources at this time. Specifically, officials noted that AUR discrepancy checks are programmed in legacy assembly language code, a low-level computer language initially used in the 1950s. Although they were unable to provide an estimate for the costs of modifying this code, IRS officials said the effort would be resource intensive. IRS is modernizing outdated information technology systems, and officials said it would be more cost effective for the agency to modify W-2 discrepancy checks once the assembly language is replaced. IRS plans to retire 75 percent of the agency's legacy assembly language code and legacy Common Business-Oriented Language (COBOL) code by the end of fiscal year 2024. Officials told us the agency does not have a specific timeline in place for updating the assembly code that supports AUR, though doing so is a program goal. Modifying AUR to include wage discrepancy checks for IDT victims as part of IRS's broader effort to update AUR's programming code would enable IRS to avoid making costly and redundant changes to legacy coding that IRS plans to replace. It would also be consistent with a goal outlined in IRS's Strategic Plan to advance the use of data and analytics to inform decision making, and could potentially result in IRS collecting additional revenue by enabling IRS to analyze wage information for about 3 million additional taxpayers to identify any wage reporting discrepancies. Some of these taxpayers may have greater revenue collection potential than cases AUR would otherwise select.

SSA and IRS Share Wage Reporting Data, but Opportunities Exist to Improve Collaboration

SSA and IRS Collaborate on Combined Annual Wage Reporting with Defined Roles and Responsibilities

SSA and IRS both have responsibility for parts of the Combined Annual Wage Reporting (CAWR) process to exchange W-2 information between the two agencies and to help ensure that taxpayers report and pay the proper amount of taxes on their wages. The CAWR Memorandum of Understanding (MOU), which was signed in 2007, is a key part of their collaborative effort, and SSA and IRS are legally bound to the mutually agreed-upon purpose and functions. Specifically, the CAWR MOU covers the collaborative processes through which SSA and IRS share earnings information, including establishing clear roles and responsibilities for this effort, as called for by leading practices for interagency collaboration. IRS oversees tax administration, including ensuring compliance with tax laws. SSA acts as an agent to these activities by processing W-2s. As illustrated in figure 6, processes covered by the CAWR MOU include SSA sending accurate and inaccurate W-2s to IRS. Also, if wages are disclaimed through IRS, or IRS is able to correct a Social Security number-name mismatch using tax information, IRS sends this information to SSA. Federal law requires the Commissioner of Social Security and the Secretary of the Treasury to share W-2 information, and permits use of the CAWR MOU to effectuate this process. It also requires that the MOU remain in full force and effect until modified or otherwise changed by mutual agreement of the heads of each agency.
SSA and IRS Have Been Working to Update the 2007 CAWR MOU Since 2016

SSA and IRS have taken steps to update the 2007 CAWR MOU, but the effort has been underway for more than 3 years. As we reported in September 2012, continually updating agreements is an important part of the leading practice for written guidance and agreements. SSA and IRS officials told us that discussions about the update began in 2012 and the substantive work of updating the MOU began in August 2016. Since the MOU has not been updated in more than a decade, certain data-exchange materials and provisions in the MOU have become outdated, such as the references to microfilm. According to SSA and IRS officials, the MOU update has been driven by efforts at the staff level, with executives briefed on the status. We have previously found that leadership involvement in collaborative efforts is needed to overcome the many barriers to working across agency boundaries. SSA officials noted that having highly involved executives would indicate problems with the MOU update process. IRS officials said that the staff level is the appropriate place to negotiate the MOU update, with oversight from executives as needed. However, at both agencies, officials at the staff level do not have the authority to agree to any updates or modifications of the MOU. SSA and IRS are responsible for ensuring the MOU update process is thorough, complete, and carried out in a timely manner. SSA and IRS officials stated that while the MOU is the cornerstone of SSA-IRS collaboration, completing the update is challenging because there are competing priorities. Additionally, the agencies are not legally required to update the MOU; instead, the MOU is in effect until modified or otherwise changed by mutual agreement of the Commissioner of Social Security and the Secretary of the Treasury (who delegated this authority to the Commissioner of Internal Revenue). In September 2019, SSA and IRS officials told us they plan to complete the update of the MOU in spring 2020, more than 3 and a half years after the effort to update the MOU began. Standards for project management call for developing a plan with specific actions and time frames. A plan could also identify the resources, processes, and individuals necessary to carry out the update. SSA and IRS officials acknowledged that they did not develop such a plan for the ongoing effort to update the MOU. By developing a plan for future updates that includes actions, time frames, and responsible individuals, including executive leadership, SSA and IRS would have greater assurance that the MOU would be updated when needed.

SSA and IRS Have Not Developed Shared Goals and Performance Measures or Conducted Required Annual Reviews of the MOU Process

While SSA and IRS have established joint functions in the CAWR MOU, the agencies do not have shared goals and performance measures to help track progress in implementing these functions and identify potential improvements. As we reported in September 2012, defining short- and long-term outcomes is an important part of the leading practice for outcomes and accountability for collaborative efforts. This includes defining and articulating common goals based on what the group shares in common and developing mechanisms, such as performance goals and measures, to evaluate the results. SSA officials said existing goals and measures in the MOU were sufficiently effective.
However, we did not find evidence of goals and measures in the MOU, and neither SSA nor IRS officials could provide documentation of specific examples. Establishing shared goals and performance measures for the CAWR MOU functions would help SSA and IRS monitor and evaluate the results, as well as identify potential weaknesses and potential improvements. While the MOU lacks goals and measures, it does contain provisions for the agencies to conduct annual studies of the CAWR process and to submit a report to each commissioner on the results. However, the agencies have not consistently implemented these provisions. Monitoring progress is an important part of the leading practice for outcomes and accountability for collaborative efforts, and continually monitoring agreements is an important part of the leading practice for written guidance and agreements. For SSA and IRS, this means monitoring progress toward fulfilling their legal obligation to implement the CAWR MOU. In the 2007 CAWR MOU, SSA and IRS agreed to the following monitoring provisions related to conducting an annual review of the CAWR process.

Conduct annual joint studies of the CAWR process. Since the MOU was implemented in 2007, IRS and SSA have not conducted a joint study of the CAWR process. These reviews were intended to assist the required annual review of the MOU and help inform the agencies of potential improvements to the CAWR process. Specifically, the MOU requires that, upon completion of the annual review, a joint SSA and IRS report be sent to each commissioner consisting of the results of the review, a list of any changes that have occurred in the process, and any recommendation for changes. This is intended to serve as an important monitoring function for the MOU. IRS officials said the agencies have been unable to conduct annual joint studies or submit the required annual reports primarily because the MOU is extensive and affects many offices at both agencies. SSA and IRS officials said that they plan to change to a biannual interagency review of the MOU so they can do a better job of keeping the MOU updated and relevant. However, officials did not provide information about any steps they plan to take to ensure that the reviews would occur as required. According to SSA officials, SSA and IRS plan to meet every 3 or 6 months to review existing agreements, including the CAWR MOU. This may be a means of identifying necessary changes to the CAWR process, since regular communication can facilitate effective collaboration; however, officials did not provide additional details on these potential new meetings.

Conduct annual independent studies of the CAWR process. SSA had no records that it had conducted an independent study of the process in the past 3 years. IRS conducted two independent studies in 2018 on the CAWR process, which primarily focused on IRS's adherence to its policy guidance. Annual independent studies were intended to serve as another feedback mechanism to assist in the review of the MOU. According to SSA and IRS officials, they have not implemented these monitoring provisions because of resource constraints. As previously discussed, the agencies are updating the CAWR MOU and plan to finalize the updated MOU by spring 2020. Officials told us that, similar to the 2007 MOU, the updated MOU will include requirements for periodically reviewing the MOU to identify potential improvements to the CAWR process. However, the time frames may change.
Developing and documenting a strategy for implementing the monitoring provisions in the updated MOU would provide greater assurance that SSA and IRS are periodically assessing the CAWR process and identifying opportunities for improvement, as required.

SSA and IRS Have Developed Ways to Operate Across Agency Boundaries, but Lack Sufficient Common Terminology Related to the CAWR Process and Identity Fraud

As we reported in September 2012, agreeing on common terminology and definitions is an important part of bridging organizational cultures. One way to operate across agency boundaries is to foster open lines of communication. SSA and IRS do this by holding interagency meetings, including quarterly executive-level and monthly technical-level meetings. In addition, officials from SSA and IRS said that the agencies have a strong working relationship and that officials at both agencies have frequent informal communication. The agencies also established a fraud working group, which held introductory meetings in 2018 and 2019. While the group does not have a formal mission statement, its general scope of responsibility is to identify areas of common interest related to mitigating fraud and to collaborate on best practices and efforts to mitigate fraud risks. However, SSA and IRS have developed limited common terminology and definitions related to their CAWR collaboration effort. The agencies have agreed on 10 definitions in their MOU, but these definitions are very limited in scope; for example, two of these definitions simply spell out the agency names, and none of the definitions are for the 20 data variables the agencies exchange daily. Both SSA and IRS officials stated that common terminology related to identity fraud would be helpful, and acknowledged that they use different terminology and have to call each other to ask what different terms mean. SSA officials cited the use of different terminology at SSA and IRS as a barrier to collaboration. Because of the absence of common terminology, IRS has been unaware of information it receives from SSA in some cases. For example, through the common format record exchange, SSA shares information with IRS about why SSA determined that a W-2 is inaccurate, but IRS was unaware of this information. SSA told us that it sends a table to IRS annually that includes code combinations for their data transfers and their meanings, which explain why the W-2 was accurate or inaccurate. However, SSA officials were unsure of the extent to which IRS officials understood the codes. One reason that SSA determines a W-2 is inaccurate is if earnings with the same name, SSN, and EIN were disclaimed in previous years. SSA communicates this to IRS using codes within the W-2 record that are labeled "invalid due to SWED." However, SSA and IRS have not defined "SWED," and IRS officials said that they were unaware of receiving information from SSA about previously disclaimed wages. Officials said they interpreted the information to relate to invalid wages due to name and SSN mismatches and spent time trying to resolve the mismatch issue. They said that such information could be useful for future enforcement efforts. Further, IRS officials said that they were also unaware of other code combinations that SSA officials told us they use to inform IRS about accurate and inaccurate wages. IRS attributed its unfamiliarity with the data elements coming from SSA to staff turnover, as the key IRS officials who were familiar with the data elements have retired; a documented set of common definitions, such as the data dictionary sketched below, is one way to preserve such institutional knowledge.
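One way to document such common definitions is a versioned, machine-readable data dictionary that both agencies maintain. The sketch below is purely illustrative; the field name, codes, and descriptions are hypothetical, not actual SSA exchange values:

```python
# Hypothetical entries in a shared SSA-IRS data dictionary for the
# daily CAWR exchange. The actual 20 exchanged variables would each
# need an agreed, documented definition.
CAWR_DATA_DICTIONARY = {
    "version": "2020-04",
    "fields": {
        "wage_validity_code": {
            "owner": "SSA",
            "description": "Why SSA judged a W-2 accurate or inaccurate",
            "values": {
                "00": "Name and SSN match; wages posted to earnings record",
                "31": "Name/SSN mismatch; posted to Earnings Suspense File",
                "42": "Earnings previously disclaimed for same name/SSN/EIN",
            },
        },
    },
}

def describe(field: str, code: str) -> str:
    """Return the agreed meaning of an exchanged code, or flag it."""
    f = CAWR_DATA_DICTIONARY["fields"].get(field)
    if f is None or code not in f["values"]:
        return f"UNDEFINED: {field}={code}; escalate to the MOU working group"
    return f["values"][code]

print(describe("wage_validity_code", "42"))
```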
However, IRS could have been aware of the meaning of the variables if the agencies had established and documented common definitions for these data elements. In addition, according to IRS officials, they have limited resources for following up on information that SSA is sharing because they have been focused on competing priorities, including implementing the Tax Cuts and Jobs Act of 2017. SSA and IRS officials noted that the next version of the MOU will define additional terminology that was not defined in previous MOU documents. For example, officials said that "EIN" and "TIN" are key IRS terminology that may be defined in the new MOU. Until SSA and IRS clearly define the data elements they exchange as part of the CAWR process, SSA and IRS are at risk of not communicating effectively about CAWR and, thus, missing opportunities to use data more effectively to identify fraudulent or otherwise inaccurate W-2s. This could be done, for example, by developing a shared data dictionary that clearly defines all of the data elements the agencies are exchanging.

Conclusions

Employment-related identity fraud can have negative impacts on victims and poses risks to both SSA and IRS. Victims may face IRS enforcement actions or a reduction in benefits for some federal programs based on wages earned by fraudsters. The full scope of this fraud is unknown. In 2018, IRS marked more than 800,000 taxpayer accounts with Action Code 525, "employment-related identity theft." However, IRS's use of the term "employment-related identity theft" likely understates the true scope and impact of this type of fraud and may be misleading to both agency decision makers and Congress. Additionally, by assessing the feasibility of incorporating additional compliance checks into its checks of employment-related identity fraud, IRS may be able to develop a method for identifying additional taxpayers at risk of this type of fraud. SSA and IRS rely on accurate W-2 information to carry out their missions and have taken steps to detect the submission of fraudulent W-2s. Evaluating the costs and benefits of expanding IRS's Withholding Compliance Program (WHC) to include cases with mismatched names and SSNs may provide IRS an opportunity to increase revenue collection. Additionally, while IRS has taken steps to manage the impacts of identity fraud on victims, the agency's decision to exclude approximately 3 million individuals with IDT action codes from Automated Underreporter's (AUR) wage discrepancy checks has resulted in a gap in enforcement coverage. IRS plans to update most of the agency's legacy programming code by the end of fiscal year 2024. Updating AUR's programming to include these individuals would enable IRS to close this enforcement gap and potentially increase revenue. Further, SSA and IRS's 2007 CAWR MOU plays an important role in the agencies' efforts to accurately report wage information and resolve mismatches. While the agencies expect to finalize their first update of the MOU by spring 2020, efforts to update the MOU have been ongoing for more than 3 years. Developing a plan for implementing future updates would provide greater assurance that the MOU would be updated when needed. Additionally, developing performance goals and measures for the MOU, as well as a strategy for assuring the studies called for by the MOU are completed within the specified time frames, would help ensure that SSA and IRS are periodically assessing the CAWR process and identifying opportunities for improvement.
Moreover, by clearly defining the data elements IRS and SSA exchange as part of the CAWR process, the agencies would be better positioned to effectively use the data to identify fraudulent or otherwise inaccurate W-2s.

Recommendations for Executive Action

We are making a total of 12 recommendations: eight to IRS and four to SSA.

The Commissioner of Internal Revenue should modify the title of IRS's employment-related identity theft action code 525 to reflect the type of employment-related identity fraud encompassed by this action code. (Recommendation 1)

The Commissioner of Internal Revenue should assess and document the feasibility of incorporating additional checks into its automated checks of employment-related identity fraud for populations at risk of employment-related identity fraud, such as children, elderly, deceased persons, and individuals associated with multiple wage records. (Recommendation 2)

The Commissioner of Internal Revenue should assess and document the costs and benefits of using WHC to address compliance risks posed by potential employment-related identity fraudsters who owe taxes and take appropriate action, as needed. (Recommendation 3)

The Commissioner of Internal Revenue should modify AUR to include wage discrepancy checks for victims of employment-related identity fraud once IRS has updated AUR's legacy programming code. (Recommendation 4)

The Commissioner of Internal Revenue should, in collaboration with the Commissioner of Social Security, develop and document a plan for updating future CAWR MOUs. The plan should identify actions, time frames, and responsible parties, including executive leadership. (Recommendation 5)

The Commissioner of Internal Revenue should, in collaboration with the Commissioner of Social Security, develop and implement goals and performance measures for the CAWR MOU. (Recommendation 6)

The Commissioner of Internal Revenue should, in collaboration with the Commissioner of Social Security, develop and document a strategy for assuring that the reviews required by the updated MOU are completed within the specified time frames. (Recommendation 7)

The Commissioner of Internal Revenue should, in collaboration with the Commissioner of Social Security, clearly define the data elements they exchange with SSA. (Recommendation 8)

The Commissioner of Social Security should, in collaboration with the Commissioner of Internal Revenue, develop and document a plan for updating future CAWR MOUs. The plan should identify actions, time frames, and responsible parties, including executive leadership. (Recommendation 9)

The Commissioner of Social Security should, in collaboration with the Commissioner of Internal Revenue, develop and implement goals and performance measures for the CAWR MOU. (Recommendation 10)

The Commissioner of Social Security should, in collaboration with the Commissioner of Internal Revenue, develop and document a strategy for assuring that the reviews required by the updated MOU are completed within the specified time frames. (Recommendation 11)

The Commissioner of Social Security should, in collaboration with the Commissioner of Internal Revenue, clearly define the data elements they exchange with IRS. (Recommendation 12)

Agency Comments

We provided a draft of the sensitive version of this report to IRS, SSA, the Federal Trade Commission, the Department of Health and Human Services, and the Department of Homeland Security for comment. In comments reproduced in appendix II, IRS neither agreed nor disagreed with the recommendations.
In comments reproduced in appendix III, SSA agreed with the recommendations and noted that SSA and IRS officials are meeting on a recurring basis to complete an updated memorandum of understanding. IRS, SSA, the Department of Homeland Security, and the Federal Trade Commission provided technical comments, which were incorporated as appropriate. The Department of Health and Human Services had no comments on the report. We are sending copies of this report to the appropriate congressional committees, the Commissioner of Internal Revenue, Commissioner of Social Security, Chairman of the Federal Trade Commission, Secretary of Health and Human Services, Acting Secretary of Homeland Security, Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact Jessica Lucas-Judy at (202) 512-9110 or LucasJudyJ@gao.gov, or Rebecca Shea at (202) 512-6722 or SheaR@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) the potential scope of employment-related identity fraud, including what the Internal Revenue Service (IRS) knows about this type of fraud and what we could determine by analyzing the Department of Health and Human Services' National Directory of New Hires (NDNH) and IRS data; (2) Social Security Administration (SSA) actions to detect and deter this fraud as well as notify victims; (3) IRS actions to detect and deter this fraud as well as notify victims; and (4) the extent to which SSA and IRS are collaborating to address the issue. To describe and analyze the potential scope of employment-related identity fraud, we took the following steps:

1. Identified groups at risk of identity theft. We first reviewed Treasury Inspector General for Tax Administration, SSA Office of the Inspector General, and our prior reports on Social Security number (SSN) misuse to determine common characteristics of individuals who are at risk of SSN misuse. These characteristics include being deceased, elderly, a child, or having three or more wage records during the 3-month period of our review. Based on these reports, we defined "elderly" as over age 84 and "children" as under age 14 for the purposes of this review.

2. Identified SSNs at risk of SSN misuse. We used SSA's full death file for dates of death for deceased individuals, and its Numerical Index File (Numident) for dates of birth for living individuals. We next compared full death file and Numident data to a quarterly extract of NDNH data listing the names and SSNs of individuals who earned wages between August and October 2016. We selected data from this quarter because, at the time of our review, these were the oldest data for which relevant IRS tax data were also available. We used this comparison to identify individuals employed between August and October 2016 who also met at least one of these at-risk characteristics. NDNH is a database of individuals employed in the United States. Data are collected and reported by state workforce agencies and federal agencies, and the database is administered by the Department of Health and Human Services' Office of Child Support Enforcement. NDNH data comprise three types: verified, unverified, and unverifiable.
The verified data—used in this analysis—have been checked against SSA records to confirm that the name and SSN match. Unverified data include data that do not match on name or SSN, and unverifiable data include data that did not include enough information to attempt a match (e.g., when states submit partial or missing name information). According to the Department of Health and Human Services, there were 584,013,484 verified wage records, 18,629,720 unverified, and 91,134,352 unverifiable as of December 31, 2018. Verified data were used in this analysis to make the estimate more conservative since cases of potential synthetic identity theft—where the name and SSN do not match—are excluded from verified data. NDNH is designed to assist state child support agencies in locating parents and taking appropriate, interstate actions concerning child support orders. Some authorized agencies also use NDNH data to help prevent overpayments and detect fraud. For example, IRS has access to NDNH to administer the Earned Income Tax Credit. However, IRS and SSA are not authorized to use NDNH information to detect potential employment-related identity fraud. We were authorized to use NDNH through the GAO Access and Oversight Act of 2017, Pub. L. No. 115-3, 131 Stat. 7. 3. Identified cases of potential employment-related identity fraud. Using IRS data, we identified at-risk individuals for whom wages reported by an employer on Form W-2, Wage and Tax Statement (W-2), were not reported on a 2016 tax return. When possible, we also limited the analysis to cases where the taxpayer had a known filing requirement. We also identified cases that were consistent with misuse of SSNs for employment-related identity fraud, rather than taxpayer noncompliance. However, we were unable to determine the total extent of taxpayer noncompliance for taxpayers included in this analysis. Our analysis is not intended to be a comprehensive effort to identify all potential cases of employment-related identity fraud. We focused our analysis on cases where matching names and SSNs were used to obtain employment. These cases pose a risk to SSA, IRS, and victims, yet little is known about these cases. 4. Analyzed tax characteristics of potential employment-related identity theft victims and other taxpayers. Last, we used IRS's Compliance Data Warehouse (CDW) to analyze selected tax characteristics both of individuals we identified as having at least one employer-submitted Form W-2 that was not reported on a 2016 tax return and of those whose employer-submitted Forms W-2 were reported. For example, we analyzed data on wage withholding rates, the prevalence of selected IRS identity theft indicators on taxpayers' accounts, and IRS enforcement actions taken against these individuals. We assessed IRS procedures against the information gathering and data analytics leading practices in the Framework for Managing Fraud Risks in Federal Programs. We did not conduct a comprehensive fraud risk assessment of the IRS enforcement programs. Our assessment was limited to the control activities surrounding employment-related identity fraud. We assessed the reliability of the full death file, Numident, NDNH quarterly wage data, and selected elements of CDW by reviewing relevant documentation, interviewing knowledgeable agency officials, and performing electronic testing to determine the validity of specific data elements in the data. We determined that the data elements used in our analysis were sufficiently reliable for the purpose of our work to describe and analyze the potential scope of employment-related identity fraud.
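To make the matching approach described in steps 1 through 3 concrete, the following is a minimal sketch, assuming hypothetical field names and data structures; it is not GAO's actual code, and the real analysis involved additional steps and controls.

```python
# Illustrative sketch: flag verified NDNH wage records whose SSNs match
# at least one at-risk characteristic (deceased, over age 84, under age
# 14, or three or more wage records in the quarter). All inputs and
# field names are hypothetical.
from collections import Counter

def flag_at_risk(wage_records, deceased_ssns, birth_years, wage_year=2016):
    """Return verified wage records that match at least one at-risk trait."""
    ssn_counts = Counter(r["ssn"] for r in wage_records)
    flagged = []
    for r in wage_records:
        ssn = r["ssn"]
        birth_year = birth_years.get(ssn)  # e.g., from SSA's Numident
        at_risk = (
            ssn in deceased_ssns                                          # full death file
            or (birth_year is not None and wage_year - birth_year > 84)   # elderly
            or (birth_year is not None and wage_year - birth_year < 14)   # child
            or ssn_counts[ssn] >= 3                                       # 3+ wage records
        )
        if at_risk:
            flagged.append(r)
    return flagged

# Toy usage: one SSN with three wage records is flagged three times.
records = [{"ssn": "000-00-0001", "employer": "A"},
           {"ssn": "000-00-0001", "employer": "B"},
           {"ssn": "000-00-0001", "employer": "C"}]
print(len(flag_at_risk(records, deceased_ssns=set(), birth_years={})))  # 3
```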
To assess IRS and SSA actions to detect and prevent employment-related identity fraud as well as notify victims, we reviewed relevant documentation, including IRS's Internal Revenue Manual and SSA's Policy Operations Manual System. We also interviewed knowledgeable officials from both agencies on SSA and IRS processes for detecting and preventing employment-related identity fraud and notifying victims. We compared IRS's and SSA's efforts to relevant federal internal control standards. We also assessed the agencies' efforts against IRS's and SSA's respective strategic plans as well as select leading practices to combat fraud, as identified in the Framework for Managing Fraud Risks in Federal Programs. To evaluate the extent to which IRS and SSA are effectively collaborating to address employment-related identity fraud, we reviewed relevant agency documents, such as IRS and SSA's Combined Annual Wage Reporting Memorandum of Understanding, other IRS-SSA legal agreements, meeting minutes from IRS-SSA joint meetings, and policy manuals. Because of its role in assisting victims and collecting statistics on identity theft, we interviewed agency officials from the Federal Trade Commission in addition to knowledgeable officials from IRS and SSA. Because of its role in helping employers verify the identities of employees, we interviewed officials at the Department of Homeland Security. We focused our assessment on SSA and IRS because those agencies are most directly involved in the wage reporting process used to detect and resolve employment-related identity fraud. We assessed IRS's and SSA's collaboration efforts against leading practices for collaboration we have identified in our prior work and against standards for project management. We identified key elements of each leading practice and assessed the extent to which SSA and IRS collaboration on employment-related identity theft aligned with leading practices or key elements. The performance audit upon which this report is based was conducted from November 2017 to January 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We worked with SSA from October 2019 to May 2020 to prepare this public version of the original sensitive report for public release. This public version was also prepared in accordance with these standards. Appendix II: Comments from the Internal Revenue Service Appendix III: Comments from the Social Security Administration Appendix IV: GAO Contact and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the individual named above, the following staff made key contributions to this report: Neil A. Pinney (Assistant Director), Philip D. Reiff (Assistant Director), Melissa L. King (Analyst-in-Charge), Priyanka Sethi Bansal, Heather A. Collins, Ann L. Czapiewski, Celina F. Davidson, Pamela R. Davidson, Julia C. DiPonio, Shannon J. Finnegan, Steven Flint, Robert L. Gebhart, James A. Howard, Grace H. Kwon, Krista Loose, Maria C. McMullen, Kevin C. Metcalfe, J. Daniel Paulk, Lindsay W. Swenson, Sonya Vartivarian, Ariel Vega, and Miranda J. Wickham.
Why GAO Did This Study Employment-related identity fraud poses risks to IRS's ability to collect taxes owed on wages and to SSA's ability to correctly calculate and manage Social Security benefits. GAO was asked to review employment-related identity fraud. This report examines (1) the potential scope of employment-related identity fraud, including what IRS knows about this type of fraud and what GAO could determine by analyzing Department of Health and Human Services' National Directory of New Hires (NDNH) and IRS data; (2) SSA and IRS actions to detect and deter this fraud as well as notify victims; and (3) SSA and IRS's collaboration on the issue. GAO analyzed 3 months of 2016 NDNH wage data and 2016 IRS taxpayer data to identify potential employment-related identity fraud. GAO also reviewed relevant IRS and SSA documentation and interviewed agency officials. This is a public version of a sensitive report that GAO issued in January 2020. Information that SSA deemed sensitive has been omitted. What GAO Found Employment-related identity fraud occurs when people use a name or Social Security number (SSN) other than their own to get a job. People may do this if they are not authorized to work in the United States or are trying to avoid child support payments, among other reasons. Victims may face Internal Revenue Service (IRS) enforcement actions based on wages earned by fraudsters. IRS identified more than 818,000 cases in 2018, but this included only one form of employment-related identity fraud—mismatches between the identity listed on the Form W-2, Wage and Tax Statement (W-2) and the identity on the tax return. The true scope of employment-related identity fraud is unknown. GAO reviewed additional forms of this fraud and identified 1.3 million SSNs that for 2016 had both (1) characteristics associated with employment-related identity fraud; and (2) wages reported by the employer on a W-2, but not reported by the employee on a tax return. This includes about 9,000 individuals whose employers reported W-2s in five or more states, but who did not include them all on their tax return (see figure). The Social Security Administration (SSA) processes W-2s before sending W-2 data to IRS for enforcement purposes. SSA has developed processes to detect some inaccurate W-2s and notify potential fraud victims. IRS uses W-2 information to deter some potential fraudsters, but has not assessed the costs and benefits of expanding its enforcement efforts to include certain individuals who may underwithhold taxes or not file returns. Doing so could help IRS determine if such an effort would enable the agency to collect additional revenue. SSA and IRS entered into a memorandum of understanding (MOU) to collaborate on the exchange of wage data. However, they have not established performance goals and measures for the MOU, implemented the MOU's monitoring provisions, or clearly defined the data elements they exchange. What GAO Recommends GAO is making 12 recommendations to IRS and SSA, including that IRS assess the feasibility of adding checks to its review of employment-related identity fraud, and assess the costs and benefits of expanding enforcement; and that both agencies improve the implementation of their MOU. SSA agreed and IRS neither agreed nor disagreed with the recommendations.
Background Roles and Responsibilities for the Recruitment and Retention of Military Physicians and Dentists The ASD(HA) serves as the principal advisor for all DOD health policies and programs. The ASD(HA) has the authority to issue DOD instructions, publications, and memorandums that implement policy approved by the Secretary of Defense or the Under Secretary of Defense for Personnel and Readiness and govern the management of DOD medical programs. The ASD(HA) also exercises authority, direction, and control over the President of the Uniformed Services University of the Health Sciences. Further, the ASD(HA) sets the maximum special and incentive pay amounts for all military physicians and dentists. The Army, the Navy, and the Air Force have the authority to recruit, train, and retain physicians and dentists. Currently, there is no joint DOD unit or process dedicated to recruiting medical students and accessing medical officers because recruiting and retention are the responsibility of the military departments. Each military department has its own organizational structure, responsibilities, and varying degrees of personnel resources for accessing physicians and dentists. The departments' recruiting commands recruit medical and dental students into the scholarship program. In a separate process, the University recruits and admits a set number of medical students each year. Figure 1 shows the organizational structure of the Military Health System as it relates to the recruitment and retention of military physicians and dentists. Career Path of Military Physicians and Dentists DOD has two primary sources of recruitment for military physicians: the scholarship program and the University. DOD recruits most military dentists through the scholarship program. Participants in DOD's scholarship program and the University accrue an active-duty service obligation in return for a tuition-free medical or dental education and certain financial benefits. Specifically, scholarship program participants enrolled in a civilian medical or dental school receive paid tuition, books and fees, and a monthly stipend. In some cases, participants are also offered an accession bonus. In exchange, scholarship program participants incur a 6-month active-duty service obligation for each 6 months of benefits received, with a 2-year minimum service obligation. Students at the University are enrolled in the DOD-sponsored medical school at no cost, enter active-duty service as medical students, and receive the pay and benefits of an officer at the O-1 pay grade. In exchange, University medical students accrue a 7-year service obligation. Career paths for medical and dental school graduates can differ. For example, Army and Air Force medical school graduates typically become specialized before practicing medicine, while 55 percent of Navy physicians complete a General Medical Officer tour before becoming specialized, according to department officials. Moreover, dental school graduates typically practice as general dentists after completing licensure requirements before choosing to specialize. To become specialized, medical and dental school graduates apply to a medical or dental residency training program, which may require or include a 1-year internship, depending on the program or specialty. After residency, some physicians or dentists may decide to pursue further training, known as "fellowships," in order to become subspecialists.
For example, to become a cardiologist, a physician must complete an internal medicine residency followed by a cardiology fellowship. Residency training typically requires 3 to 7 years for physicians and 1 to 6 years for dentists. Fellowship training typically is 1 or more years in length for physicians and dentists. The required number of years depends on the specialty or subspecialty. After residency or fellowship training—hereafter referred to collectively as residency training—physicians and dentists become credentialed and privileged to practice the specialty or subspecialty that they trained in, and they are also eligible for board certification. Figure 2 portrays possible paths to becoming a military physician or dentist. As noted earlier, scholarship program medical and dental students incur 6 months of an active-duty service obligation for each 6 months of benefits received, with a 2-year minimum service obligation; University medical students accrue a 7-year service obligation. While training in a military residency program, residents receive the pay and benefits of an officer at the O-3 pay grade or higher, depending on prior years of service, and earn creditable years of service toward retirement. In exchange, participants incur an additional 6 months of an active-duty service obligation for each 6 months of residency training, with a minimum 2-year service obligation. However, according to DOD officials, the first year of postgraduate training (i.e., internship or 1 year of advanced education in general dentistry and general practice residency) does not accrue a service obligation and is considered obligation neutral. Currently, the two sets of obligations—the obligation for medical or dental school and the obligation for military residency training—are served concurrently, or at the same time, effectively resulting in the servicemember serving the longer of the two obligations. For example, a student who accepts a 4-year scholarship, trains in a 1-year internship, and then trains in a 4-year residency program will serve a total of 9 years. The first 5 years would be spent in internship and residency, and the final 4 years of this service would be spent discharging the active-duty service obligations concurrently (see figure 3, scenario 2). Depending on career path, years of active-duty service after completion of medical and dental school will vary (see figure 3). Cash Compensation for Military Physicians and Dentists DOD's measure of cash compensation, known as regular military compensation, includes the sum of basic pay, basic allowance for housing, basic allowance for subsistence, and the federal income tax advantage that accrues from the non-taxable nature of the allowances. For example, according to DOD, in 2017 the average married military officer at the pay grade of O-3 received annual regular military compensation of around $99,000. Specifically, this average officer received around $67,000 for basic pay, $24,000 for the basic allowance for housing, $3,000 for the basic allowance for subsistence, and a federal income tax advantage of $5,000. In addition to regular military compensation, physicians and dentists may be eligible for various special and incentive pays which vary depending upon their status as residents, their service obligations, and their specialty. During residency training, physicians and dentists are eligible for select medical or dental corps incentive pays.
Upon completion of residency training, they become eligible for higher rates of incentive pay and, if they become board certified, for Board Certification Pay. After fulfilling their active-duty service obligations from medical or dental school and residency training, in addition to special and incentive pays already received, physicians and dentists become eligible for a multi-year retention bonus. Cash Compensation for Military Physicians and Dentists Is Generally Less Than the Private Sector, but DOD Provides Substantial Deferred and Noncash Benefits Cash compensation for active-duty military physicians and dentists was generally less than the median compensation for private sector civilians in calendar year 2017 for most specialties we reviewed, including at key retention points. However, a substantial portion of the costs of DOD's overall compensation package is composed of deferred and noncash benefits provided to active-duty personnel, such as a pension in retirement and tuition-free medical and dental education, but the extent to which servicemembers value these benefits is difficult to determine. Cash Compensation for Military Physicians and Dentists Was Generally Below the Median of Private Sector Civilian Compensation in Comparable Specialties in 2017 Cash compensation for active-duty military physicians and dentists varied depending on pay grade, specialty, and decisions to accept retention bonuses or other special and incentive pays, but was generally less than the median compensation for private sector civilians in calendar year 2017 for most specialties. Although we could not make direct comparisons of military and private sector civilian cash compensation by years of service or experience, we estimated the minimum and maximum military cash compensation for specialized active-duty physicians and dentists in pay grades O-3 to O-6, which represented more than 99 percent of military physicians and dentists in fiscal year 2018. Specifically, we found that the minimum military cash compensation for all 21 physician and 5 of 6 dental specialties we reviewed was less than the civilian median for all pay grades; and the maximum military cash compensation for 16 of 21 physician (see figure 4 below) and 5 of 6 dental specialties (see figure 5 below) we reviewed was also less than the civilian median for all pay grades. Therefore, for many of these specialties, even the most senior military physicians and dentists (i.e., pay grade O-5 or higher) at the top of the pay range were estimated to receive cash compensation below the private sector civilian median. The minimum and maximum of total military cash compensation, by specialty and pay grade, and how these compare to reported private sector civilian cash compensation are presented in appendix II. Cash Compensation for Military Physicians and Dentists Is Generally Less Than Private Sector Civilian Compensation at Key Retention Points Cash compensation for military physicians and dentists is generally less than private sector civilian compensation at key retention points. Specifically, we calculated 2017 cash compensation for military medical officers who completed their residency directly after medical school across 21 medical specialties and found that at their first unobligated year of service—after they fulfill their initial active-duty service obligations accrued from medical school and military residency training—all 21 specialties had cash compensation below the private sector civilian median.
In addition, we found that all but one specialty (psychiatry) was less than the 20th percentile for private sector civilian compensation. Notably, nine specialties that DOD identified as critical trauma-related wartime specialties in 2019 were less than the 20th percentile. According to senior military department medical corps officials, the first unobligated year of service is a key point of retention for military physicians. A 2012 study of military physicians found that compensation had a large impact on the decision to remain in the military in the first unobligated year of service and just a small impact on retention in the years afterward. For DOD's scholarship program participants, which constitute the majority of recruited military physicians, we estimate that initial service obligation fulfillment typically occurs about 4 years after successful completion of their residency, or at about 9 years of service. We also calculated cash compensation for military medical officers who (a) completed a 3-year General Medical Officer tour prior to specializing in a residency, or (b) attended the University and accrued a 7-year active-duty service obligation and found that all but three specialties (pediatrics, family medicine, and psychiatry) had cash compensation less than the 20th percentile for private sector civilian compensation, and all specialties were compensated below the median. We reviewed 2017 cash compensation for typical military dental officers across six dental specialties and found that at each of these retention points, military cash compensation was less than the median private sector civilian compensation, three of which were below the 25th percentile (orthodontics, endodontics, and periodontics). According to senior military department dental corps officials, two key points of retention for military dentists are (1) after they fulfill their scholarship service obligation by practicing as a general dentist for several years, and (2) after they have completed residency training for a dental specialty, such as orthodontics, and fulfill their residency service obligation. Unlike their physician counterparts, dental students typically do not begin residency immediately after graduation. According to military department dental corps Chiefs, dental school graduates generally complete a 1-year advanced education in general dentistry certificate, which does not incur a service obligation, then fulfill their dental school active-duty service obligation as general dentists before taking a general dentist's retention bonus and beginning residency training. Cash compensation is just one factor that servicemembers may consider when making the decision to stay with or separate from the military. According to DOD medical and dental corps officials, other factors that may influence this decision include number and frequency of deployments, ability to function at full scope of practice for training, additional nonphysician duties and administrative requirements placed on active-duty physicians that their private sector counterparts do not experience, family considerations associated with permanent change of station orders, nonselection to residency of choice, nonselection for promotion, and retirement eligibility. Similarly, data from the 2017 DOD Status of Forces Survey show that among all officers, the most important factors that would be considered in a decision of whether to stay on active duty were the military retirement system and personal choice/freedoms (e.g.,
control of where to work), as well as factors such as opportunities to be assigned to station of choice, family concerns, and pay and allowances. Moreover, a 2019 study of Army physician service obligations showed that military physicians who were most likely to continue serving after completion of their obligation and ultimately retire were those who had the most years of service accumulated when obligations were completed. That is, those who were close to retirement after completing their service obligations were more likely to stay to receive their retirement benefit. DOD Provides Substantial Deferred, Noncash, and Other Benefits Which Must Be Considered Alongside Cash Compensation, but Value to Servicemembers Is Difficult to Quantify In addition to cash compensation, DOD offers substantial deferred benefits, such as retirement pensions and benefits, and noncash benefits, such as tuition-free medical school education and health care, to its military physicians and dentists. In its report on military compensation, DOD noted that nearly half of military compensation is made up of deferred and noncash benefits, and that this proportion is considerably higher than in civilian compensation. Additionally, in 2011 we identified military personnel costs as an area where DOD could recognize long-term cost avoidance by using a total compensation approach to manage military compensation in a holistic manner that considers deferred and noncash benefits alongside cash compensation. Studies of military compensation highlight that assigning a value to deferred and noncash benefits and comparing them to the civilian private sector proves more difficult than for cash compensation because servicemembers value or use these benefits differently, various assumptions have to be made to assign value, and access to such benefits is not universal among private sector civilian workers. Additionally, it is difficult to measure the extent to which servicemembers discount the value of future benefits. We previously reported that it is generally accepted that some deferred benefits, such as a pension in retirement, are not valued as highly by servicemembers as current cash compensation. However, a recent study found that servicemembers, particularly military officers, may value deferred benefits more highly than was previously reported. For these reasons we did not compare the value of military deferred and noncash benefits to similar benefits in the civilian private sector; however, we describe certain types of deferred and noncash benefits available to physicians and dentists and provide estimates of their value where possible. DOD Deferred Benefits DOD provides access to two primary types of deferred benefits: its employer-sponsored retirement plans and retiree health and dental care. As mentioned previously, the likelihood of benefiting from DOD's military retirement system is a factor that officers consider when deciding to stay on active duty. Retirement plans. In DOD's traditional retirement system, known as the "High-Three System," servicemembers are eligible to receive a defined benefit annuity based on their pay grade and years of service after a minimum 20 years of active-duty service, with no benefits provided to those who separate before then. This system was closed to new entrants at the end of 2017. Based on our estimates, under the High-Three System, the defined benefit for a physician or dentist who retires with 20 years of service in 2035 was estimated to be $2,457,253 (present value).
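As an illustration of how such a present-value estimate can be constructed, the sketch below discounts a High-Three-style annuity using the statutory formula of 2.5 percent of high-3 basic pay per year of service. The pay level, payout horizon, and real discount rate are assumed inputs, so the result will not match GAO's estimate, which rests on assumptions not detailed in this report.

```python
def high_three_annuity(high3_basic_pay, years_of_service):
    """Annual defined benefit: 2.5% x years of service x high-3 basic pay."""
    return 0.025 * years_of_service * high3_basic_pay

def present_value(annual_benefit, payout_years, real_discount_rate):
    """Discount a level, inflation-adjusted annuity back to the retirement date."""
    return sum(annual_benefit / (1 + real_discount_rate) ** t
               for t in range(1, payout_years + 1))

# Assumed inputs for illustration: $140,000 high-3 basic pay, 20 years
# of service, a 40-year payout, and a 1.5% real discount rate.
benefit = high_three_annuity(high3_basic_pay=140_000, years_of_service=20)
print(f"Annual benefit: ${benefit:,.0f}")                      # $70,000
print(f"Present value: ${present_value(benefit, 40, 0.015):,.0f}")
```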
New servicemembers from 2018 onwards were enrolled in the Blended Retirement System (BRS). BRS is a hybrid retirement system that includes a revised defined benefit plan requiring 20 years of active-duty service, a defined contribution plan with agency matching contributions, and a one-time direct cash payout—called continuation pay—distributed at the midcareer point (between 8 and 12 years of service). Based on our estimates, under the BRS, the defined benefit for a physician or dentist who retires with 20 years of service in 2035 was estimated to be $1,965,802 (present value). The defined contribution plan offers government automatic and matching contributions of up to 5 percent of basic pay to the servicemember's Thrift Savings Plan, and vested servicemembers who separate before 20 years of active-duty service retain ownership of these contributions. The BRS was implemented in 2018 to modernize the military retirement system. As the Military Compensation and Retirement Modernization Commission reported in 2015, roughly 51 percent of military officers exited service before 20 years, meaning that most left without any retirement benefits under the High-Three System. The BRS is expected to provide retirement benefits for the majority of servicemembers, including those who serve fewer than 20 years, according to DOD. In our interviews, some DOD officials expressed concern about the effects of BRS on retention of military physicians and dentists, because, for example, they believed the opportunity to separate with defined contributions will reduce their incentive to remain for a longer period of active duty. Other DOD officials we interviewed stated that it is too soon to determine the effects of the BRS on retention, and noted that the inclusion of continuation pay as part of the BRS was designed to encourage servicemembers to continue serving at the mid-career point. Retiree health and dental care. Servicemembers retiring from active duty are eligible to enroll in TRICARE. Specifically, retired servicemembers and their eligible dependents are able to participate in TRICARE Prime, which is comparable to a health maintenance organization (HMO) program, and TRICARE Select, which is comparable to a preferred provider organization (PPO) program. After they are eligible for Medicare, retired servicemembers and their eligible dependents with Medicare Part A and B can enroll in TRICARE for Life, which provides Medicare-wraparound coverage. Eligible retired servicemembers may also receive benefits from the Department of Veterans Affairs health care system. Specifically, active-duty servicemembers who served 24 continuous months or the full period for which they were called to active duty are eligible for Veterans Affairs health care. DOD Noncash and Other Benefits DOD provides access to a wide variety of noncash benefits, some of which are uncommon in the civilian sector, and may offset some of the discrepancies in military and private sector civilian cash compensation. However, limited information exists on the extent to which noncash benefits are used by military physicians and dentists. Therefore, we have highlighted select benefits that may be used by military physicians and dentists. Tuition-free medical and dental school. Military physicians and dentists benefit from DOD's scholarship program and the University, through which prospective medical and dental students receive tuition-free education in exchange for commitment to a number of years in active-duty service.
This benefit allows physicians and dentists to avoid thousands of dollars of student debt. For example, according to the Association of American Medical Colleges, the average first-year medical student paid $36,755 for tuition, fees, and health insurance to attend a public medical school during the 2018-19 academic year, and the average first-year student attending a private medical school paid $59,076. Medical and dental care. DOD offers comprehensive health coverage to military personnel and their dependents through TRICARE, a managed care program. Care is provided in more than 650 military treatment facilities worldwide, supplemented by civilian providers. TRICARE offers two health care options to non-Medicare-eligible beneficiaries: TRICARE Prime and TRICARE Select. All active-duty servicemembers are automatically enrolled in TRICARE Prime, which is comparable to a private health maintenance organization plan. Under this program, active-duty servicemembers have no premium costs, deductibles, or out-of-pocket costs, and their dependents face no or low costs. Medical Expenditure Panel Survey data indicate that the average private sector civilian employee spent over $5,000 in health insurance employee contributions for family coverage in 2018. The TRICARE Active Duty Dental Program supplements the dental services available to active-duty servicemembers at military treatment facilities when necessary care is not available or the servicemember does not have ready access to a military treatment facility. Active-duty servicemembers do not pay premiums for this dental care, do not share in the costs of the care, and do not face any annual or lifetime maximums on the cost of care. Financial benefits during education and training. Medical and dental scholarship students receive O-1 pay and allowances for 45 days of active duty for annual training performed for each year the scholarship is awarded. Participants may also be eligible for a $20,000 signing bonus. During their education, medical and dental scholarship students receive a monthly stipend, and medical students at the University receive officer salary and benefits at grade O-1. After medical school, medical and dental residents receive officer pay and benefits at grade O-3 or higher, according to DOD officials. DOD Uses Incentives to Recruit and Retain Military Physicians and Dentists, but Does Not Consistently Collect Information to Help Inform Investment Decisions Based on our analysis of DOD's incentives to recruit and retain military physicians and dentists, DOD generally (1) clearly defined the criteria used to determine when to offer incentives, (2) identified and incorporated opportunities for improvement, (3) identified and evaluated unique staffing situations, and (4) made investments to attract and retain top talent. However, we found that DOD did not consistently collect information on (1) replacement costs, (2) current and historical retention efforts, and (3) comparable civilian wages to help inform investment decisions in its package of incentives to recruit and retain military physicians and dentists. Fully applying these seven key principles of effective human capital management in its approach to recruit and retain military physicians and dentists is important to making fully informed investment decisions.
DOD Generally Applied Four Key Principles of Effective Human Capital Management to Its Package of Incentives for Recruiting and Retaining Military Physicians and Dentists We found that DOD generally applied effective human capital management principles related to clearly defined criteria on when to use incentives, making investments based on expected improvement in agency results, identifying and evaluating unique staffing situations, and identifying and incorporating opportunities for improvement. To support its operational needs, DOD uses educational, training, and monetary incentives to recruit and retain physicians and dentists. Specifically, DOD's package of incentives includes, among other things, a tuition-free medical school education through the scholarship program and the University, pay as an O-3 officer or higher during medical or dental residency, the opportunity for further training via a fellowship, and a series of special and incentive pays for fully trained physicians and dentists. According to DOD's report on military compensation, special and incentive pay authorities provide the services with greater flexibility to target additional compensation where needed to address emerging staffing shortfalls and maintain staffing in critical or hard-to-fill skills. We found that DOD generally applied four of the seven key principles, as described below: Relied on clearly defined, well-documented, consistently applied, and transparent criteria. DOD and the military departments have established rules-based pay plans with clear eligibility criteria for special and incentive pays and recruitment and retention bonuses. Key principles for human capital management state that agencies should consider making targeted investments in specific human capital approaches, and that these approaches should have clearly defined, well-documented, transparent, and consistently applied criteria for making these investments. Identified opportunities for improvement and incorporated these opportunities into the next planning cycle. The services and officials from the Office of the ASD(HA) participate in the Health Professions Incentives Working Group to review recruitment and retention special pay and incentives and recommend adjustment to amounts offered as necessary. For example, as a result of working group discussions, DOD officials stated that they established a new 6-year retention bonus in the fiscal year 2019 pay plan for select medical and dental specialties, in part to ensure greater stability in the numbers of physicians and dentists within these specialties. Military department officials stated they plan to identify potential impacts and determine adjustments, if any, that need to be made. DOD's report on military compensation advises officials to identify opportunities for improvement using analytical tools to model how changes in compensation might alter the force or career profile. It further states that taking a structured approach to determining both incentive pay eligibility criteria and amounts helps force managers optimize their limited special and incentive pay budgets. Such an approach also provides a mechanism to periodically conduct a rigorous assessment of such pays to ensure that they keep pace with changing conditions. Identified and evaluated unique staffing issues.
According to military department officials, medical corps and dental corps community managers, specialty leaders, consultants, and others actively discuss military physicians' and dentists' career plans to help inform future staffing needs. Moreover, to attract physicians and dentists in specialties which DOD has identified as a critically short wartime specialty, DOD offers a Critical Wartime Skills Accession Bonus. However, as we reported in 2018, military department officials cited a number of challenges that make it difficult to attract and retain military physicians and dentists, including national shortages and competition with the private sector. Incentive pay and retention bonus amounts are specific to each specialty. DOD's report on military compensation states that evaluation of unique staffing issues identified by community managers should be a core part of a systematic approach to assessing the application of a special or incentive pay. Similarly, key principles for human capital management note that agencies should tailor human capital strategies to meet their specific mission needs. Targeted investments to attract and retain top talent. The services are authorized to offer targeted monetary incentives in the form of special and incentive pays and recruitment and retention bonuses to eligible physicians and dentists who are in good standing. Moreover, military department officials stated that DOD offers Board Certification Pay to physicians and dentists who achieve and maintain this accreditation because certification indicates that the physician or dentist is maintaining skills and qualifications and helps the department demonstrate the high quality of care provided by the military health system. Similarly, we reported in 2018 that DOD and the military departments had established a set of minimum qualifications for medical school applicants applying to the scholarship program and the University. Key principles for human capital management state that targeted investments in human capital approaches should help the agency attract, develop, retain, and deploy the best talent and then elicit the best performance for mission accomplishment. The principles further state that decisions regarding these investments should be based largely on the expected improvement in agency results. Similarly, DOD's Diversity and Inclusion Strategic Plan 2012-2017 notes that retaining top talent is essential to sustaining mission readiness that is adaptable and responsive. DOD Does Not Consistently Collect Information to Help Inform Investment Decisions in Its Package of Recruitment and Retention Incentives In three key areas of effective human capital management related to data on replacement costs, recruitment and retention, and civilian wages, DOD does not consistently collect information to help inform investment decisions in its package of incentives to recruit and retain military physicians and dentists, as described below: Did not identify replacement costs. Military departments do not consistently collect information on replacement costs of military physicians and dentists. Specifically, no military department was able to provide us with a comprehensive assessment of the replacement cost for military physicians and dentists. Replacement cost assessments can be found in other occupations within DOD.
For example, in 2017, we reported that the Navy considers the high replacement costs of its nuclear propulsion personnel—up to $986,000 per trainee—in justifying a strategy that prioritizes investment in retention initiatives over new accessions or recruits. Moreover, DOD requires that the training investment and replacement cost for those qualified in the skill be considered when justifying the need for the critical skills retention bonus. DOD's report on military compensation identified replacement costs and training costs as a factor in assessing incentive pay appropriateness. In 2018, we recommended that the ASD(HA) require that the University develop a reliable method to accurately determine the cost to educate its medical students. DOD partially concurred with our recommendation. In response to our recommendation, the University contracted with the Institute for Defense Analyses to determine the costs to educate University medical students. In its October 2019 final draft report, the Institute for Defense Analyses estimated total accession costs for fully trained physicians through both the scholarship program and the University; specifically, the report estimated the total cost for a fully trained physician who completes 4 years of medical school and a 3-year military residency to be $878,000 for scholarship medical students and approximately $1.5 million for University medical students. In another similar ongoing effort, Navy officials stated that they have commissioned a Life Cycle Cost study with the Center for Naval Analyses. We are encouraged by these initiatives, which will provide the Office of the ASD(HA) and the military departments a foundation for formalizing the process of collecting information on replacement costs. With the benefit of this information, DOD can make more informed decisions regarding its packages of recruitment and retention incentives. Did not collect current and historical retention information. Military departments do not consistently collect and use current and historical retention information to help inform decisions about investment in retention incentives. Specifically, Navy and Air Force officials told us that they do not have readily available information to determine the percentage of those who accepted a retention bonus among the eligible population, and Army officials noted they do not have a framework in place to use retention information to determine the effectiveness of retention bonuses. Using retention data to measure effectiveness of retention incentives is performed by other communities within DOD. For example, in 2018 we reported that officials from the Navy, Marine Corps, and Air Force measured the effectiveness of aviation retention bonuses by monitoring bonus acceptance rates. DOD's report on military compensation stated that a review of current and historical data on retention should be a core part of a systematic approach to assessing the application of a special or incentive pay. Further, key principles for human capital management note that periodic measurement of an agency's progress toward human capital goals and the extent that human capital activities contributed to achieving programmatic goals provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions. Without information on the acceptance rate among those eligible, the military departments cannot assess the performance of their investment in retention bonuses.
Did not assess private sector civilian wages. DOD does not consistently collect and use private sector wage information to help inform investment decisions in its special and incentive pays for physicians and dentists. Based on our review of the minutes of meetings of the Health Professions Incentives Working Group, which recommends changes to the rate and term of special and incentive pays, private sector compensation was occasionally raised as a challenge. However, it was not collected and used to help inform investment decisions on a consistent basis. According to officials from the Office of the ASD(HA) and the military departments, an assessment of civilian wages is not a driving factor when considering adjustments to special and incentive pays, in part because DOD cannot always match civilian sector compensation for military physicians and dentists. Officials from the Office of the ASD(HA) and the military departments acknowledged that the disparity between military and civilian cash compensation varies by specialty; however, incentive pay and retention bonus amounts have largely remained the same for over a decade. DOD's Ninth Quadrennial Review of Military Compensation states that pay at around the 70th percentile of comparably educated civilians is necessary to enable the military to recruit and retain the quantity and quality of personnel it requires. Based on our comparison of military and civilian cash compensation previously discussed, we found that the gap between military and private sector civilian compensation varies by specialty and many specialties fall below the civilian private sector median. Moreover, based on our review of cash compensation for medical officers who completed their residency directly after medical school across 21 medical specialties, we found that at their first unobligated year of service, all 21 specialties had cash compensation below the private sector civilian median. Additionally, all but one specialty (psychiatry) were compensated at less than the 20th percentile of private sector civilian compensation. Use of assessments of private sector civilian compensation can be found in other communities within DOD. For example, in 2017, we reported that the Navy justified its use of selective reenlistment bonuses for cyber-related occupations by noting the specific level of starting salaries for comparable civilians. DOD's report on military compensation states that reviewing civilian wages is a key element in assessing the application of a special or incentive pay. Further, it states that periodic reviews, which should include the use of an analytical tool or model, will ensure that resources are directed at the most pressing staffing needs. For example, professions that consistently command higher pay in the civilian sector—such as the medical professions—may merit predictable pays over the long term. Yet in other areas, evolving mission needs, changing conditions in the civilian market, and other factors may call for increasing an incentive or, in some cases, may show that additional pay can be reduced or eliminated. According to a former Under Secretary of Defense for Personnel and Readiness and a noted expert on defense personnel issues, DOD would benefit from analysis to determine the point at which cash compensation for military physicians, including special and incentive pays, reaches a minimum threshold of attractiveness compared to the private sector.
Assessing civilian wages could help DOD understand how gaps between military and civilian pay relate to its ability to fill particular specialties. For example, we found that in fiscal year 2018, all but three of the specialties we reviewed were staffed below 90 percent of authorized levels by at least one of the services' active components. By consistently collecting civilian wage information and using it to inform its package of incentives, DOD will be better positioned to make the most effective use of its recruitment and retention incentives. DOD officials stated that their approach to managing the package of incentives to recruit and retain military physicians and dentists is driven by a number of considerations. Specifically, DOD officials stated that the rates of special and incentive pays represent amounts that are affordable and that the military departments generally believe have allowed them to meet their personnel needs. Further, military department officials stated that budget considerations and statutory limitations hinder their ability to change the rate of special and incentive pays. Current statutory limits to the amount of the retention bonus, incentive pay, and board certification pay are $75,000, $100,000, and $6,000, respectively; there is currently no statutory limit on the critical skills retention bonus for health professionals, which can be paid in addition to other pays. While we believe these are valid considerations, collecting information on replacement costs, retention, and civilian wages would allow the Office of the ASD(HA) and the military departments to provide greater stewardship of available funding by ensuring its efficient application. Specifically, Standards for Internal Control in the Federal Government state that management should use quality information to achieve the entity's objectives. For example, further analysis of replacement costs could reveal that retention of fully trained physicians is highly economical for DOD, and provide strong support for changes to retention incentives to safeguard significant investment in physicians and dentists. By collecting and using this information to inform its decision-making, DOD and the military departments would be better positioned to assess the effectiveness of their incentives to recruit and retain military physicians and dentists and make sound investment decisions for the future. Medical Students and Residents Perceive That Lengthening Service Obligations Could Negatively Affect Recruitment and Retention of Military Physicians Our surveys of medical students, focus groups with medical residents, and interviews with DOD officials showed there was a general perception that lengthening active-duty service obligations, such as through a system of serving obligations from medical school and residency training consecutively, could negatively affect recruitment and retention of military physicians. Moreover, DOD is considering reductions to the overall number of active-duty physicians, including targeted reductions to certain specialties, and participants in all eight focus groups with residents had concerns about the proposed reductions to authorizations for certain medical specialties.
Medical Students and Residents Reported General Unwillingness to Accept Longer Service Obligations without Additional Cash Incentives In our surveys of medical students, we found that they generally would not have accepted the scholarship or attended the University if the service obligations from medical education and residency training were served consecutively. Specifically, an estimated 61 percent of scholarship recipients and an estimated 51 percent of University students in our representative survey responded that they would not have accepted DOD's scholarship program or attended its University had they been required to fulfill these service obligations consecutively. However, our survey results indicated that students are willing to accept some additional active-duty service obligation for their current programs. Specifically, 68 percent of the University students and almost half (46 percent) of the scholarship students would be willing to accept an additional year of active-duty service obligation. Notably, a lower percentage of medical students would accept 2 additional years of active-duty service obligations—specifically 34 percent of University students and 16 percent of scholarship recipients. Our survey results found that medical students would be more willing to accept longer service obligations if accompanied by additional cash incentives. For example, 80 percent of University students and more than half of scholarship recipients (63 percent) would be willing to accept an additional year of service obligation if accompanied by additional cash incentives. (See figure 6 and appendix III for specific estimates and confidence intervals.) Similar to the survey responses, participants in all eight focus groups with medical residents also would not have accepted the scholarship or attended the University under a system of consecutive active-duty service obligations. However, participants in seven out of eight focus groups we conducted stated that they would be more willing to accept longer service obligations if accompanied by additional cash incentives, such as a larger accession bonus. Lengthening service obligations may also have unintended consequences without other changes to DOD policy. Specifically, participants in five out of eight of our focus groups with medical residents and DOD officials we interviewed expressed concern that lengthening service obligations would delay physicians' eligibility for retention bonuses, resulting in a reduction of cash compensation over the course of a career. For example, under current policy, a physician who accepted a 4-year scholarship, completed a 1-year internship, and then trained in a 4-year residency training program would be eligible for a retention bonus after 9 years of service. Under a consecutive service obligation model, that same physician would be eligible for a retention bonus after 13 years of service (see figure 7). Further, as previously reported, cash compensation for military physicians is generally less than private sector civilian compensation, and participants in seven out of eight of our focus groups with residents expressed that lengthening service obligations would extend the amount of time they would not be paid comparably to their private sector civilian counterparts.
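A minimal sketch of the service-obligation arithmetic in the example above, using the simplified rules described earlier (6 months of obligation per 6 months of benefit, an obligation-neutral internship year, and 2-year minimums); it is illustrative only, not an official DOD calculator.

```python
def years_of_service(scholarship_yrs, residency_yrs, consecutive,
                     internship_yrs=1):
    """Total active-duty years: training time plus payback time."""
    school_obligation = max(scholarship_yrs, 2)    # 2-year minimum
    residency_obligation = max(residency_yrs, 2)   # internship year is neutral
    training = internship_yrs + residency_yrs
    if consecutive:
        payback = school_obligation + residency_obligation
    else:
        # Concurrent: the member effectively serves the longer obligation.
        payback = max(school_obligation, residency_obligation)
    return training + payback

# The report's example: 4-year scholarship, 1-year internship, 4-year residency.
print(years_of_service(4, 4, consecutive=False))  # 9 years (current policy)
print(years_of_service(4, 4, consecutive=True))   # 13 years
```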
Residents Stated That Longer Service Obligations and Reductions in Authorizations for Medical Specialties Would Likely Affect Their Decision to Continue Military Service Residents in our focus groups stated that lengthening active-duty service obligations would make residency training in a military hospital less attractive and would likely affect their decision to continue military service. Specifically, medical residents in most focus groups we held noted that lengthening service obligations would make them more likely to: fulfill their medical school active-duty service obligation by serving one or more tours as a General Medical Officer and then separate from the military in order to train in a civilian residency program; decline to participate in further medical training and specialization via a fellowship program within the military; and separate from the military sooner than planned, in part because a longer active-duty service obligation would delay their eligibility for certain special and incentive pays. Military department officials we interviewed expressed concern that lengthening active-duty service obligations, such as through a system of serving obligations consecutively, could encourage potential medical residents to choose shorter residency training programs over longer ones. However, participants in all eight focus groups we held with medical residents stated that the ability to train in a chosen medical specialty is more important than the length of the residency program, and a longer active-duty service obligation would not influence their chosen medical specialty. Further, residents who participated in our focus groups stated that the proposed reductions in authorizations—that is, funded positions—for certain medical specialties and associated reductions in residency program spots could negatively affect the attractiveness of residency training in a military setting. Based on our analysis of DOD's Health Manpower Personnel Data System information, DOD has reduced authorizations for certain specialties, and it is considering additional reductions to the overall number of active-duty physicians as part of its budgeting process for fiscal years 2020-2024, including targeted reductions to certain specialties. For example, DOD reduced authorizations for the general pediatrics specialty by 40 percent from fiscal year 2015 through fiscal year 2018, and based on our surveys of medical students, 12 percent of scholarship recipients and 16 percent of University students in the clinical stage of medical school responded that they are interested in practicing the pediatrics specialty after they have completed all required training. Participants in all eight of our focus groups with residents commented that the ability to specialize in their medical specialty of choice was important when deciding to accept the scholarship or attend the University, and narrowing such opportunities would negatively affect the attractiveness of either program for future prospective participants. When reflecting on the proposal to reduce the range of available specialties, residents questioned their ongoing ability to practice their preferred specialty as an active-duty servicemember. In our focus groups, some residents expressed that this issue could play a role in their future decision to continue military service or separate and pursue civilian medical practice.
Conclusions

DOD's ability to recruit and retain the right numbers and types of physicians and dentists depends in part on the effectiveness of the package of incentives in which the department invests. To initially recruit these physicians and dentists, DOD relies on its scholarship program and University, both of which carry active-duty service obligations. Changes to the structure of these obligations could affect recruitment and retention of physicians and dentists. Given that DOD spends millions of dollars annually to train medical and dental students to become fully trained physicians and dentists, and that almost half of DOD's special pay budget is dedicated to retaining them, consistently collecting information to help inform investment decisions is critical to ensuring the efficient use of these significant resources. For example, information on the replacement costs of physicians and dentists would help DOD decide whether it is more cost effective to train or to retain these personnel. Further, consistent collection of information on the extent to which eligible physicians and dentists accept retention bonuses would help DOD monitor the effectiveness of an incentive that represents a significant investment. Our comparison of military to private sector cash compensation highlighted that military physicians and dentists generally receive less cash compensation than their private sector civilian counterparts for most specialties we reviewed. This differential, according to DOD officials, is one factor that servicemembers consider in deciding whether to continue service in the military. However, while DOD and military department officials stated that they are aware of how prevailing private sector civilian wages for medical and dental specialties compare to military cash compensation, they do not consistently collect information on this matter, and officials stated that its role in setting military cash compensation is limited. By collecting and using such information to inform investment decisions, DOD will have better information to efficiently and effectively meet its mission of providing health care during times of war and peace.

Recommendations for Executive Action

We are making the following three recommendations to DOD: The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in coordination with the military departments, collect consistent information on the replacement costs of military physicians and dentists and use this information to inform investment decisions in the package of incentives to recruit and retain military physicians and dentists. (Recommendation 1) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in coordination with the military departments, collect consistent information on current and historical retention data, to include data on the percentage of eligible physicians and dentists who accept retention bonuses, and use this information to inform investment decisions in the package of incentives to retain military physicians and dentists. (Recommendation 2) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Health Affairs, in coordination with the military departments, collect consistent information on private sector civilian wages and use this information to help inform investment decisions in the package of incentives to recruit and retain military physicians and dentists.
(Recommendation 3)

Agency Comments

We provided a draft of this report to DOD for review and comment. DOD concurred with all three recommendations and noted that it will take actions to incorporate them into policy within the next two years. DOD's comments are reprinted in appendix IV. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Office of the Assistant Secretary of Defense for Health Affairs; the Secretaries of the Army, the Navy, and the Air Force; and the President of the Uniformed Services University of the Health Sciences. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or FarrellB@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

This report addresses the following objectives: 1. how compensation for military physicians and dentists compared to that of private sector civilians with comparable skills in 2017; 2. the extent to which the Department of Defense (DOD) has developed an approach to recruit and retain military physicians and dentists through a package of incentives that reflects key principles of effective human capital management; and 3. the perceptions of military medical students, residents, and DOD officials regarding active-duty service obligations, including their effect on recruitment and retention.

Objective 1 - Comparison of Compensation

For our first objective, we compared cash compensation for military physicians and dentists to that of comparable private sector civilian specialties, described the deferred and noncash benefits available to military physicians and dentists, and created estimates of the value of DOD's retirement benefit for officers with varying years of service. To compare cash compensation for military physicians and dentists to that of comparable private sector civilian specialties, we estimated military cash compensation and compared it to civilian compensation data reported in surveys by the American Medical Group Association and the American Dental Association. Specialty selection. To select DOD physician and dental specialties that have private sector civilian equivalents, we began with the list of 44 physician and 11 dental specialties in DOD's Fiscal Year 2018 Health Manpower Personnel Data System report. We selected 21 physician specialties in consideration of the following factors: a comparable private sector civilian specialty existed; the majority of the physician workforce was represented; deploying specialties were included; a balance of procedural, surgical, and other specialties was included; and specialties identified as critically short, trauma-related wartime specialties were included. We selected six dental specialties in consideration of the following factors: a comparable private sector civilian specialty existed, and private sector civilian compensation information was available. Estimates of military cash compensation. To estimate cash compensation for military physicians and dentists in our selected specialties, we reviewed DOD policy and guidance and relevant statutes to identify any current measures of cash compensation and other key elements of cash compensation for physicians and dentists.
DOD's measure of cash compensation, known as regular military compensation, includes the sum of basic pay, average basic allowance for housing, basic allowance for subsistence, and the federal income tax advantage that accrues from the nontaxable nature of the allowances. Another key element of cash compensation is the special and incentive pays that DOD offers to eligible military physicians and dentists, such as incentive pay, Board Certification Pay, and retention bonuses. We collected information on basic pay, basic allowance for housing, and basic allowance for subsistence for married personnel from DOD's fiscal year 2017 Greenbook publication, and information on incentive pays, Board Certification Pay, and retention bonuses from DOD's fiscal year 2017 Health Professions Officer Special and Incentive Pay Plan. We selected fiscal year 2017 because it was the most recent year of data available across all of our sources, and we selected married personnel because, according to a DOD report, the majority of officers in the pay grades O-4 to O-6 are married, which largely aligns with DOD's population of physicians and dentists. We estimated a range—the minimum and maximum—of military cash compensation by specialty for pay grades O-3 to O-6. The minimum and maximum are based on two scenarios that represent the range of pay that specialized physicians and dentists can expect to receive, considering only the special and incentive pays listed in the Health Professions Officer Special and Incentive Pay Plan. The minimum includes regular military compensation, Board Certification Pay, and incentive pay. The maximum includes regular military compensation, Board Certification Pay, and incentive pay at a higher amount in conjunction with a 4-year retention bonus. Our estimates represent the sum of basic pay, average basic allowance for housing, basic allowance for subsistence, special and incentive pays, and the federal tax advantage that accrues from the nontaxable nature of the allowances. To calculate the federal tax advantage, we used the 2018 federal tax tables, applied the 2018 federal tax standard deduction, and then converted the calculated federal tax advantage to 2017 dollars. According to a senior DOD dental corps official, most general dentists are not board-certified and do not receive Board Certification Pay; we therefore omitted Board Certification Pay from our estimates of the minimum and maximum military cash compensation of general dentists. Private sector civilian cash compensation information. To identify private sector civilian cash compensation for physicians in comparable specialties, we chose the American Medical Group Association's 2018 Medical Group Compensation and Productivity Survey—2018 Report Based on 2017 Data because (1) it included all the specialties we selected to review, and (2) it contained information on physicians who practiced in settings that were similar to those in which federal physicians practiced. The survey data provided compensation amounts for each specialty by 20th percentile, median, and 80th percentile. The data excluded the value of any employer-provided malpractice insurance, but some physicians may incur costs for this coverage. Military physicians generally do not need to purchase malpractice insurance.
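To illustrate how the minimum and maximum estimates described above are assembled, the following sketch sums the compensation components and computes a simplified federal tax advantage. The dollar amounts and the flat tax rate are placeholders of our own; they are not actual Greenbook, pay plan, or tax table values.

```python
# Illustrative sketch of the cash-compensation estimate described above.
# All dollar amounts are placeholders, and the simplified flat-rate tax
# stands in for the bracketed 2018 federal tax tables we actually used.

def federal_tax_advantage(taxable_pay, allowances, tax):
    """Extra after-tax value from the nontaxable allowances: the tax that
    would be owed if the allowances were taxable, minus the tax actually
    owed on basic pay alone."""
    return tax(taxable_pay + allowances) - tax(taxable_pay)

def simple_tax(income, standard_deduction=12_000, rate=0.22):
    # Placeholder for the real tax tables (single flat rate, one deduction).
    return max(income - standard_deduction, 0) * rate

basic_pay, bah, bas = 90_000, 30_000, 3_000            # placeholder amounts
board_cert_pay = 6_000                                 # placeholder
incentive_low, incentive_high = 20_000, 43_000         # placeholder
retention_bonus_4yr = 50_000                           # placeholder

allowances = bah + bas
rmc = (basic_pay + allowances
       + federal_tax_advantage(basic_pay, allowances, simple_tax))

minimum = rmc + board_cert_pay + incentive_low
maximum = rmc + board_cert_pay + incentive_high + retention_bonus_4yr
print(f"minimum ${minimum:,.0f}   maximum ${maximum:,.0f}")
```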
To identify private sector civilian cash compensation for dentists in comparable specialties, we chose the American Dental Association's Health Policy Institute, Income, Gross Billings, and Expenses: Selected 2017 Results from the Survey of Dental Practice because (1) it included all the specialties we selected to review, and (2) it reported the net income of dentists and specialists in private practice, which is comparable to the compensation of military dentists, who generally do not need to purchase malpractice insurance. We obtained net income information for full-time practitioners—those who reported working 35 hours a week or more—from the American Dental Association. The survey data provided compensation amounts for each specialty by 25th percentile, median, and 75th percentile. Both surveys represent salaries for 2017. To help determine the reliability and accuracy of the private sector civilian compensation information, we checked these data for reasonableness and for the presence of any obvious or potential errors in accuracy and completeness. We believe the data are sufficiently reliable for the purpose of this report. Comparisons of military and private sector civilian cash compensation. We compared our estimates of the ranges of military cash compensation by specialty and pay grade to the ranges of private sector civilian cash compensation by specialty from our selected surveys. Because we could not make direct comparisons of military and civilian cash compensation by years of service or experience due to data limitations, we compared and presented the ranges of compensation as appropriate. We also compared military cash compensation at the first unobligated year of service to the range of private sector civilian cash compensation, by specialty. We estimated military cash compensation at the first unobligated year of service based on the length of each residency and, if applicable, fellowship, among other assumptions. We identified physician residency and fellowship length information by using the Accreditation Council for Graduate Medical Education's Data Resource Book for Academic Year 2017-2018, and we requested information on military residency lengths from military department officials to confirm that residency lengths generally aligned with this information. We identified dentist residency and fellowship length information by requesting it from military department officials. For each specialty, we estimated the officers' pay grade using the following assumptions: (1) no creditable service before medical or dental school; (2) a 4-year medical or dental school duration; (3) participants were commissioned at the O-3 pay grade after medical or dental school completion with 4 years of constructive credit, in accordance with entry grade credit guidance outlined in DOD Instruction 6000.13; (4) the first year of post-graduate medical or dental education does not accrue an active-duty service obligation; and (5) participants were promoted to O-4 at 6 years of service and to O-5 at 12 years of service, in accordance with DOD's promotion schedule outlined in DOD Instruction 1320.13. DOD officials confirmed the entry grade credit and promotion schedule practices. For physicians, we assumed that the active-duty service obligations for medical school and residency were served concurrently; in other words, we assumed immediate entry into a residency program.
We performed our calculations twice, first assuming no tour as a General Medical Officer and second assuming that physicians completed a 3-year tour as a General Medical Officer, which adds 3 years to their years of service at service obligation fulfillment. According to Navy medical corps officials, 55 percent of Navy physicians perform such a tour. When assuming no General Medical Officer tour, the majority of physicians reached this decision point at the O-4 pay grade, with the exception of neurosurgeons and cardiac/thoracic surgeons, who were at the O-5 pay grade due to longer residency and fellowship lengths. When assuming a 3-year General Medical Officer tour, physicians in 12 specialties reached this point at the O-5 pay grade, with the remaining nine specialties at the O-4 pay grade. We also conducted this analysis for Uniformed Services University of the Health Sciences (University) students, who accrue a 7-year active-duty service obligation. We found that assuming a 7-year obligation for University students produced the same results as assuming a 3-year tour as a General Medical Officer for Health Professions Scholarship Program (scholarship) participants. For dentists, we assumed that the dental school and residency obligations were not served concurrently because, according to the military department Dental Corps Chiefs, dental school graduates typically complete a 1-year advanced education in general dentistry certificate, which does not incur a service obligation, then fulfill their dental school active-duty service obligation as general dentists before taking a general dentist's retention bonus and beginning residency training. We completed an analysis to understand how the pay grade at the first year of unobligated service may vary for general dentists who worked as general dentists immediately after completing dental school or who completed a 1-year advanced education residency. We found that general dentists generally reached this decision point at the O-3 pay grade; endodontists, orthodontists, pedodontists, and periodontists reached it at the O-4 pay grade; and oral and maxillofacial surgeons reached it at the O-5 pay grade. Estimates of retirement benefit. To develop estimates of the value of the defined benefit portion of DOD's two retirement benefit programs—the Blended Retirement System (BRS) and the High-Three—we developed two scenarios for a hypothetical officer who either chose to remain in the High-Three system or opted into the BRS. We used DOD's publicly available online retirement calculators to generate an estimate for each scenario; Office of the Under Secretary of Defense for Personnel and Readiness officials described these calculators as the best available tools to determine the value of military retirement benefits. Specifically, the estimates were for a physician or dentist who was commissioned as an O-3 officer in 2015 and assumed separation from service at 20 years. For these scenarios, we developed reasonable estimates to enter into the calculators. For example, in the personal information section of the calculators, we estimated the pay entry base date by assuming that the officer began earning creditable years of service toward retirement after medical or dental school and began active-duty service as an O-3 officer in the month of June after completing medical or dental school.
The calculators produced an estimate of the present value of the estimated retirement benefit at 20 years of service, which is when the defined benefit portion becomes effective. Estimates were as of August 2019 and included a specific value for the defined benefit. DOD's publicly available retirement calculators use a discount rate of 5 percent per year, as of July 2018. We also consulted with a senior DOD official from the Office of the Under Secretary of Defense for Personnel and Readiness to corroborate the reasonableness of our approach. To help determine the reliability and accuracy of DOD's retirement calculators, we checked the data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness and interviewed DOD officials knowledgeable about the data. We believe the data are sufficiently reliable for the purpose of this report. Description of deferred and noncash benefits. To describe the deferred and noncash benefits available to military physicians and dentists, we reviewed our prior reports, other relevant research, and publicly available reports and information from DOD. We interviewed cognizant DOD officials to understand which benefits military physicians and dentists were most likely to use.

Objective 2 – DOD's Approach to Recruit and Retain Physicians and Dentists

For our second objective, we reviewed pay plans, policies, and other documents developed by the Office of the Assistant Secretary of Defense for Health Affairs (OASD(HA)) and the respective military departments concerning DOD's approach to recruitment and retention of military physicians and dentists. We also interviewed officials from OASD(HA) and the military departments concerning their decision-making processes in managing this package of incentives. We compared this information with seven key principles of effective human capital management, which we reported in our February 2017 report on military compensation. As we noted in that report, to identify key principles of effective human capital management, we reviewed a compilation of our body of work on human capital management, DOD's Report of the Eleventh Quadrennial Review of Military Compensation, and the DOD Diversity and Inclusion Strategic Plan 2012-2017. The seven key principles of effective human capital management are that (1) criteria for making human capital investments are clearly defined, well-documented, consistently applied, and transparent; (2) replacement costs of personnel are considered when deciding to invest in recruitment and retention programs; (3) decisions regarding human capital investments are based largely on expected improvement in agency results and implemented in a manner that fosters top talent; (4) unique staffing issues are identified and evaluated as part of establishing the incentive structure; (5) opportunities for improvement are identified and incorporated into the next planning cycle; (6) current and historical retention data are collected and reviewed as part of efforts to evaluate the effects and performance of human capital investments; and (7) civilian wages are assessed and plans are updated as needed. In addition to using the key principles, we also compared aspects of DOD's approach to recruitment and retention of military physicians and dentists with federal internal control standards, which state that management should use quality information to achieve an entity's objectives, and we highlighted areas where DOD's approach differed from these principles.
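Before turning to our third objective, the present-value discounting performed by the retirement calculators described above can be illustrated with a short sketch. This is our own simplification, not DOD's model: the pay figure, payout horizon, and function names are hypothetical, and DOD's calculators account for factors this sketch ignores.

```python
# Rough illustration of present-value discounting for a defined benefit,
# using the 5 percent annual discount rate that DOD's calculators used as
# of July 2018. The pay figure and payout horizon are placeholders.

def pension_present_value(annual_pension, years_paid, discount_rate=0.05):
    """Present value of a level annual pension paid for `years_paid` years."""
    return sum(annual_pension / (1 + discount_rate) ** t
               for t in range(1, years_paid + 1))

high3_avg_basic_pay = 120_000   # placeholder, not a Greenbook value
years_of_service = 20
multiplier = 0.025              # High-Three: 2.5 percent per year of service
                                # (the BRS uses 2.0 percent per year)
annual_pension = high3_avg_basic_pay * years_of_service * multiplier
print(f"${pension_present_value(annual_pension, years_paid=30):,.0f}")
```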
Objective 3 – Perceptions Regarding Active-Duty Service Obligations

For our third objective, to obtain the perceptions of (1) military medical students, (2) residents, and (3) DOD officials regarding active-duty service obligations, including their effect on recruitment and retention, we used, respectively, (1) web-based surveys of military medical students, (2) focus groups with military medical residents, and (3) interviews with knowledgeable officials. Surveys. We conducted two web-based surveys with a generalizable sample of current scholarship and University medical students to obtain information on the students' knowledge of the current program and their willingness to accept different lengths of service obligations or a change to a consecutive service obligation model (see table 1). One survey was administered to current scholarship medical students, while the other was administered to current University medical students. The questions in both surveys were largely the same. The main differences reflected the different pay and benefits from accepting a scholarship or attending the University and the differences in the length of the active-duty service obligation. For example, scholarship students receive a monthly stipend and, sometimes, an accession bonus, while University students receive the pay and allowances of commissioned officers in the O-1 pay grade. Scholarship participants incur 6 months of active-duty service obligation for each 6 months of scholarship benefits they receive, with a 2-year minimum service obligation, while University medical students accrue a 7-year active-duty service obligation. A full listing of survey questions is provided in appendix III. We worked with our social science survey specialists to develop our survey questionnaires, applying generally accepted survey design standards. We conducted pretests of the survey with scholarship and University students who varied by number of years in medical school and military service. Pretesting is necessary to ensure common understanding of the terms used and to minimize errors that might occur from respondents interpreting questions differently than we intended. During each pretest, the subject was not provided the draft survey in advance; instead, the draft survey was provided at the meeting or emailed to the subject at the beginning of the teleconference. After the pretester completed the survey, we discussed all survey questions and response options with the pretester to ensure clarity. We revised the survey instruments based on the feedback we received during each of the pretests until clarity issues were reasonably addressed. We determined that fourth-year medical students were less likely to participate in the survey for three reasons: (1) they were close to graduating from medical school at the time the survey instrument was launched; (2) they lose their school email addresses shortly after graduation; and (3) once they are out of medical school, they are further removed from the decision point about either accepting the scholarship or attending the University. Therefore, we excluded them from the sample population.
Dental students were also excluded from the sample population because they generally practice as general dentists after graduating from dental school and before training in a residency program, which differs significantly from the career paths of scholarship and University medical students. We defined our target population to be all medical students in their first, second, or third school year under the scholarship program or enrolled at the University. We stratified the sampling frame into four mutually exclusive strata based on medical program and service; as shown in table 1, stratifying the sample allowed us to estimate any population figure across the services with a predetermined statistical precision. We determined the target sample size needed to achieve precision levels of plus or minus 10 percentage points or fewer, at the 95 percent confidence level. We then increased the sample size within each stratum for an expected response rate of 25 percent. The resulting sample frame included 2,972 students, from which we selected a stratified random sample of 1,355. One survey was administered to current scholarship medical students from June 26, 2019 through August 26, 2019; the survey of current University medical students was administered from June 25, 2019 through August 6, 2019. We created two administrative email accounts, one for scholarship medical students and one for University medical students, through which we sent an announcement email to the medical students in our sample population. We administered the survey through a web-based application and sent an email from the administrative email accounts stating that the survey was ready to complete. When we received bounce-back messages, we used secondary email addresses if available or called students to request updated contact information. To maximize our response rate, we sent two reminder emails and contacted nonrespondents by telephone to encourage them to complete the survey. We also took steps in the development of the survey, data collection, and data analysis to minimize nonsampling errors and help ensure the accuracy of the answers obtained. For example, a social science survey specialist helped to design the questionnaire, in collaboration with analysts having subject-matter expertise. Then, as noted earlier, the draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend. Our unweighted survey response rate was 60.5 percent for scholarship students and 80 percent for University students, with 624 and 259 respondents, respectively. Per Office of Management and Budget (OMB) Standards and Guidelines for Statistical Surveys, a nonresponse bias analysis should be conducted for a survey with a response rate less than 80 percent (Guideline 3.2.9). The response rate for the survey of University students met this threshold, so we did not assess the potential for nonresponse bias. With respect to scholarship students, after analyzing students' propensity to respond to the survey to identify potential sources of nonresponse bias, we identified differential response patterns by military department and marital status. We developed base sampling weights by dividing the population size by the number of sampled students within each stratum.
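A minimal sketch of the target sample-size and base-weight computations just described, assuming simple random sampling within each stratum; the stratum counts below are placeholders rather than the actual frame counts behind table 1, and the nonresponse adjustments described next are not shown.

```python
# Sketch of the per-stratum sample-size and base-weight computations.
# Stratum names and counts are illustrative placeholders.
import math

def stratum_sample_size(N, margin=0.10, z=1.96, p=0.5, response_rate=0.25):
    """Smallest n giving +/- `margin` at 95 percent confidence for a
    proportion (worst case p = 0.5), with a finite population correction,
    then inflated for the expected response rate."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / N)          # finite population correction
    return min(N, math.ceil(n / response_rate))

strata = {"HPSP stratum 1": 900, "HPSP stratum 2": 800,   # placeholders
          "HPSP stratum 3": 800, "USUHS": 472}
for name, N in strata.items():
    n = stratum_sample_size(N)
    print(f"{name}: sample {n}, base weight {N / n:.2f}")
```

Under these placeholder counts the four strata total 2,972 students and the computed samples sum to roughly 1,355, matching the frame and sample sizes reported above.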
Weights were adjusted for overall nonresponse among University students and for nonresponse by military department and marital status among scholarship students, so that statistical estimates of survey response percentages are generalizable to the population of students. We expressed the precision of our particular sample's survey responses as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we were 95 percent confident that each of the confidence intervals in this report included the true percentages of survey responses in the study population. All survey response percentage estimates presented in this report from this survey had a margin of error of plus or minus 6 percentage points or fewer, unless otherwise noted. Focus groups. We also conducted eight focus group meetings with a nongeneralizable sample of 79 military medical residents at three military treatment facilities to obtain the perspectives of military medical residents on issues related to (1) the nature of active-duty service obligations, including their willingness to accept different lengths of active-duty service obligations; (2) the relative importance of the service obligations in relation to other factors at different decision points, including accepting the scholarship or attending the University; (3) participating in a military residency program; and (4) choosing a medical specialty to pursue. These meetings involved structured small-group discussions designed to gain more in-depth information about specific issues that cannot easily be obtained from single or serial interviews. Consistent with typical focus group methodologies, our design included multiple groups with varying characteristics but some similarity in experience and responsibility. To identify focus group participants, we considered gender, the number of residents who had accepted the scholarship or attended the University, medical specialties, military department affiliation, the number of years in a military residency training program, and prior service as a General Medical Officer. Each focus group included seven to 15 participants. We did not select participants using a statistically representative sampling method, so the information collected from the focus groups is not generalizable and, therefore, cannot be projected across DOD, a military department, or any single military treatment facility we visited. The eight focus group sessions included two pilot focus groups at Walter Reed National Military Medical Center and two sessions for each of the three military departments (Army, Navy, and Air Force). To identify the focus group locations, we selected military treatment facilities that included a diverse mix of medical specialties and a large pool of residents from which to select participants in order to ensure sufficient participation in the focus groups. We traveled to military treatment facilities in Bethesda, Portsmouth, and San Antonio to conduct the focus groups. Table 2 shows the total number of focus group participants categorized by military treatment facility, military department, and whether they accepted the scholarship or attended the University. To conduct the focus groups, one of our trained facilitators moderated each session, following a protocol that included discussion guidelines and a set of eight questions (see table 3).
The focus group protocol was validated by one of our methodologists with a social science background and knowledge of small-group methods. The same focus group protocol was used at all military treatment facilities the engagement team visited, with some minor modifications made after the pilot sessions at Walter Reed National Military Medical Center. We assured participants that their names would not be directly linked to their responses and that the results would generally be reported in the aggregate. Because of the limitations on the use of data derived from the focus group meetings, including the nongeneralizable sample and results reported in the aggregate, we did not rely entirely on focus groups; rather, we used several different methodologies to corroborate and support our conclusions, including web-based surveys of medical students who either accepted the scholarship or attended the University and interviews with DOD officials. We performed a content analysis on the focus group responses, developing a standard coding scheme to identify common themes and determine their frequencies. We also identified other themes that we determined to be important based on our surveys of scholarship and University medical students and our interviews with DOD officials. To obtain military dental residents' perspectives on issues related to the nature of active-duty service obligations, including their willingness to accept different lengths of service obligations and a change from a concurrent to a consecutive model of service obligation fulfillment, we conducted two focus group sessions with 20 Air Force dental residents who were in training at the Air Force Postgraduate Dental School, Joint Base San Antonio. The focus group participants had previously accepted the scholarship and varied by gender, rank, prior military service, dental specialty, and number of years in dental residency training. These discussions were conducted using a method and protocol similar to the approach for the medical residents. After analyzing the results of these two focus groups with military dental residents and taking into consideration the interviews we conducted with DOD officials, we determined that it was not necessary to conduct further focus groups with military dental residents or to include dental students in our survey of current scholarship students. Dental students' career paths differ in significant ways from medical students' career paths. According to DOD officials and residents in the dental focus groups, military dentists generally already serve consecutive service obligations by fulfilling their active-duty service obligation from dental school while serving as general dentists before training in a military residency program. As a result, a change from a concurrent to a consecutive service obligation model may not affect military dentists in the same way that it would military physicians. Interviews. In addition, we conducted interviews with relevant DOD officials to understand their positions on the effect of the length of active-duty service obligations on recruitment and retention of military physicians and dentists.
Specifically, we interviewed officials from the Office of the Assistant Secretary of Defense for Health Affairs; the Office of the Under Secretary of Defense for Personnel and Readiness; the Defense Health Agency; and various areas within the military departments with responsibilities related to medical or dental corps recruitment, retention, and education, such as the Offices of the Surgeons General, Manpower and Reserve Affairs, and the medical and dental corps or commands. We conducted this performance audit from September 2018 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Cash Compensation of Specialized Military Physicians and Dentists Compared to Private Sector Civilians, 2017

[This appendix presents tables comparing military cash compensation, by specialty and pay grade, to private sector civilian compensation percentiles (for physicians, the 20th percentile, median, and 80th percentile; for dentists, the 25th percentile, median, and 75th percentile). The tables are not reproduced here.] The American Medical Group Compensation and Productivity Survey information represents the total annual compensation of the physician, including base and variable compensation plus all voluntary salary reductions. Examples of total compensation would include, but are not limited to, the following: compensation paid as salary or production-based compensation plans, any type of additional bonuses or incentives, clinically related medical directorships, call coverage, and ancillary or advanced practice clinical supervision stipends. Compensation excludes any fringe benefits and employer payments to any type of retirement, pension, Supplemental Executive Retirement Plan, or tax-deferred profit-sharing plan. The American Dental Association, Health Policy Institute, Survey of Dental Practice information represents the reported annual net income of specialists in private practice in 2017. We obtained net income information for full-time practitioners—who reported working 35 hours a week or more—from the American Dental Association (ADA). Payments toward a retirement plan are included in net income.

Appendix III: Estimated Population Proportion of Questions from GAO's Surveys of Military Medical Students

We conducted two web-based surveys with a generalizable sample of current Health Professions Scholarship Program (scholarship) and Uniformed Services University of the Health Sciences (University) medical students to obtain information about the students' knowledge of the current program and willingness to accept different lengths of service obligations or a change to a consecutive service obligation model.
One survey was administered to current scholarship medical students from June 26, 2019 through August 26, 2019; the survey of current University medical students was administered from June 25, 2019 through August 6, 2019. The questions in both surveys were largely the same. The main differences reflected the different pay and benefits from accepting a scholarship or attending the University and the differences in the length of the active-duty service obligation. The survey provided to scholarship students also included questions about whether students considered attending the University, while the survey provided to University students did not include a question about whether they considered accepting the scholarship. As a result, the scholarship survey had more questions than the University survey. Responses to selected questions we asked in the surveys that were directly applicable to the research objectives in this report are shown below. The surveys consisted of closed- and open-ended questions, including demographic questions that were used in our analyses of the students' responses. In this appendix, we did not provide information on responses to the open-ended or demographic questions. See appendix I for a full description of the survey and estimation methodologies. [The estimated population percentages and the 95 percent confidence interval bounds for each response option are not reproduced here; only the legible response options are shown.]

GAO Survey of HPSP Medical Students

The U.S. Government Accountability Office (GAO), an agency of the United States Congress, is studying the active-duty service obligation associated with the Armed Forces Health Professions Scholarship Program (HPSP) and the Uniformed Services University of the Health Sciences (USUHS). As a part of this study, GAO is conducting a nationwide survey of medical students who are participating in the HPSP or attending USUHS. We appreciate your insights, as it is important for GAO to provide student views of the current program to the Congress.

Question 1 - How much did the following factors contribute to your decision to accept the HPSP scholarship? Response options included: e. Desire to provide care to military personnel, dependents, and retirees; f. Desire to provide medical care while deployed; and g. Other (please specify below).

Active-Duty Service Obligations

Generally, participants in the HPSP incur a 1-year active-duty service obligation for each year of HPSP scholarship accepted. Similarly, a military residency may also result in an active-duty service obligation of 1 year for each year of residency. Currently, these two sets of obligations are served at the same time, so a servicemember will effectively serve the longer of the two obligations. Residencies vary in length and result in different service obligations. One example would be that a servicemember accepts 4 years of HPSP funding, requiring a 4-year active-duty service obligation, AND completes a 4-year military residency, which requires a 3-year active-duty service obligation. A 4-year military residency only requires a 3-year active-duty service obligation because the intern year, or first year of residency, does not result in a service obligation. Under the current system, this servicemember would serve both obligations (4 years and 3 years) at the same time. Completion of the first 3 years would satisfy the residency obligation and 3 of the 4 years of HPSP obligation; the final 1 year would satisfy the remaining HPSP obligation.
Question 3 - When you decided to accept an HPSP scholarship, how familiar were you, if at all, with the active-duty service obligation requirements for HPSP and for completing a military residency?

Alternative Active-Duty Service Obligations

a. An additional 1-year service obligation for 4 years of HPSP (1.25 years of commitment for each year of funding)? (No change in the service obligation for the medical residency.) (CHECK ONLY ONE ANSWER)

b. An additional 2-year service obligation for 4 years of HPSP (1.5 years of commitment for each year of funding)? (No change in the service obligation for the medical residency.) (CHECK ONLY ONE ANSWER)

Additional service obligations and incentives

c. An additional 1-year service obligation for 4 years of HPSP AND additional cash incentives? (No change in the service obligation for the medical residency.) (CHECK ONLY ONE ANSWER)

d. An additional 2-year service obligation for 4 years of HPSP AND additional cash incentives? (No change in the service obligation for the medical residency.) (CHECK ONLY ONE ANSWER)

e. Service obligations served one after the other? For example, a service obligation for 4 years of medical school with the HPSP scholarship and a 4-year military residency have two service obligations – 4 years for HPSP and 3 years for the residency. Service obligations served one after the other in this example would result in a term of 7 years. (CHECK ONLY ONE ANSWER)

f. A 4-year active-duty commitment AND a 2-year selected reserve commitment? Currently, HPSP participants may be subject to an individual ready reserve commitment after the completion of their active-duty service obligation. With a selected reserve commitment, reservists typically drill about 1 weekend a month and 2 weeks a year, and may be activated in support of military operations. (CHECK ONLY ONE ANSWER)

If 'No' or 'Don't Know' to Questions 5a, 5b, 5c, 5d, 5e, or 5f: Which of the following funding options, if any, would you have pursued instead of accepting the HPSP scholarship? (CHECK ALL THAT APPLY) Response options included: Personal or family resources; National Health Service Corps Scholarship Program; None - would not have attended medical school; and Other (please specify).

GAO Survey of Uniformed Services University of the Health Sciences Students

The U.S. Government Accountability Office (GAO), an agency of the United States Congress, is studying the active-duty service obligation associated with the Armed Forces Health Professions Scholarship Program (HPSP) and the Uniformed Services University of the Health Sciences (USUHS). As a part of this study, GAO is conducting a nationwide survey of medical students who are participating in the HPSP or attending USUHS. We appreciate your insights, as it is important for GAO to provide student views of the current program to the Congress.

Question 1 - How much did the following factors contribute to your decision to attend USUHS? Response options included: a. Desire to avoid or reduce medical school debt; b. Officer pay while in school; c.
Desire to serve your country in the armed forces; and h. Other (please specify below). The response scale ranged from Very Great Contribution and Substantial Contribution to Some Contribution and Little or No Contribution.

Active-Duty Service Obligations

The active-duty service obligation for completing the 4-year program at USUHS is 7 years. A military residency also results in an active-duty service obligation of 1 year for each year of residency, with the exception of the first year, or intern year, which does not result in an active-duty service obligation. Currently, these obligations are served at the same time, so a servicemember will serve the longer of the two obligations. Residencies vary in length and result in different service obligations. An example would be that a servicemember completes medical school at USUHS, which requires a 7-year active-duty service obligation, AND completes a 4-year military residency, which requires a 3-year active-duty service obligation. A 4-year military residency only requires a 3-year active-duty service obligation because the intern year, or first year of residency, does not result in a service obligation. Under the current system, this servicemember would serve both obligations (7 years and 3 years) at the same time. Completion of the first 3 years would satisfy the residency obligation and 3 of the 7 years of USUHS obligation; the next 4 years would satisfy the remaining USUHS obligation.

Question 2 - When you decided to attend USUHS, how familiar were you, if at all, with the active-duty service obligation requirements for attending USUHS and for completing a military residency?

Question 3 - When you decided to attend USUHS, how familiar were you with the fact that the medical school and military residency service obligations are served at the same time? (CHECK ONLY ONE ANSWER)

Alternative Active-Duty Service Obligations

a. An additional 1-year service obligation for attending USUHS? (No change in the service obligation for the medical residency.)

b. An additional 2-year service obligation for attending USUHS? (No change in the service obligation for the medical residency.)

Additional service obligations and incentives

c. An additional 1-year service obligation for attending USUHS AND additional cash incentives? (No change in the service obligation for the medical residency.)

d. An additional 2-year service obligation for attending USUHS AND additional cash incentives? (No change in the service obligation for the medical residency.)

e. Service obligations served one after the other? For example, a service obligation of 7 years for attending USUHS and a 4-year military residency has two service obligations – 7 years for USUHS and 3 years for the residency. Service obligations served one after the other in this example would result in a term of 10 years.

f. A 7-year active-duty commitment FOLLOWED BY a 2-year selected reserve commitment? Currently, USUHS graduates may be subject to an individual ready reserve commitment after the completion of their active-duty service obligation.
With a selected reserve commitment, reservists typically drill about 1 weekend a month and 2 weeks a year, and may be activated in support of military operations. (No change in the service obligation for the medical residency.)

If 'No' or 'Don't Know' to Questions 4a, 4b, 4c, 4d, 4e, or 4f: Which of the following funding options, if any, would you have pursued instead of attending USUHS? (CHECK ALL THAT APPLY) Response options included the National Health Service Corps Scholarship Program.

Appendix IV: Comments from the Department of Defense

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact: Staff Acknowledgments: In addition to the contact named above, Lori Atkinson (Assistant Director), Adam Howell-Smith (Analyst in Charge), Taylor Bright, Timothy Carr, Breanne Cave, Alexandra Gonzalez, Caitlin Jackson, Ronald La Due Lake, Won (Danny) Lee, Kirsten Leikem, Amie Lesser, Amanda Miller, Dae B. Park, Stephanie Santoso, and Lillian Yob made key contributions to this report.

Related GAO Products

Defense Health Care: DOD's Proposed Plan for Oversight of Graduate Medical Education Programs. GAO-19-338. Washington, D.C.: March 28, 2019.

Defense Health Care: Actions Needed to Determine the Required Size and Readiness of Operational Medical and Dental Forces. GAO-19-206. Washington, D.C.: February 21, 2019.

Military Personnel: DOD Needs to Improve Dental Clinic Staffing Models and Evaluate Recruitment and Retention Programs. GAO-19-50. Washington, D.C.: December 13, 2018.

Military Personnel: Additional Actions Needed to Address Gaps in Military Physician Specialties. GAO-18-77. Washington, D.C.: February 28, 2018.

Defense Health Reform: Steps Taken to Plan the Transfer of the Administration of the Military Treatment Facilities to the Defense Health Agency, but Work Remains to Finalize the Plan. GAO-17-791R. Washington, D.C.: September 29, 2017.

Military Compensation: Additional Actions Are Needed to Better Manage Special and Incentive Pay Programs. GAO-17-39. Washington, D.C.: February 3, 2017.

Defense Health Care Reform: DOD Needs Further Analysis of the Size, Readiness, and Efficiency of the Medical Force. GAO-16-820. Washington, D.C.: September 21, 2016.

Defense Health Care: Actions Needed to Help Ensure Full Compliance and Complete Documentation for Physician Credentialing and Privileging. GAO-12-31. Washington, D.C.: December 15, 2011.

Military Cash Incentives: DOD Should Coordinate and Monitor Its Efforts to Achieve Cost-Effective Bonuses and Special Pays. GAO-11-631. Washington, D.C.: June 21, 2011.

Military Personnel: Status of Accession, Retention, and End Strength for Military Medical Officers and Preliminary Observations Regarding Accession and Retention Challenges. GAO-09-469R. Washington, D.C.: April 16, 2009.
Why GAO Did This Study

DOD invests in a number of incentives to recruit and retain its nearly 15,000 military physicians and dentists, such as providing a tuition-free education to medical and dental students who in return agree to serve as military physicians or dentists for a specific amount of time. Section 597 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 included a provision for GAO to review military physicians' and dentists' compensation, among other things. This report addresses, among other objectives, (1) how compensation for military physicians and dentists compared to that of private sector civilians with comparable skills in 2017, and (2) the extent to which DOD has developed an approach to recruit and retain military physicians and dentists through a package of incentives that reflects key principles of effective human capital management. GAO compared military and civilian cash compensation for 2017, the most recent year of data available across data sources; assessed incentive packages against key principles of human capital management; and conducted surveys and held focus groups to obtain the perspectives of current military medical students and residents regarding military service obligations.

What GAO Found

In 2017, cash compensation for military physicians and dentists in most of the 27 medical and dental specialties GAO reviewed was generally less than the median compensation of private sector civilians, but the Department of Defense (DOD) provides substantial deferred and noncash benefits, such as retirement pensions and tuition-free education, whose value to servicemembers is difficult to determine. GAO found that for 21 of the 27 physician and dental specialties, the maximum cash compensation was less than the private sector civilian median within four officer pay grades (O-3 to O-6) (see figure for the number of physician specialties by pay grade). Moreover, cash compensation for military physicians and dentists was less than the private sector civilian median at key retention points, such as after physicians and dentists fulfill their initial active-duty service obligations. DOD recruits and retains physicians and dentists through a package of incentives, including tuition-free medical or dental school and special and incentive pays, such as multi-year retention bonuses. However, DOD does not consistently collect information related to the following three key principles of effective human capital management to help inform investment decisions in its package of recruitment and retention incentives: Replacement costs. DOD does not consistently collect information on the replacement costs of military physicians and dentists. However, DOD has previously identified replacement costs as a factor in assessing the appropriateness of incentive pays. Current and historical retention information. DOD does not consistently collect information on the retention of physicians and dentists, specifically acceptance rates for retention bonuses, to help assess the effectiveness of these bonuses. Private sector civilian wages. DOD does not consistently collect information on private sector civilian wages. Officials stated that civilian wages are not a driving factor when considering adjustments to special and incentive pays, in part because DOD cannot always match civilian sector compensation for military physicians and dentists.
By collecting and using this information to help inform its decision-making, DOD would be better positioned to assess the effectiveness of its incentives to recruit and retain military physicians and dentists and to make sound investment decisions for the future.

What GAO Recommends

GAO recommends that DOD collect and use information on (1) the replacement costs of military physicians and dentists, (2) retention, and (3) private sector civilian wages to inform its investment decisions. In commenting on a draft of this report, DOD concurred with these recommendations.
Actions Needed to Address Weaknesses in TSA's Pipeline Security Program Management

In our December 2018 report, we found that TSA provides pipeline operators with voluntary security guidelines that they can implement to enhance the security of their pipeline facilities. TSA also evaluates the vulnerability of pipeline systems through security assessments. Pipeline operators and industry association representatives whom we interviewed also reported exchanging risk-related security information and coordinating with federal and nonfederal entities, including TSA. However, we also identified weaknesses in several areas of TSA's pipeline security program management, including (1) updating and clarifying pipeline security guidelines; (2) planning for workforce needs; (3) assessing pipeline risks; and (4) monitoring program performance.

Exchanging Security Information and Coordinating with Federal and Nonfederal Entities

We found in our December 2018 report that all of the pipeline operators and industry association representatives we interviewed reported receiving security information from federal and nonfederal entities. For example, DHS components, including TSA's Intelligence and Analysis and the National Cybersecurity and Communications Integration Center (NCCIC), share security-related information on physical and cyber threats and incidents. Nonfederal entities included Information Sharing and Analysis Centers, fusion centers, industry associations, and subsector coordinating councils. Pipeline operators also reported that they share security-related information with TSA and the NCCIC. For example, TSA's Pipeline Security Guidelines request that pipeline operators report physical security incidents to the Transportation Security Operations Center (TSOC) and any actual or suspected cyberattacks to the NCCIC. According to TSA officials, TSOC staff analyze incident information for national trends and common threats, and then share their observations with pipeline operators during monthly and quarterly conference calls.

Updating Pipeline Security Guidelines

In our December 2018 report, we found that the pipeline operators we interviewed reported using a range of guidelines and standards to address their physical and cybersecurity risks. For example, all 10 of the pipeline operators we interviewed stated that they had implemented those voluntary 2011 TSA Pipeline Security Guidelines that they determined to be applicable to their operations. Five of the 10 pipeline operators characterized the guidelines as generally or somewhat effective in helping to secure their operations, 1 was neutral on their effectiveness, and 4 did not provide an assessment of the guidelines' effectiveness. Pipeline operators and industry association representatives reported that their members also use the Interstate Natural Gas Association of America's Control Systems Cyber Security Guidelines for the Natural Gas Pipeline Industry, the American Petroleum Institute's Pipeline SCADA Security standard, and the National Institute of Standards and Technology's (NIST) Cybersecurity Framework as sources of cybersecurity standards, guidelines, and practices that may be scaled and applied to address a pipeline operator's cybersecurity risks. We found that TSA's Pipeline Security Branch had issued revised Pipeline Security Guidelines in March 2018, but that TSA had not established a documented process to ensure that revisions occur and fully capture updates to supporting standards and guidance.
The guidelines were revised to, among other things, reflect the dynamic threat environment and to incorporate cybersecurity principles and practices from the NIST Cybersecurity Framework, which was initially issued in February 2014. However, because NIST released version 1.1 of the Cybersecurity Framework in April 2018, the guidelines that TSA released in March 2018 did not incorporate cybersecurity elements that NIST added to the latest Cybersecurity Framework, such as the Supply Chain Risk Management category. Without a documented process defining how frequently TSA is to review and, if deemed necessary, revise its guidelines, TSA cannot ensure that the guidelines reflect the latest known standards and best practices for physical security and cybersecurity. We recommended that TSA implement a documented process for reviewing, and if deemed necessary, revising its Pipeline Security Guidelines at regular defined intervals. DHS agreed and estimated that this effort would be completed by April 30, 2019. In April 2019, TSA provided us with documentation outlining procedures for reviewing these guidelines. We are currently assessing this information to determine if it sufficiently addresses this recommendation.
We also found that TSA's Pipeline Security Guidelines lacked clarity in the definition of key terms used to determine critical facilities. TSA initially identifies the 100 highest-risk pipeline systems based on the amount of material transported through the system. Subsequently, pipeline operators are to use criteria in the guidelines to self-identify the critical facilities within those higher-risk systems and report them to TSA. TSA's Pipeline Security Branch then conducts Critical Facility Security Reviews (CFSR) at the critical facilities identified by pipeline operators. However, our analysis of TSA's data found that operators of at least 34 of the top 100 pipeline systems TSA deemed highest risk reported that they had no critical facilities. Three of the 10 operators we interviewed stated that some companies reporting no critical facilities may be taking advantage of the guidelines' lack of clarity. For example, one of TSA's criteria for determining pipeline facility criticality states that a facility is critical if, were it or a combination of facilities damaged or destroyed, it would have the potential to "cause mass casualties or significant health effects." Two operators told us that individual operators may interpret this criterion differently. For example, one of the operators we interviewed stated that the criterion could be interpreted either as a specific number of people affected or as a volume sufficient to overwhelm a local health department, which could vary depending on the locality. Without clearly defined criteria for determining pipeline facilities' criticality, TSA cannot ensure that pipeline operators are applying the guidance uniformly, that all of the critical facilities across the pipeline sector have been identified, or that their vulnerabilities have been identified and addressed. We recommended that TSA's Security Policy and Industry Engagement's Surface Division clarify TSA's Pipeline Security Guidelines by defining key terms within its criteria for determining critical facilities. DHS agreed and estimated that this effort would be completed by June 30, 2019.
Planning for Workforce Needs
TSA conducts pipeline security reviews—Corporate Security Reviews (CSR) and CFSRs—to assess pipeline vulnerabilities and industry implementation of TSA's Pipeline Security Guidelines. These reviews are intended to develop TSA's knowledge of security planning and execution at critical pipeline systems and lead to recommendations that help pipeline operators enhance pipeline security. For an overview of the CSR and CFSR processes, see figure 1 below.
However, we found that the number of CSRs and CFSRs completed by TSA varied widely from fiscal years 2014 through 2018, ranging from zero CSRs conducted in fiscal year 2014 to 23 CSRs conducted in fiscal year 2018, as of July 31, 2018 (see figure 2 below). TSA officials reported that staffing limitations had prevented TSA from conducting more reviews. TSA Pipeline Security Branch staffing levels (excluding contractor support) also varied significantly over the past 9 years, ranging from 14 full-time equivalents in fiscal years 2012 and 2013 to one in fiscal year 2014 (see table 1 below). TSA officials stated that, while contractor support has assisted with conducting CFSRs, there were no contractor personnel providing CSR support from fiscal years 2010 through 2017; contractor support increased to two personnel in fiscal year 2018. TSA officials stated that they expected to complete 20 CSRs and 60 CFSRs per fiscal year with Pipeline Security Branch employees and contract support, and had completed 23 CSRs through July 2018 for fiscal year 2018.
In addition, pipeline operators we interviewed emphasized the importance of cybersecurity skills among TSA staff. Specifically, 6 of the 10 pipeline operators and 3 of the 5 industry representatives we interviewed reported that the level of cybersecurity expertise among TSA staff and contractors may challenge the Pipeline Security Branch's ability to fully assess the cybersecurity portions of its security reviews. We found that TSA had not established a workforce plan for its Security Policy and Industry Engagement or its Pipeline Security Branch that identified staffing needs and skill sets, such as the required level of cybersecurity expertise among TSA staff and contractors. We therefore recommended that TSA develop a strategic workforce plan for its Security Policy and Industry Engagement Surface Division, which could include determining the number of personnel necessary to meet the goals set for its Pipeline Security Branch, as well as the knowledge, skills, and abilities, including cybersecurity, that are needed to effectively conduct CSRs and CFSRs. DHS agreed and estimated that this effort would be completed by July 31, 2019.
Pipeline Risk Assessments
The Pipeline Security Branch has developed a risk assessment model that combines all three elements of risk—threat, vulnerability, and consequence—to generate a risk score for pipeline systems. The branch developed the Pipeline Relative Risk Ranking Tool in 2007 to assess various security risks to the top 100 critical pipeline systems, based on the volume of material transported through the system (throughput).
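The general shape of such a model can be sketched in a few lines. The following is a hypothetical illustration (TSA's actual variables, weights, and formula are not public) using the common convention that relative risk is scored as the product of threat, vulnerability, and consequence:

```python
# Hypothetical sketch of a relative risk ranking: score each pipeline
# system as threat x vulnerability x consequence, then rank descending.
# The names, scales, and sample data are illustrative, not TSA's model.
from dataclasses import dataclass

@dataclass
class PipelineSystem:
    name: str
    threat: float         # e.g., informed by intelligence reporting, 0-1
    vulnerability: float  # e.g., informed by security review findings, 0-1
    consequence: float    # e.g., scaled by throughput and critical facilities

    @property
    def risk_score(self) -> float:
        return self.threat * self.vulnerability * self.consequence

systems = [
    PipelineSystem("System A", threat=0.7, vulnerability=0.4, consequence=0.9),
    PipelineSystem("System B", threat=0.5, vulnerability=0.6, consequence=0.8),
    PipelineSystem("System C", threat=0.9, vulnerability=0.3, consequence=0.6),
]

# Rank systems by relative risk to prioritize security reviews.
ranked = sorted(systems, key=lambda s: s.risk_score, reverse=True)
for rank, system in enumerate(ranked, start=1):
    print(f"{rank}. {system.name}: {system.risk_score:.3f}")
```

Sorting systems by the resulting score yields a prioritized list of the kind the Pipeline Security Branch used to schedule its reviews.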
The risk ranking tool calculates threat, vulnerability, and consequence for each pipeline system based on variables such as the amount of throughput in the pipeline system and the number of critical facilities, using data collected from pipeline operators as well as from other federal agencies, such as the Departments of Transportation and Defense. The ranking tool then generates a risk score for each of the 100 most critical pipeline systems and ranks them according to risk, information that TSA used to prioritize pipeline security assessments. However, in our December 2018 report we found that the last time the Pipeline Security Branch calculated relative risk among the top 100 critical pipeline systems using the ranking tool was in 2014. Because the risk assessment had not been updated since 2014, information on threat may be outdated, which may limit the usefulness of the ranking tool in allowing the Pipeline Security Branch to effectively prioritize reviews of pipeline systems. We recommended that the Security Policy and Industry Engagement's Surface Division update the Pipeline Relative Risk Ranking Tool to include up-to-date data to ensure it reflects industry conditions, including throughput and threat data. DHS agreed, and in March 2019 TSA officials reported taking steps to update the data in the tool to reflect current pipeline industry data. We are currently reviewing those actions to determine if they sufficiently address our recommendation.
We also found that some of the sources of data and vulnerability assessment inputs to the ranking tool were not fully documented. For example, threats to cybersecurity were not specifically accounted for in the description of the risk assessment methodology, making it unclear whether cybersecurity threats were part of the assessment's threat factor. We recommended that the Security Policy and Industry Engagement's Surface Division fully document the data sources, underlying assumptions, and judgments that form the basis of the Pipeline Relative Risk Ranking Tool, including sources of uncertainty and any implications for interpreting the results from the assessment. In March 2019, TSA officials stated that they had taken steps to document this information. We are currently reviewing those steps to determine if they sufficiently address our recommendation.
Monitoring Program Performance
In our December 2018 report, we also found that TSA developed three databases to track CSR and CFSR recommendations and their implementation status by pipeline facility, system, operator, and product type. TSA officials stated that the primary means of assessing the effectiveness of the agency's efforts to reduce pipeline security risks was conducting pipeline security reviews—CSRs and CFSRs. However, while TSA does track CFSR recommendations, we found that TSA had not tracked the status of CSR recommendations for security improvements in over 5 years—information necessary for TSA to effectively monitor pipeline operators' progress in improving their security posture. We recommended that TSA take steps to enter information on CSR recommendations and monitor and record their status. DHS agreed and estimated that this effort would be completed by November 30, 2019.
Chairman Rush, Ranking Member Upton, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
GAO Contact and Staff Acknowledgments
If you or your staff members have any questions about this testimony, please contact me at (202) 512-8777 or russellw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this work include Ben Atwater, Assistant Director; Steve Komadina, Analyst-in-Charge; Nick Marinos; Michael Gilmore; Tom Lombardi; Chuck Bausell; and Susan Hsu.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
More than 2.7 million miles of pipeline transport and distribute natural gas, oil, and other hazardous products throughout the United States. Interstate pipelines run through remote areas and highly populated urban areas, and are vulnerable to accidents, operating errors, and malicious physical and cyber-based attack or intrusion. Pipeline system disruptions could result in commodity price increases or widespread energy shortages. Several federal and private entities have roles in pipeline security. TSA is primarily responsible for the federal oversight of pipeline physical security and cybersecurity.
This statement summarizes previous GAO findings related to TSA's management of its pipeline security program. It is based on a prior GAO product issued in December 2018, along with updates as of April 2019 on actions TSA has taken to address GAO's recommendations from that report. To conduct the prior work, GAO analyzed TSA documents, such as its Pipeline Security Guidelines; evaluated TSA pipeline risk assessment efforts; and interviewed TSA officials, 10 U.S. pipeline operators—a non-generalizable sample selected based on volume, geography, and material transported—and representatives from five pipeline industry associations. GAO also reviewed information on TSA's actions to implement its prior recommendations.
What GAO Found
The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) has developed and provided pipeline operators with voluntary security guidelines, and also evaluates the vulnerability of pipeline systems through security assessments. However, GAO's prior work, reported in December 2018, identified some weaknesses and made recommendations to strengthen TSA's management of key aspects of its pipeline security program.
Pipeline security guidelines. GAO reported that TSA revised its voluntary pipeline security guidelines in March 2018 to reflect changes in the threat environment and incorporate most of the principles and practices from the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity. However, TSA's revisions do not include all elements of the current NIST framework, and TSA does not have a documented process for reviewing and revising its guidelines on a regular basis. GAO recommended that TSA implement a documented process for reviewing and revising TSA's Pipeline Security Guidelines at defined intervals. TSA has since outlined procedures for reviewing its guidelines, which GAO is reviewing to determine if they sufficiently address the recommendation.
Workforce planning. GAO reported that the number of TSA security reviews of pipeline systems has varied considerably over time. TSA officials stated that staffing limitations within its Pipeline Security Branch have prevented TSA from conducting more reviews. Staffing levels for the branch have varied significantly, ranging from 1 full-time equivalent in 2014 to 6 from fiscal years 2015 through 2018. Further, TSA does not have a strategic workforce plan to help ensure it identifies the skills and competencies—such as the required level of cybersecurity expertise—necessary to carry out its pipeline security responsibilities. GAO recommended that TSA develop a strategic workforce plan, which TSA plans to complete by July 2019.
Pipeline risk assessments. GAO identified factors that likely limit the usefulness of TSA's risk assessment methodology for prioritizing pipeline security reviews.
For example, TSA has not updated its risk assessment methodology since 2014 to reflect current threats to the pipeline industry. Further, its sources of data and underlying assumptions and judgments regarding certain threat and vulnerability inputs are not fully documented. GAO recommended that TSA update its risk ranking tool to include up-to-date data to ensure it reflects industry conditions, and fully document the data sources, assumptions, and judgments that form the basis of the tool. As of April 2019, TSA reported taking steps to address these recommendations. GAO is reviewing documentation of these steps to determine if they sufficiently address the recommendations.
Monitoring performance. GAO reported that conducting security reviews was the primary means for TSA to assess the effectiveness of its efforts to reduce pipeline security risks. However, TSA has not tracked the status of key security review recommendations for the past 5 years. GAO recommended that TSA take steps to update information on security review recommendations and monitor and record their status, which TSA plans to address by November 2019.
What GAO Recommends
GAO made 10 recommendations in its December 2018 report to strengthen TSA's management of its pipeline security program. DHS agreed and has described planned actions or time frames for addressing these recommendations.
Background
TSA is responsible for securing the nation's civil aviation system, which includes domestic and foreign air carrier operations to, from, within, or overflying the United States, as well as the foreign point-to-point operations of domestic air carriers. Air carriers are responsible for implementing TSA security requirements, predominantly through TSA-approved security programs. These requirements for air carriers include, among other things, measures related to the screening of passengers, baggage, and cargo; training of employees in security and screening procedures; testing employee proficiency in screening; and access to aircraft. In addition, TSA may impose additional requirements in the form of security directives or emergency amendments when more immediate action on behalf of air carriers is necessary. Whereas security programs include standing regulatory requirements, directives are not intended to be permanent in nature and are expected to eventually be canceled, for example, should the threat or vulnerability cease to exist. If TSA determines that safety and the public interest require the incorporation of measures from directives into security programs, TSA will amend the programs after providing affected air carriers with notice and an opportunity for comment.
TSA may impose directives based on the following:
Threat information. Directives may focus on addressing specific threats. For example, in June 2017, TSA announced new security requirements in a directive on international aviation security that included, among other requirements, heightened screening of personal electronic devices larger than a cellphone for air carriers operating last point of departure flights to the United States. The directive was based on intelligence that terrorists were attempting to smuggle explosive devices in various consumer items (e.g., laptops).
Events. Terrorist attacks, both successful and foiled, can also lead to the issuance of directives. For example, in response to a terrorist plot in July 2017, TSA issued security directives and emergency amendments in September 2017 requiring air carriers transporting cargo to the United States from last point of departure airports in Turkey to submit advance cargo data to DHS.
Results of foreign airport assessments and air carrier inspections. TSA may issue directives requiring air carriers to implement security measures to account for vulnerabilities at foreign airports identified during TSA assessments (e.g., inadequate perimeter fencing). Through its foreign airport assessment program, TSA determines whether foreign airports that provide service to the United States maintain and carry out effective security measures. TSA does not have authority to impose or otherwise enforce security requirements at foreign airports and, therefore, often seeks to address security vulnerabilities it identifies by working with domestic and foreign air carriers to implement mitigating security measures, as appropriate, while also working with the foreign governments to address the vulnerabilities. Measures required by directives to mitigate vulnerabilities identified during foreign airport assessments include screening passengers at the boarding gate and posting guards around parked aircraft.
Air carriers must implement the security measures set forth in applicable directives, in addition to other requirements imposed and enforced by TSA, to remain compliant with TSA security requirements.
However, TSA may approve an alternative measure in place of an existing measure required by a directive if TSA determines the alternative will achieve the required level of security. For example, an air carrier may request to use a different screening technology than specified in a directive, which TSA could approve if it determines the security outcome is commensurate, according to TSA officials. To ensure that air carriers meet applicable security requirements, including those imposed through directives, TSA conducts inspections of domestic and foreign air carriers.
TSA Directives Most Often Apply to Passenger Air Carrier Operations in Specific Foreign Locations, and Over Half Were Issued Prior to 2014
As of March 2019, there were 46 TSA directives in effect related to air carrier operations at last point of departure airports. These directives most often applied to passenger operations in specific foreign locations (see fig. 1). The characteristics of the 46 directives vary in a number of ways. For example:
Of these directives, 25 were for foreign air carriers and 21 were for domestic air carriers.
More than half of the current directives were issued prior to 2014, and most have a stated duration of 2 years or less. According to TSA officials, and corroborated by our analysis, threat-driven directives, just over 60 percent of all directives, are generally in effect for about a year. Our analysis also shows that all directives with 3-year durations pertain to cargo-related threats, which TSA officials said are unlikely to change in the near term. However, foreign airport vulnerability-driven directives may have time horizons of about 2 years because, according to TSA officials, it could take foreign governments or airport authorities longer than 1 year to take corrective actions to address the deficiencies.
About 30 percent of directives apply to air carrier operations worldwide, and 70 percent apply to air carrier operations at airports in certain countries. Specifically, 33 directives apply to specific countries in Asia, Africa, the Caribbean, Central America, or the Middle East.
The security policies the directives address also vary and include passenger screening (23 directives), cargo (23), checked baggage (12), and aircraft security (12), among others.
Although TSA generally issues directives with expiration dates, it may decide to renew a directive based on the threat or vulnerability. TSA has renewed or updated the 46 directives related to air carrier operations at last point of departure airports an average of five times each through its review process.
TSA Reviews Directives, but Industry Coordination Is Inconsistent, and TSA Has Not Fully Developed Procedures for Incorporating Directives into Security Programs
TSA has developed a process for reviewing directives that requires intra-agency coordination across TSA offices, and we found that the agency generally implemented this process in the 43 reviews it conducted from January 2017 to March 2019. However, TSA has not defined when or how it is to coordinate with air carriers and other industry stakeholders in reviewing directives. In addition, when TSA officials have coordinated with domestic and foreign air carriers, they have not documented the input air carriers provided. Further, TSA has not defined the process for cancelling or incorporating directives into air carrier security programs, and certain directives are longstanding.
TSA Developed and Implemented a Process for Reviewing Directives That Requires Intra-Agency Coordination
TSA issued a management directive in 2012 and associated standard operating procedures in 2016 to guide the development and review of directives, among other policies. The management directive provides high-level TSA policy for the development, external coordination, and issuance of, among other things, directives. Further, the management directive describes the roles and responsibilities individual TSA offices have when developing directives, which are shown in table 1. The standard operating procedures describe the process that TSA is to apply to ensure that subject-matter experts coordinate to identify the problem and formulate solutions while obtaining appropriate stakeholder input from air carriers and their associations. TSA is to develop and review directives in accordance with steps identified in the TSA management directive and associated standard operating procedures, which include creating a team, developing a problem statement and options, drafting the policy document, and obtaining interoffice and management approval. Figure 2 shows how TSA is to apply this process to the development and review of directives. The directive development process can take weeks if, for example, the directive merely expands the applicable locations of an existing directive, or several months, as was the case with the broad-scoped worldwide directive regarding personal electronic devices and other international aviation security measures.
Based on our review of TSA documents and meetings with TSA officials, TSA generally adhered to its internal process to update or cancel directives in the 43 reviews conducted from 2017 to March 2019. Key steps of this process include the following:
Initiate review process and create team. TSA initiates the directive review process because of (1) new intelligence, (2) feedback received from air carriers, or (3) new information received from foreign airport assessments or air carrier inspections, or about 90 days before a directive is to expire, according to TSA officials. After initiation, TSA's standard operating procedures state that all TSA offices that have equity in the security policy subject matter are to be invited to participate in the directive review team. TSA may also include other DHS components or government agencies on the team. According to our review of TSA documentation, in all 43 reviews TSA created an interoffice team that included Policy, Plans, and Engagement; Global Operations; and Chief Counsel. Our analysis also shows that at least 28 reviews included TSA Intelligence and Analysis. Further, certain teams reviewing vulnerability-driven directives included TSA field staff, such as TSA international industry representatives, TSA representatives, and regional operations center managers who have responsibility for the overall planning and conduct of assessments and air carrier inspections at foreign airports. In addition, according to TSA officials and corroborated by TSA documentation, the teams coordinated as needed with other federal partners, including DHS, the State Department, where TSA has a liaison embedded, and the National Security Council.
Develop problem statement and options. To understand the environment and the nature of the threat, the team is to request a threat summary from TSA Intelligence and Analysis and, based on the intelligence summary, prepare a problem statement outlining the threat and vulnerability.
The team is also to develop a proposed solution to the problem statement, and the team may propose to either update or cancel the directive through an action memo written for TSA leadership. TSA officials stated that criteria for updating and canceling directives include whether the threat or vulnerability remains, intelligence, feedback from air carriers, and the results of air carrier inspections and airport assessments. Updates can result in a renewal of the policy with no significant changes or a revision to the security measures. All reviews developed a problem statement and documented proposed solutions in action memos that also included draft updates to the directives, as applicable. Further, Intelligence and Analysis officials stated that they provided the team with updated threat information and recommendations on whether the directive required a change or could be canceled.
Obtain final approval and disseminate directive. If the team does not decide to cancel a directive, the completed drafts are to be routed to TSA offices for review and then to the administrator or assistant administrator for final approval. After final approval, TSA is to post worldwide directives to DHS's Homeland Security Information Network. However, if the directive is country- or region-specific, TSA officials stated that they post an announcement on the network that the affected air carriers should contact their TSA international industry representatives for more information. According to our file review, TSA documented interoffice approval of the updates or cancellations for at least 41 of the 43 reviews. Further, the teams obtained administrator or assistant administrator approval in all 43 reviews. TSA headquarters officials and international industry representatives, as well as air carrier representatives, confirmed that directives are posted to the Homeland Security Information Network.
TSA Does Not Consistently Coordinate with Air Carriers and Other Industry Stakeholders When Reviewing Directives
TSA's Standard Operating Procedures for Security Policy Development, Coordination, and Issuance requires TSA officials to obtain input from key stakeholders and representatives of affected regulated parties (e.g., air carriers), as appropriate, as shown in figure 2. However, the standard operating procedures do not explain what "as appropriate" means. TSA is also to incorporate key stakeholder input into the final draft, as appropriate. Figure 3 shows a TSA international industry representative briefing foreign air carrier representatives on the 2017 international aviation security emergency amendment.
TSA officials stated that they generally obtain mostly informal feedback from domestic air carriers and their associations during quarterly meetings with industry or through air carriers' regular coordination with TSA international industry representatives. However, TSA officials stated that the extent to which they include air carriers and aviation associations in the review process varies. For example, TSA officials may share drafts of the directives with the air carriers for feedback or decide to discuss the content of the directive only at a high level, depending on the threat or vulnerability, the air carriers involved, whether the changes needed are time-sensitive, and the countries involved.
While TSA's standard operating procedures state that TSA is to coordinate with air carriers and other industry stakeholders, the feasibility of doing so when issuing or updating directives (particularly when the time frame is short and security measures must reach the industry rapidly due to a specific threat or recent event) is limited, according to TSA officials. These officials noted that engagement is more likely to take place when a directive is up for renewal or is being updated.
Representatives from domestic air carriers confirmed that TSA has coordinated with them but also told us that the coordination has been inconsistent. Officials from four of the five domestic air carriers (three passenger and one all-cargo air carrier) and two associations representing domestic air carriers we met with told us that coordination with TSA on directives has improved since 2017. The air carrier representatives also stated that coordination with their TSA international industry representatives on directives was helpful. For example, all three domestic passenger air carriers we met with stated that TSA international industry representatives coordinated closely with them during the multiple revisions of the 2017 directive pertaining to international aviation security and that TSA made changes based on their feedback or approved alternative security measures they requested.
However, representatives from both passenger and all-cargo domestic air carriers and an association that represents them identified ways that TSA coordination has been inconsistent when reviewing directives. For example, representatives from one of these air carriers stated that TSA sometimes coordinates with them when revising directives but generally seeks feedback from the same one or two air carriers that fly globally or operate out of the largest number of last point of departure airports, and does not always coordinate with air carriers that do not have a large global operation. In addition, a representative from another air carrier told us that TSA only coordinated with them after they insisted on being included in the process to revise a security directive; TSA did not proactively seek their input. Similarly, representatives from an association told us that TSA did not coordinate with them on the 2018 revision of a security directive issued to increase security requirements for cargo shipments originating in, transiting through, or transferring from Egypt until the association first reached out, and that the process was not fully transparent. Although TSA verbally shared anticipated changes, representatives from the association were not clear on what the new language would say or what it meant.
While TSA sometimes includes domestic air carriers in the directive review process, foreign air carriers are generally not included, according to their representatives. Representatives from four of the five foreign air carriers we met with told us that they have a productive relationship with their TSA international industry representative and that TSA has made changes to emergency amendments based on alternative security measures they have requested. However, representatives from all five foreign air carriers noted that TSA generally does not solicit their input when reviewing emergency amendments.
Representatives from the association that represents foreign air carriers told us that TSA's coordination is sporadic: sometimes TSA would coordinate with industry when revising directives, and other times TSA would not—even though such coordination was necessary, in their view. For example, the representatives from this association stated that TSA has not consistently provided them with draft directives to review prior to issuance. These officials also stated that TSA coordination usually comes only after they request being included in the process. All three international industry representative groups responsible for coordinating with foreign air carriers confirmed that TSA generally does not include their air carriers, or the association that represents them, when revising emergency amendments. Rather than providing input to TSA directly, foreign air carriers may instead provide it to their domestic code-share partners, according to one TSA international industry group and representatives from a domestic air carrier.
Representatives from both domestic and foreign air carriers and their associations identified negative effects of inconsistent coordination with TSA during the directive review process and stated that improved coordination would lead to more efficient and effective security measures. For example, according to representatives from six air carriers and two associations we met with, TSA did not include them at all, or early enough, in the directive review process. These carriers and associations identified a number of issues with the revised directives that they attributed to this lack of coordination, such as directives that were vague, less effective, or difficult for carriers to implement. For example, representatives from an association and one air carrier noted that cargo directives are not always effective because they do not fully account for how cargo moves around the world (e.g., shippers may transport cargo by truck from one country to another before loading it onto a U.S.-bound aircraft, avoiding security measures specific to certain foreign airports). Representatives from two air carriers provided an example of vague requirements in directives related to aircraft cabin search procedures, which has led TSA international industry representatives and inspectors to offer different interpretations of the same requirement. As a result, representatives of the air carriers said they do not know how to implement the requirement and have at times been found in violation of it.
In addition, according to representatives from one foreign air carrier, had TSA included them and other foreign air carriers early in the review process, the changes to the 2017 emergency amendment pertaining to international aviation security measures would have been more efficient and effective. For example, within 3 months of issuance, TSA revised the directive twice to, among other things, change screening requirements for personal electronic devices (e.g., allowing for alternative screening methods). According to representatives from this air carrier, TSA could have reduced or eliminated the need for such revisions had TSA officials better coordinated with air carriers. Moreover, representatives from one association stated that when TSA does not involve them or the air carriers in the directive review process, TSA misses an opportunity to implement the most effective security measures and may inadvertently create security vulnerabilities.
TSA's 2018 Administrator's Intent states that TSA is to coordinate with external customers early and often for diverse perspectives and to develop trusted relationships that grow opportunities for mission success. Moreover, the Administrator's Intent includes a goal to effectively secure and safeguard the transportation system through contributions from a diverse and interconnected community of stakeholders, which includes actively seeking stakeholder input. The goal further states that coordinating with industry and other partners will enable timely and well-informed decisions and increase security effectiveness. In addition, TSA's Standard Operating Procedures for Security Policy Development, Coordination, and Issuance requires TSA officials to obtain input from key stakeholders and representatives of affected regulated parties (e.g., air carriers) when developing the problem statement, developing options, and drafting the directive (as appropriate), as discussed above. TSA is also to incorporate key stakeholder input into the final draft, as appropriate.
TSA officials identified several reasons why coordination with air carriers and their associations may be inconsistent. For example, TSA does not have guidelines that specify how it is to coordinate with air carriers and their associations, and coordination can be difficult to define. In addition, the level of coordination with industry stakeholders is to some extent driven by the discretion of TSA administrators and assistant administrators; as the personnel in these positions change, so too does the level of expected coordination with industry. According to TSA officials, they cannot write specific requirements for each of the over 200 air carriers with U.S.-bound operations and necessarily must choose which air carriers to seek input from. In addition, TSA officials noted that they coordinate with one or two domestic air carriers that chair the security committee within the association that represents both passenger and all-cargo air carriers. Further, TSA officials may decide not to share much information at all with air carriers owned and operated by certain foreign governments because of potential security concerns.
Although TSA's Standard Operating Procedures for Security Policy Development, Coordination, and Issuance require TSA officials to obtain input from air carriers and key stakeholders, the current procedures do not provide clear guidance on the circumstances under which coordination should occur. Better defining how to coordinate with air carriers and other stakeholders during the review of directives (e.g., by developing guiding principles) and implementing such guidance would help TSA ensure that it more consistently coordinates with air carriers over time, addresses air carriers' concerns, and issues directives that enable air carriers to effectively secure their operations against the identified threats or vulnerabilities.
TSA Does Not Document Input Provided by Air Carriers during Its Directive Review Process
When TSA officials have coordinated with domestic and foreign air carriers, they have not documented the input air carriers provided. Based on our review of the 43 directive reviews TSA conducted from 2017 to March 2019, TSA officials did not document the input they received from air carriers.
TSA did provide us with emails and calendar appointments showing that it met with associations and air carriers to obtain their input during revisions to the 2017 directives pertaining to international aviation security, but this documentation did not capture a summary of the discussions or stakeholder concerns. TSA's Standard Operating Procedures for Security Policy Development, Coordination, and Issuance requires that stakeholder and regulated party input be documented, including the entity consulted, the date, the location, and a brief summary of the discussion and specific stakeholder input, to include any concerns. In addition, Standards for Internal Control in the Federal Government states that effective documentation assists in management's design of internal control by establishing and communicating the who, what, when, where, and why of internal control execution to personnel. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as a means to communicate that knowledge as needed to external parties, such as external auditors.
According to TSA headquarters officials, TSA does not document its coordination with air carriers and their associations because the feedback it solicits and receives is mostly informal. TSA officials stated that for the 2017 directives pertaining to international aviation security, for example, they had to adjudicate many requests through dialogue with air carriers and their associations, but the discussions were not documented, as doing so would have been too burdensome. However, TSA officials stated that most directives do not have as broad a scope or apply to as many air carriers as the 2017 directive pertaining to international aviation security. Documenting the input provided by air carriers during the directive review process, even if the input is deemed informal, would give TSA better insight into shared air carrier views or concerns and help it retain knowledge about who, what, when, where, and why coordination occurred. In addition, TSA would be able to reference documented information for decision-making purposes, which could help ensure that TSA is consistently coordinating with air carriers during the review of directives and addressing their concerns.
TSA Cancels Directives for Various Reasons but Has Not Defined a Process for Incorporating Directives into Air Carrier Security Programs
In general, directives are not meant to be permanent, and TSA has canceled some of them in recent years. Specifically, of the 78 directives related to air carrier operations at last point of departure airports in effect at some point from fiscal year 2012 to March 2019, 46 remain current while 32 were canceled for a variety of reasons (see fig. 4). One reason TSA might cancel a directive is that the agency incorporates the directive's security measures into air carrier security programs. When this occurs, TSA initiates the directive review process, and the directive is canceled simultaneously with the security program change taking effect, according to TSA officials. TSA officials stated that they follow a similar process when they cancel a directive and include that directive's security measures in a new directive. As a result, there is no lapse in security measure requirements. Although TSA has canceled some directives, others are longstanding.
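The lifecycle bookkeeping at issue is straightforward to picture. The sketch below is purely illustrative (the record fields, dates, and review logic are hypothetical, not TSA's actual tracking systems); it shows how a directive's age, sunset date, and renewal count could drive the cancel-or-incorporate decision point that TSA policy describes:

```python
# Hypothetical sketch: track directive age against a sunset date and
# flag longstanding directives for a cancel/incorporate decision.
# Fields, dates, and sample records are illustrative, not TSA data.
from dataclasses import dataclass
from datetime import date

@dataclass
class Directive:
    name: str
    issued: date
    sunset: date
    renewals: int

def review_status(d: Directive, today: date) -> str:
    age_years = (today - d.issued).days / 365.25
    if today < d.sunset:
        return f"{d.name}: in effect ({age_years:.1f} years old)"
    # Past sunset: policy calls for a decision, not an automatic renewal.
    return (f"{d.name}: past sunset after {d.renewals} renewals; "
            "review for cancellation or incorporation into security programs")

directives = [
    Directive("EA-Example-2012", date(2012, 7, 1), date(2014, 7, 1), renewals=13),
    Directive("SD-Example-2018", date(2018, 9, 1), date(2020, 9, 1), renewals=0),
]
for d in directives:
    print(review_status(d, today=date(2019, 3, 1)))
```

Under these hypothetical inputs, the 2012-style amendment, renewed repeatedly past its sunset date, is exactly the kind of record such a check would surface for a decision.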
According to TSA officials, they have incorporated threat-based directives into air carrier security programs but not foreign airport vulnerability-based directives, because the latter are site-specific and would not apply to all air carriers. However, as shown in figure 5, more than half (25 of 46) of the directives related to last point of departure airports have been in effect for more than 5 years, and about one quarter (12) were threat-based. According to TSA officials, the threats pertaining to these directives still exist. Further, certain security measures predate the directives currently in effect: as shown in figure 4, the security measures within about one-third (12) of the canceled directives were incorporated into new directives, and according to TSA officials, there are security measures in certain directives that predate the creation of TSA in 2001.
Representatives of the air carriers and associations we met with identified directives that have, in their view, persisted for too long, which can create redundant and confusing security requirements. Specifically, half of the air carrier representatives we met with told us that some directive requirements conflict with requirements in the air carriers' security programs, are redundant, or could be incorporated into the security programs. According to representatives from one air carrier, without an exit strategy or plan to help TSA determine when it can cancel directives, directives may remain in effect beyond their useful time frame and are in some instances outdated or redundant. For example, representatives from this air carrier stated that directives require air carriers to identify baggage in a manner intended to thwart an attack in which passengers check baggage containing explosives but do not board the plane; given advancements in screening technology, such security measures are no longer required, according to these representatives. In addition, according to representatives from another air carrier, there are often conflicts between the directives and the security programs, which may cause confusion and sometimes misinterpretation of security requirements. Further, representatives from a third air carrier and one association told us that there is value in incorporating directives into air carrier security programs because doing so removes uncertainty and allows air carriers to better plan for security requirements.
TSA headquarters and field officials told us that there are directives that could be incorporated into air carrier security programs. For example, TSA headquarters officials stated that they have identified several such directives, including a 2012 emergency amendment and a 2017 security directive and emergency amendment related to passenger international aviation security; a 2014 security directive regarding the handling of items containing liquids, aerosols, and gels (e.g., personal hygiene products) brought into the aircraft cabin by passengers; and security directives and emergency amendments pertaining to cargo from certain Middle Eastern and African countries. Further, three groups of TSA international industry representatives told us that TSA should incorporate certain directives into security programs, stating that certain directives overlap, have outdated requirements, or contradict each other.
For example, they highlighted overlap between requirements found in the 2012 emergency amendment and the 2017 emergency amendments related to passenger international aviation security, as well as in the air carriers' security programs. Both emergency amendments contain security requirements pertaining to passenger screening, aircraft security, and catering. According to one group of international industry representatives, there is confusion among themselves and air carriers over which emergency amendment supersedes the other.
Although TSA officials have identified directives that they may be able to cancel by incorporating them into security programs, TSA does not have a defined process for doing so. TSA's standard operating procedures provide step-by-step guidance for issuing new or revised security requirements through the directive review process, but they do not provide similar guidance for incorporating directives into security programs. Specifically, TSA officials have not resolved how they will accomplish key steps in incorporating certain longstanding directives into the security programs. For example, TSA officials stated that they are considering incorporating a 2011 security directive and emergency amendment pertaining to security measures for cargo from Yemen, but they are unclear how they might request comments from air carriers because not all air carriers transport cargo from that country. Further, TSA officials stated that they have not determined whether or how they might incorporate vulnerability-driven directives into security programs.
In addition, according to TSA officials, TSA's reorganizations, personnel changes, and limited staff availability have delayed efforts to incorporate longstanding directives into security programs. TSA officials stated they have been attempting to incorporate the 2012 international aviation security emergency amendment into the security programs for foreign air carriers for the past 10 years. Specifically, in 2012 TSA consolidated over 20 worldwide threat-based emergency amendments issued from 2001 to 2012 into one emergency amendment covering a number of different types of security measures, with the plan to next incorporate it into the security program, according to TSA officials. Since that time, however, TSA has renewed the emergency amendment 13 times, each time with a new expiration date. TSA officials stated that it is easier to renew directives to ensure that the security measures remain in place than to incorporate them into security programs.
Despite these challenges, TSA officials stated that they are mapping out how to incorporate certain directives into air carrier security programs and that they may be able to develop the changes to the programs and draft action memos for the TSA Administrator to approve by the end of 2019. As of July 2019, TSA officials had identified the directives they first planned to migrate into security programs and had begun the process, but they had not yet finalized plans for doing so.
TSA Management Directive 2100.5 provides high-level TSA policy for the development, external coordination, and issuance of security programs and directives. It states that during the creation of all directives (i.e., security directives and emergency amendments), a sunset date will be assigned. This date is to serve as the point at which the agency will decide either to cancel the directive or to convert it into a security program change.
Factors in this decision will include a comprehensive intelligence review, an assessment of risk-based relevance, and operator performance and compliance. According to the management directive, this lifecycle analysis will ensure that directives are not permanent in nature and that the security program change process is routinely used as the vehicle for long-term regulatory requirements. However, the management directive does not preclude continuation of a directive, and TSA may decide to renew a directive, as appropriate. Further, according to the standard operating procedures associated with this management directive, the goal of the policy development process is to enhance TSA's ability to make sound and timely policy decisions. In addition, Standards for Internal Control in the Federal Government states that management should define objectives clearly to enable the identification of risks and define risk tolerances. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. By defining the process for cancelling or incorporating directives into security programs, including expected time frames, and taking actions to implement this process, as applicable, TSA could better ensure that it clarifies and streamlines the security requirements for air carriers that operate at last point of departure airports in a timely manner and in a way that uses limited resources efficiently. Further, taking these steps would help ensure that requirements in directives that should become permanent are incorporated into security programs.
Conclusions
Given that terrorist groups continue to target international aviation, it is paramount that TSA effectively update and issue security directives and emergency amendments in response to threats. For the approximately 300 airports in foreign countries offering last point of departure flights to the United States, TSA may issue directives when immediate action on behalf of air carriers is necessary, and it has developed a review process for these directives, but it has not defined the circumstances under which it is to coordinate with air carriers and other industry stakeholders throughout the process. Better defining how TSA is to coordinate with air carriers (e.g., by developing guiding principles) and implementing such guidance would help TSA ensure that it coordinates with air carriers more consistently over time, that air carriers' concerns are addressed, and that it issues directives that enable air carriers to effectively secure their operations against the identified threats or vulnerabilities. In addition, documenting the input provided by air carriers during the directive review process would help TSA better ensure that it captures stakeholder views or concerns and retains knowledge about who, what, when, where, and why coordination occurred. TSA would also be able to reference documented information for decision-making purposes, which could help ensure that TSA is consistently coordinating with air carriers during the review of directives and addressing their concerns. Further, TSA has not always canceled longstanding directives or incorporated them into air carrier security programs, even though, according to TSA Management Directive 2100.5, directives are not meant to be permanent.
Recognizing that threat-driven exigent circumstances may preclude consultation, better defining the process for cancelling or incorporating directives into security programs, including expected time frames, and taking actions to implement this process, as applicable, could better ensure that TSA clarifies and streamlines the security requirements for air carriers that operate at last point of departure airports in a timely manner and in a way that uses limited resources efficiently.
Recommendations for Executive Action
We are making the following three recommendations to TSA:
The Administrator of TSA should ensure that the Assistant Administrator for Policy, Plans, and Engagement and the Assistant Administrator for Global Operations better define (e.g., by developing guiding principles) how TSA is to coordinate with air carriers and other stakeholders during the review of security directives and emergency amendments, and implement such guidance. (Recommendation 1)
The Administrator of TSA should ensure that input provided by air carriers and other stakeholders is documented during the security directive and emergency amendment review process. (Recommendation 2)
The Administrator of TSA should ensure that the Assistant Administrator for Policy, Plans, and Engagement defines a process for cancelling or incorporating security directives and emergency amendments into security programs, including time frames, and takes action to implement this process, as applicable. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of our report to DHS for review and comment. In written comments, which are included in appendix I and discussed below, DHS concurred with our three recommendations and described actions taken to address them. DHS also provided technical comments, which we have incorporated into the report, as appropriate.
With respect to our first recommendation that TSA better define how to coordinate with air carriers and other stakeholders during the review of security directives and emergency amendments, and implement such guidance, DHS stated that TSA is developing a process for more formal and consistent coordination with air carrier and industry association stakeholders. With regard to our second recommendation that TSA document the input provided by air carriers and other stakeholders during the security directive and emergency amendment review process, DHS stated that TSA will require international industry representatives and other TSA officials to keep records of all communications related to review of and feedback on directives; TSA officials plan to incorporate substantive feedback into action memos associated with the review of directives. With respect to our third recommendation that TSA define a process for cancelling or incorporating security directives and emergency amendments into security programs, DHS stated that TSA will establish milestones at which TSA will conduct a formal review to determine if longstanding directives should be consolidated into a security program or otherwise cancelled.
We are sending this report to the appropriate congressional committees and to the acting Secretary of Homeland Security. In addition, this report is available at no charge on the GAO website at http://gao.gov. If you or your staff members have any questions about this report, please contact William Russell at (202) 512-8777 or russellw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Homeland Security
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
William Russell (202) 512-8777 or russellw@gao.gov.
Staff Acknowledgments
In addition to the contact above, Kevin Heinz (Assistant Director), Paul Hobart (Analyst-in-Charge), Charles Bausell, Michele Fejfar, Sally Gilley, Eric Hauswirth, Tom Lombardi, and Adam Vogt made key contributions.
Why GAO Did This Study
Approximately 300 airports in foreign countries offer last point of departure flights to the United States. When threat information or vulnerabilities at foreign airports indicate an immediate need for air carriers to implement additional security measures, TSA may issue new or revise existing security directives (for domestic air carriers) and emergency amendments (for foreign air carriers). The TSA Modernization Act includes a provision for GAO to examine TSA's review process for directives that apply at last point of departure airports. This report (1) identifies key characteristics of the TSA directives and (2) assesses TSA's process to review directives. GAO reviewed TSA policies and procedures, analyzed TSA program information, and interviewed TSA officials and representatives from a nongeneralizable sample of 10 air carriers, selected to represent carriers with high numbers of U.S.-bound flights, and three industry associations.
What GAO Found
As of March 2019, there were 46 Transportation Security Administration (TSA) security directives and emergency amendments (i.e., directives) in effect related to air carrier operations at foreign airports. Twenty-eight directives addressed threats (e.g., explosives in laptops) and 18 pertained to vulnerabilities identified at foreign airports (e.g., inadequate perimeter fencing). TSA reviews directives, but its process does not fully define how to coordinate with industry representatives, and TSA has not incorporated the security measures of many longstanding directives into air carrier security programs in accordance with TSA policy. Representatives from four domestic air carriers stated that coordination with TSA on directives has improved. However, representatives from six air carriers and two associations indicated that TSA has issued revised directives that are vague or difficult to implement—which, for example, contributed to TSA officials offering different interpretations of aircraft cabin search requirements—because TSA did not sufficiently include them in the review process. Better defining how TSA coordinates with air carriers and other stakeholders would help ensure that TSA issues directives that enable air carriers to effectively secure their operations against the identified threats or vulnerabilities. In addition, when TSA officials have coordinated with air carriers, they have not documented the input provided. Documenting the input could help ensure that TSA is consistently addressing air carrier concerns and retaining knowledge about who, what, when, where, and why coordination occurred. Further, TSA policy states that directives are not intended to be permanent and are expected to eventually be canceled or incorporated into security programs. GAO analysis found that TSA issued more than one half (25) of the directives prior to 2014, meaning they have been in effect for more than 5 years. Several have been in effect for more than 10 years (see figure). As of July 2019, TSA officials had begun the process to migrate directives into security programs as deemed appropriate, but had not yet finalized their plans for doing so. Defining the process for incorporating directives into security programs, including expected time frames, and taking actions to implement this process, as applicable, could better ensure that TSA clarifies and streamlines security requirements in a timely manner.
What GAO Recommends
GAO recommends that TSA (1) better define how to coordinate with air carriers when reviewing directives, (2) document air carrier input, and (3) define a process, including time frames, for canceling or incorporating security measures from directives into security programs. DHS concurred with all three recommendations.
Background
NVRA Overview
In passing the NVRA in 1993, Congress found that unfair registration laws and procedures can have a direct and damaging effect on voter participation in federal elections. The NVRA was intended, in part, to establish procedures to increase the number of eligible citizens who register to vote in federal elections, as well as to protect the integrity of the electoral process and ensure accurate and current voter registration rolls. As such, the NVRA includes provisions focusing on both increasing opportunities for voter registration and improving voter registration list maintenance. Table 1 below includes a summary of these provisions. The Help America Vote Act of 2002 (HAVA), which amended the NVRA, requires states to implement an interactive computerized statewide voter registration list and to perform regular list maintenance by comparing their voter registration lists against state records on felons and deaths. HAVA also established the Election Assistance Commission to assist the states regarding HAVA compliance and to serve as a national clearinghouse of election administration information, among other purposes.
DOJ Role in NVRA and Election Fraud Enforcement
In the United States, the authority to regulate elections is shared by federal, state, and local officials. DOJ is responsible for (1) civil investigations and enforcement under federal voting rights laws, such as the NVRA, and (2) criminal investigations and prosecutions under federal election crime statutes, such as those prohibiting double voting or voting by noncitizens. With regard to enforcement of NVRA provisions: the Civil Rights Division's Voting Section (Voting Section), within DOJ, enforces the civil provisions of federal laws that protect the right to vote, including provisions of the NVRA, as well as HAVA, the Voting Rights Act of 1965, and the Uniformed and Overseas Citizens Absentee Voting Act, among others. In addition to DOJ's role in enforcing the NVRA, the law also allows a private party (a person or organization) who is aggrieved by a violation of the NVRA to bring a civil action against the state or local agency responsible for voter registration. With regard to enforcement of federal election crime statutes: the Criminal Division's Public Integrity Section supervises DOJ's nationwide response to election crimes, such as voter fraud and campaign finance offenses, and reviews all major investigations and criminal charges proposed by U.S. Attorneys' Offices relating to election crime. Public Integrity Section attorneys investigate and prosecute selected cases involving alleged corruption (including election crimes) by federal, state, or local government officials. U.S. Attorneys' Offices investigate and prosecute a wide range of criminal activities, including federal election fraud, within their respective federal judicial districts. Each U.S. Attorney exercises wide discretion in the use of his or her resources to further the priorities of the local jurisdictions and needs of their communities. DOJ's civil and criminal enforcement actions are recorded in case management systems that differentiate between matters and cases. A matter is defined as an activity, such as an investigation of an allegation, that has not yet resulted in the filing of a complaint, indictment, or information in court. A matter may eventually become a case, or may be closed without further action.
A case is defined as an activity that has resulted in the filing of a complaint, indictment, or information in court. Cases typically start as matters. The process for initiating matters and filing cases varies across the three DOJ components we reviewed. For example, within the Criminal Division, staff are to open a matter when they have worked on an investigation for a minimum of 30 minutes.
State and Local List Maintenance Roles and Responsibilities Under the NVRA
States are responsible for the administration of state and federal elections, and states regulate various aspects of elections including, for example, registration procedures, absentee and early voting requirements, and Election Day procedures. Within each state, responsibility for managing, planning, and conducting elections is largely a local process, residing with about 10,500 local election jurisdictions nationwide. Under the NVRA and HAVA, states are required to have a voter registration list maintenance program, and state and local election jurisdictions are responsible for ensuring that the registration lists are accurate, and that ineligible voters are lawfully removed. The NVRA specifies certain categories under which jurisdictions may remove registrants from voter registration lists, including: if a registrant has moved outside of a jurisdiction and either (1) confirmed the move in writing or (2) failed to respond to address confirmation mailings and failed to vote in two consecutive federal general elections subsequent to the mailing; death of the registrant; criminal conviction of the registrant, as provided for in state law; and mental incapacity of the registrant, as provided for in state law. State and local election officials can only remove registrants from the voter registration list after meeting certain requirements outlined in the act. Specifically, the NVRA stipulates that list maintenance activities must be uniform, non-discriminatory, and in compliance with the Voting Rights Act; and that programs to systematically remove ineligible voters must not be undertaken within 90 days of a federal election, except under certain circumstances. As noted above, election officials may remove a registrant from the voter registration list for change of residence if the registrant confirms the move in writing, or fails to respond to an address confirmation notice and fails to vote in two subsequent federal general elections following the mailing of the address confirmation notice. While state procedures differ, states generally designate as "inactive" registrants who are sent an address confirmation notice or who fail to respond to the notice in a timely manner. The "inactive" status generally indicates that the election officials may need to receive information from the registrant or other sources to confirm the registrant's address. See figure 1 for an illustration of the NVRA confirmation and removal process for registrants who may have moved outside of the jurisdiction. States and local jurisdictions use different data sources and different processes and procedures to obtain information under the NVRA removal categories and to maintain accurate voter registration lists. For example, election offices in some states collaborate with their state's motor vehicle agencies—such as a Department of Motor Vehicles—to acquire information on changes to registrants' addresses or other identifying information.
Some states also participate in interstate exchanges—such as the Electronic Registration Information Center (ERIC) and Crosscheck—to compare information from their voter registration lists and other state and local sources. States may also use national databases—such as the U.S. Postal Service's National Change of Address (NCOA) database or the Social Security Administration's public Death Master File (DMF)—to identify registrants who have moved to another jurisdiction or state, or who have died. Multiple factors such as state laws, costs, the security of voter registration information, and related privacy considerations play a role in election officials' list maintenance activities and procedures. In some states, the state maintains the responsibility for matching some data sources (such as data on deaths and moves) against the voter registration list and removing certain ineligible voters; whereas in other states, local jurisdictions have a larger role in the list maintenance process.
DOJ Engaged in Various Efforts to Enforce the NVRA and Address Election Fraud from Fiscal Years 2001 through 2017
DOJ's Voting Section Initiated Matters, Participated in Cases, and Engaged in Other Efforts to Enforce the NVRA's Registration Opportunity and List Maintenance Requirements
Within DOJ, the Voting Section has the authority to initiate a matter or pursue a case under the NVRA, among the other voting laws for which it is responsible. According to Section officials, the Section identifies potential NVRA violations through several means, including reviewing publicly available federal election and other data, reviewing publicly available federal and third-party reports, receiving complaints, and conducting compliance investigations that may include visits to state and local offices. Officials stated that after initiating and conducting an investigation (or matter), the Section makes a recommendation to the head of the Civil Rights Division, who then decides which action to take, such as pursuing litigation by filing a case against a state or local election jurisdiction. The Voting Section categorizes its NVRA-related matters and cases as related to providing registration opportunities for voters (registration opportunities), or related to the rules regarding maintenance of voter registration lists under specified conditions, which includes both wrongful removals of eligible voters and failure to remove ineligible voters (list maintenance). In addition to enforcing the NVRA through initiating matters and filing cases, the Voting Section participated in NVRA cases as an amicus curiae or "friend of the court," entered into settlement agreements with states to address issues related to NVRA provisions, and engaged in other efforts to assess compliance with NVRA requirements.
More NVRA Registration Opportunity Matters than List Maintenance Matters Initiated
According to Civil Rights Division data we analyzed, the Voting Section initiated 1,295 matters from fiscal years 2001 through 2017 to investigate issues related to provisions of statutes such as the NVRA, HAVA, and the Voting Rights Act. Of these 1,295 matters, 99 involved allegations under the NVRA. As shown in figure 2, the Section initiated the largest number of NVRA matters during this period in fiscal years 2008 (15) and 2011 (25). In initiating matters under the NVRA, the Voting Section investigated issues related to state and local jurisdiction efforts to provide registration opportunities for voters and issues related to list maintenance.
Specifically, of the 99 NVRA matters the Voting Section initiated, 58 matters involved registration opportunity issues, 17 involved list maintenance issues, and 5 involved both registration opportunity and list maintenance issues. As shown in figure 3, the Section initiated registration opportunity matters in each year except fiscal year 2007. The Section initiated the most registration opportunity matters in fiscal years 2008 (13), 2011 (10), and 2013 (7). The Section did not initiate any list maintenance matters in some years, and initiated between one and four in other years. From fiscal years 2001 through 2017, the Voting Section participated in 234 cases, including those with claims brought under statutes such as the NVRA, HAVA, and the Voting Rights Act. Of the 234 total cases, 23 involved claims brought under the NVRA. Figure 4 shows the total number of cases, and the number of NVRA related cases, in which the Section participated, by fiscal year. In contrast to matters, the Voting Section filed more cases related to list maintenance allegations under the NVRA than cases related to registration opportunities. Of the 23 cases where the Section took action to enforce the NVRA, the Section was the plaintiff or plaintiff intervenor in 14 cases. As shown in figure 5, eight of the 14 NVRA cases the Section filed as the plaintiff or plaintiff intervenor involved allegations under the law’s list maintenance provisions, and two involved allegations under both the list maintenance and registration opportunity provisions. The remaining four cases involved allegations under the law’s registration opportunity provisions. Of the 10 total cases involving list maintenance allegations, eight were filed between fiscal years 2002 and 2007. See appendix II for a summary of each NVRA related case the Section filed from fiscal years 2001 through 2017. With regard to list maintenance cases, as shown in figure 5, the Voting Section filed 10 such cases from fiscal years 2001 through 2017. NVRA list maintenance cases may involve two types of allegations: (1) in conducting a required program to remove ineligible voters from the voter registration list, a state or local jurisdiction did not incorporate certain safeguards, thus unlawfully removing eligible voters; and (2) a state or local jurisdiction did not have an adequate program to remove ineligible voters from the voter registration list. We reviewed the allegations in each of the 10 cases involving NVRA list maintenance claims and found that: Four of the 10 cases (filed in fiscal years 2002, 2007, 2012, and 2017) involved claims that the state or local jurisdiction unlawfully removed voters from registration lists. For example, in one case the Section alleged that the state systematically removed voters from its voter registration rolls within 90 days of a federal election, in violation of the NVRA, among other claims. Four of the 10 cases (filed in fiscal years 2006 and 2007) involved claims that the state or local jurisdiction did not have an adequate program to remove ineligible voters from registration lists. For example, in one case the Section alleged that a state failed to conduct a program that makes a reasonable effort to identify and remove ineligible voters from the state’s registration list, and that, as a result, the state had counties with excessively high registration totals compared to the voting age population. Two of the 10 cases (filed in fiscal years 2004 and 2006) involved both types of claims. 
For example, in one case the Section alleged that a number of local jurisdictions in one state did not regularly remove persons who died from their voter registration lists, resulting in ineligible voters remaining on the lists. The Section further alleged that local jurisdictions in the state did not always follow NVRA notice and timing requirements with respect to voters who may have moved, resulting in the unlawful removal of voters from voter registration lists. With regard to registration opportunities, the Voting Section filed six cases involving allegations under the NVRA’s registration opportunities provisions from fiscal years 2001 through 2017. We reviewed the allegations in each of these six cases and found that: Three of the six cases involved claims that the state failed to offer voter registration opportunities in public assistance offices and offices that provide state-funded programs primarily serving persons with disabilities. For example, in one case the Section alleged that employees in state offices that provide public assistance, and employees in state-funded programs serving persons with disabilities, failed to distribute voter registration applications. The Section also alleged that such offices failed to train and monitor their employees to ensure that they distribute voter registration applications to clients and transmit completed applications to the state and local election offices. One of the six cases involved claims that the state failed to offer voter registration opportunities in both motor vehicle and public assistance offices. Specifically, the Section alleged that the state did not provide a voter registration form with the state’s driver’s license application form. The Section further alleged that employees in state offices that provide public assistance, and employees in state-funded programs serving persons with disabilities, failed to distribute voter registration applications, among other claims. Two of the six cases involved claims that local election jurisdictions failed to process and register voter registration applicants. For example, in one case the Section alleged that a local election office did not process voter registration applications submitted by applicants at least 30 days before an election in a timely manner, which resulted in eligible applicants not being able to vote in their appropriate precincts in that election. DOJ officials have provided various perspectives on the department’s NVRA enforcement efforts. For example, in October 2009, we reported that the Assistant Attorney General for the Civil Rights Division prioritized NVRA list maintenance cases from fiscal years 2001 through 2007. Specifically, we reported that, according to Voting Section officials, the department focused during this period on both ensuring states had a list maintenance program and ensuring that such programs incorporated required safeguards. In a 2013 report, the DOJ Office of Inspector General reported that Civil Rights Division leadership initiated an effort to enforce the NVRA’s list maintenance provisions in late 2004. The report further noted that Civil Rights Division leadership placed a higher priority on the enforcement of the NVRA’s ballot access, or registration, provisions between 2009 and 2012. Section officials we interviewed for this review did not identify any overall Section-wide priorities between fiscal year 2010 and fiscal year 2017 that focused specifically on either list maintenance or registration. 
These officials explained that the Section cycles through the various NVRA provisions over time, but provided limited details and did not directly attribute any increase in matters or cases over time to Section initiatives or priorities. Officials further noted that the Section pursued fewer NVRA-related cases after 2010 in part due to resource limitations and other priorities within the Section. For example, officials stated that the Section handled a number of Voting Rights Act cases during this time, which required a significant amount of staff time and resources.
NVRA Amicus Participation Increased Since Fiscal Year 2012
In addition to initiating matters and filing NVRA cases as a plaintiff, the Voting Section engaged in efforts to enforce the NVRA's registration opportunity and list maintenance provisions by participating as an amicus curiae or "friend of the court" in eight NVRA cases from fiscal year 2001 through fiscal year 2017. The Section participated in seven of these eight cases between fiscal years 2012 and 2017. Four of the eight cases involved registration opportunity complaints and four involved list maintenance complaints. According to Voting Section officials, amicus participation increased in these years in part because it was a way for the Section to participate in cases in a manner that did not require a significant amount of resources. Specifically, officials stated that filing an amicus brief takes considerably less time and fewer staff resources than litigating a case.
Out-of-Court Settlement Agreements with States Addressed NVRA Registration Opportunities
The Voting Section entered into five out-of-court settlement agreements with states (in lieu of filing a case) to address allegations of NVRA noncompliance between fiscal years 2008 and 2017. All five of the agreements were related to the law's registration opportunity provisions. For example, in one settlement agreement, a state agreed to make modifications to its internet site and the forms, procedures, and electronic system used at its motor vehicle offices in order to meet the requirements of section 5 of the NVRA, which stipulates that states offer voter registration opportunities at state motor vehicle agencies. The state further agreed to produce a compliance plan to meet these goals and to develop and implement a mandatory NVRA training program, among other things. The agreement included monitoring procedures, such as requiring the state to provide DOJ with quarterly reports of the number of in-person driver's license applications received and completed voter registration forms accepted and transmitted to county boards of elections. According to Voting Section officials, the determination of the appropriate type of enforcement action in a matter, such as a settlement agreement or court order, can depend on a range of factors. For example, officials stated that relevant factors can include the nature, scope, and length of the violation, the level of cooperation by relevant actors regarding remedies, and the authority of relevant officials under state law to take remedial actions. The NVRA settlement agreements we reviewed are all multi-year agreements, and Section officials noted that they try to collaborate with the state or jurisdiction regarding the appropriate steps (e.g., generating monthly, quarterly, or biannual reports) for measuring and monitoring compliance during the period of the agreement.
Section attorneys monitor settlement agreements by reviewing each required report and conferring with managers about progress towards compliance.
Efforts to Assess Compliance with NVRA Requirements
According to Voting Section officials, the Section engaged in various efforts to assess state and local jurisdiction compliance with NVRA registration opportunity and list maintenance requirements, including conducting reviews of federal election administration and other data, and compliance investigations. Specifically, Section officials said that they conduct periodic reviews of the U.S. Election Assistance Commission's biennial Election Administration and Voting Survey (EAVS) to assess compliance with different NVRA provisions. For example, officials noted they may review EAVS data summarizing states' motor vehicle agency driver's license and voter registration transactions to help determine whether states are following NVRA section 5. In addition to using EAVS data, officials said they review publicly available third-party reports, which often include state-specific registration data and other qualitative information about state processes. Section officials said this information can help them identify states that are potentially not in compliance with the NVRA. Officials also said that Section investigators have conducted observations at motor vehicle agencies and social services agencies as part of their efforts to assess and enforce NVRA compliance. Section officials noted that these efforts are not conducted on a regular schedule; rather, they are conducted periodically, on an intermittent, rolling basis. These officials said such efforts may lead them to request additional information from states, conduct compliance investigations, and initiate enforcement actions if necessary. For example: The DOJ Office of Inspector General reported that, in 2004, the Voting Section reviewed census and voter registration data for all 50 states to determine which states had more people registered to vote than the voting-age population. The Inspector General further reported that, based on the results of the research, the Section sent letters to 12 states requesting information on their efforts to remove ineligible voters from their registration lists, and ultimately filed two cases as a result of this enforcement initiative. In June 2017, the Voting Section sent letters to the 44 states subject to the NVRA requesting information related to states' compliance with the law's list maintenance provisions. Section officials stated that, as of March 2019, two actions have resulted from this effort: (1) the Section became a plaintiff-intervenor in a June 2018 case against Kentucky for having an inadequate list maintenance program; and (2) the Section entered into a February 2019 memorandum of understanding with the state of Connecticut regarding its efforts to identify registered voters who have died. Officials noted that the effort begun in 2017 does not have any specific time frames, goals, or objectives, but that the Section is reviewing the data states provided and focusing detailed reviews on states whose data suggest possible noncompliance. Section officials said that in general, assessing compliance with NVRA section 8 (list maintenance) is more challenging than for the other sections, such as section 5 (voter registration opportunities at motor vehicle agencies).
For registration opportunity provisions, they can send an investigator to the agency to observe whether the agency is offering people the opportunity to register as part of their standard transactions. However, officials noted there is no observation they can conduct to determine if list maintenance is occurring as required. As such, officials stated that DOJ is uniquely dependent on information and data from the states and local jurisdictions to indicate whether list maintenance efforts are taking place and what type. Officials further noted that they may have reduced time to analyze data or otherwise pursue more general enforcement efforts in time periods where the Section is overseeing a high number of defensive cases (ones in which the U.S. government is the defendant). DOJ’s Public Integrity Section and U.S. Attorneys’ Offices Initiated Matters and Filed Cases to Address Potential Election Fraud Federal, state, and local authorities share responsibility for addressing allegations of election fraud. Within the federal government, DOJ has jurisdiction over election fraud investigations and prosecutions in elections where a federal candidate is on the ballot. In the absence of a federal candidate on the ballot, DOJ may have jurisdiction where facts exist to support the application of federal criminal laws that potentially apply to both federal and non-federal elections. According to DOJ officials, federal authorities would ordinarily defer to state and local authorities in deciding who would pursue an election fraud investigation or case because of states’ primary authority over the election process. DOJ’s Federal Prosecution of Election Offenses states that election fraud usually involves the corruption of one of three processes: the obtaining and marking of ballots, the counting and certification of election results, or the registration of voters. Within DOJ, the Public Integrity Section and U.S. Attorneys’ Offices maintain certain data on the election fraud matters and cases they initiate and prosecute. Within their respective databases, DOJ attorneys select a program category for each matter and case, which helps define the type of criminal act being investigated or prosecuted, for example, election fraud or health care fraud. U.S. Attorneys’ Offices use the program category “election fraud” for all election related charges; attorneys in the Public Integrity Section use either “election fraud” or “election crime other.” According to DOJ officials, categorization of matters and cases as election fraud (or any other category) is at the discretion of the investigating or prosecuting attorney based upon an examination of the facts. We refer to matters and cases that were either categorized as election fraud or election crime other, or included individual charges we identified as “election fraud related.” Election fraud related matters and cases in the DOJ databases we reviewed included charges brought under a wide variety of statutes, including those related to providing false information in registering or voting and vote buying (52 U.S.C. § 10307(c)) and voting by noncitizens (18 U.S.C. § 611), as well as more general charges such as the general federal conspiracy charge (18 U.S.C. § 371). 
The Public Integrity Section Initiated 33 Matters and Filed 19 Cases Related to Election Fraud from Fiscal Years 2001 through 2017
From fiscal years 2001 through 2017, the Public Integrity Section initiated 1,408 matters, of which 33 were election fraud related, or about two percent of its overall matters. As shown in figure 6, the Section initiated 10 of the 33 election fraud related matters in fiscal year 2011, six in fiscal year 2013, four in fiscal year 2003, and four in fiscal year 2012. From fiscal years 2001 through 2017, the Public Integrity Section filed 695 cases, of which 19 were election fraud related, or about three percent of its overall caseload. As shown in figure 7, the Section filed election fraud related cases in five of those fiscal years, with seven of the 19 cases filed in fiscal year 2003 and five filed in fiscal year 2014. Public Integrity Section officials stated that the Section's involvement in election fraud related matters and cases may vary over time depending on a variety of factors, including the number of complaints received and staffing levels within the Section. Officials stated that the Section allocates attorneys to work on election-related matters and cases as needed, if resources allow. U.S. Attorneys' Offices are required to consult with the Public Integrity Section with regard to all federal criminal matters that focus on corruption of the election process, in addition to federal patronage and campaign finance-related crimes. The Section reviews this information and consults with U.S. Attorneys' Offices on their election-related work. U.S. Attorneys' Offices may also request assistance with a case if they lack sufficient resources to prosecute a complex case, or if the office needs to recuse itself. If the Section does not have sufficient staff available, officials stated that they may not have the ability to offer assistance in investigating matters and prosecuting cases. In these circumstances, officials said that the U.S. Attorney's Office would likely proceed with the case without the Section's assistance, except in recusal cases. The Public Integrity Section initiated at least one election fraud related matter in 11 of 12 regional federal circuits, as shown in figure 8. The Section initiated the most matters in the Sixth Circuit (10 of 33) and the Fifth Circuit (seven of 33). The Public Integrity Section filed election fraud related cases in four of the 94 federal judicial districts nationwide. These four districts are located in three states: Kentucky, Texas, and Massachusetts. Specifically, the Section filed 11 of its 19 cases in the Eastern District of Kentucky; five cases in the Southern District of Texas; two cases in the Western District of Kentucky; and one case in the District of Massachusetts. The Public Integrity Section prosecuted election fraud related cases with charges under six statutes. As shown in table 2, the Section most frequently brought charges under 52 U.S.C. § 10307(c), which was charged in 17 of the 19 cases the Section filed. This statutory provision prohibits giving false information for purposes of registering or voting, vote buying, and conspiring to vote illegally. Public Integrity Section officials stated the Section did not focus its efforts on particular types of election fraud, but vote buying (generally charged under 52 U.S.C. § 10307(c)) was the most frequent type of election fraud related crime the Section prosecuted during the period of our review.
Officials said vote buying is the most common type of election fraud related crime that has come to their attention in recent decades and noted that it tends to occur in communities that are more insular and isolated and have higher levels of poverty. For example, officials observed that in rural communities with high levels of poverty, some residents may be more vulnerable to vote-buying efforts due to their difficult circumstances or the power of local officials who seek to buy votes to provide or cut off needed services. Officials stated that matters and cases tend to be geographically concentrated because, while the Section does not have any formal initiatives in particular circuits or districts, they are in close contact with U.S. Attorneys' Offices nationwide and can offer additional assistance in those areas that may be more vulnerable to recurring or frequent election fraud.
Example of Public Integrity Section Election Fraud Prosecutions
Seven cases filed in the Eastern District of Kentucky in fiscal year 2003, in which 10 defendants were charged, concerned the 1998 primary election for multiple Knott County government positions and candidates, including county judge executive (the county executive) and county clerk. The 1998 primary election also included a contest for federal office (U.S. Senator). The presence of a candidate for federal office on a ballot is sufficient to establish federal jurisdiction under most election fraud related statutes, as the federal candidate's election could be, or could appear to be, tainted by the fraud.
U.S. Attorneys' Offices Initiated 525 Matters and Filed 185 Cases Related to Election Fraud from Fiscal Years 2001 through 2017
From fiscal years 2001 through 2017, U.S. Attorneys' Offices initiated more than 2.2 million criminal matters (i.e., investigations), of which 525 were election fraud related, or 0.02 percent of their overall matters. As shown in figure 9, U.S. Attorneys' Offices initiated between 11 and 65 election fraud related matters each year during this time period. U.S. Attorneys' Offices initiated the most election fraud related matters in fiscal years 2003 (44), 2004 (53), 2005 (65), and 2011 (46). The percentage of election fraud related matters of all matters initiated ranged from 0.01 percent to 0.06 percent. From fiscal years 2001 through 2017, U.S. Attorneys' Offices filed just over 1 million criminal cases. Of these, 185 cases were election fraud related, or 0.02 percent of their overall caseload. According to officials from the Executive Office for United States Attorneys (EOUSA), which provides guidance, direction, and oversight to the U.S. Attorneys' Offices, election fraud was one of the least frequent crimes addressed by U.S. Attorneys' Offices. In fiscal year 2017, the most frequent felony cases filed by U.S. Attorneys' Offices were for immigration, drugs, and violent crime offenses. Officials further noted that election fraud related cases were taken seriously and thoroughly investigated when facts supporting such charges were uncovered. As shown in figure 10, U.S. Attorneys' Offices filed the most election fraud related cases in fiscal years 2003 through 2005, and in fiscal years 2007 and 2017, with 15 or more cases filed each fiscal year. U.S. Attorneys' Offices filed fewer than five election fraud related cases during fiscal years 2001, 2002, and 2015. The percentage of election fraud related cases of all cases filed ranged from less than 0.01 percent to 0.03 percent. From fiscal years 2001 through 2017, U.S.
Attorneys' Offices initiated at least one election fraud related matter in 85 of the 94 federal judicial districts. As shown in figure 11, three districts cumulatively accounted for 145 out of 525 matters, or approximately 28 percent of all election fraud related matters initiated. Of these three, two judicial districts, the Southern District of Florida and the Eastern District of Kentucky, accounted for nearly one quarter of all election fraud related matters U.S. Attorneys' Offices initiated. About half of the 185 election fraud related cases filed by U.S. Attorneys' Offices occurred in three of the 94 federal judicial districts. As shown in figure 12, the Southern District of Florida filed 42 cases (23 percent), the Eastern District of Kentucky filed 36 cases (19 percent), and the Eastern District of Wisconsin filed 15 cases (eight percent). U.S. Attorneys' Offices filed the remaining cases (92 cases, or 50 percent) in 42 federal judicial districts; of these, 20 districts had only one election fraud related case during the time period. EOUSA officials said that there could be a number of reasons why cases occurred more frequently in some districts than others. These officials noted that individual U.S. Attorneys utilizing their prosecutorial discretion may have taken an interest in election fraud or encountered evidence of a series of election fraud related crimes that generated a number of matters or cases. For example, according to the respective U.S. Attorneys' Offices: In the Southern District of Florida, a 2004 case involving allegations of noncitizen voting resulted in the U.S. Citizenship and Immigration Services referring a series of additional similar investigations to the U.S. Attorney's Office; In the Eastern District of Kentucky, a drug investigation in 2003 revealed evidence of vote buying that led to a series of vote buying cases; and In the Eastern District of Wisconsin, 14 of the 15 cases filed were uncovered in a joint investigation regarding the results of the 2004 presidential election, which showed a discrepancy between the number of ballots counted and individuals voting in one Wisconsin county. That investigation ultimately determined the discrepancy was caused by clerical error, but also uncovered 10 individuals who voted despite being ineligible due to their felon status and four who voted more than once. U.S. Attorneys' Offices utilized approximately 100 different statutes in bringing charges in election fraud related cases. Table 3 shows the statutes charged in 15 or more election fraud related cases filed by U.S. Attorneys' Offices. The most frequently charged statute was 52 U.S.C. § 10307 (prohibited voting acts), charged in 52 cases, with subsection (c) (false information in registering or voting and vote buying) charged in 38 of those cases. The next three most frequently charged statutes were 18 U.S.C. § 371 (conspiracy), 18 U.S.C. § 1001 (false statements), and 18 U.S.C. § 611 (voting by noncitizens), each charged in 38 or more cases. EOUSA officials explained that U.S. Attorneys' Offices select charges based on the specific facts and circumstances of a case. These officials noted that the offices may use some statutes, such as 18 U.S.C. § 371 and 18 U.S.C. § 1001, more frequently in cases due to their generality, which makes them widely applicable to different types of criminal conduct.
Selected Data Sources on Moves, Deaths, and Convictions Used to Maintain Voter Registration Lists, and Their Reported Benefits and Limitations Each of the selected data sources we reviewed is one tool election officials may use to maintain their voter registration lists. These selected data sources are used to identify (1) registrants who move—U.S. Postal Service National Change of Address (NCOA), Interstate Voter Registration Crosscheck Program (Crosscheck), and returned mail; (2) deceased registrants—the public version of the Social Security Administration Death Master File (DMF) and state vital records; and (3) registrants with disqualifying felony convictions—U.S. Attorneys’ records on felony convictions. State and local election officials may use a variety of other databases or lists (data sources) to identify ineligible registrants who should be removed from voter registration lists, and state policies and procedures for using various data sources to identify and remove registrants from voter lists vary. Despite variations, election officials with whom we spoke stated that list maintenance—including the use of the selected data sources—provides benefits such as cost savings, smoother Election Day processes, reductions in administrative burden, and fewer opportunities for election fraud. Moreover, election officials told us that each of the selected data sources helps improve voter registration list accuracy, despite some limitations. For example, officials identified benefits from using these data sources, such as helping reduce the number of address errors on voter registration lists and helping identify and remove registrants who have moved outside of the election jurisdiction, are deceased, or have a disqualifying criminal conviction from voter registration lists. Officials also identified limitations with using these selected sources. In particular, three of the six selected data sources consist of administrative records collected for purposes other than voter registration, which can present some challenges when election officials use these sources to maintain their voter registration lists. For example, election officials noted that such data sources may inaccurately indicate that registrants moved unless election officials conduct additional work to verify the information. In addition, these data sources may not include the records for some registrants who are deceased and should be removed from the voter registration lists. Appendix III includes a description of a range of data sources states may use to maintain their voter registration lists. With regard to possible election fraud, state officials from all five selected states we visited noted that list maintenance activities in general help to identify or prevent election fraud because accurate and complete voter registration lists make it more difficult for individuals to commit fraud. Specifically, duplicate registrations—more than one registration for the same person across election jurisdictions—and ineligible registrations, such as those for deceased individuals, if present in voter registration lists, may provide opportunities for a person to vote more than once or vote using someone else’s identity. Thus, registration lists that contain one registration for each eligible registrant with accurate and current identifying information help to prevent election fraud from occurring. 
The majority of election officials we interviewed did not specify any one data source used to identify election fraud; however, state officials from Michigan and Oregon noted that the limited instances of election fraud in their states, in their view, are in part the result of their strong voter registration list maintenance efforts, which have helped to reduce opportunities for fraud. In using data sources as a tool for maintaining voter registration lists, state and local election offices utilize data-matching procedures by which attributes of one registration record are compared to attributes of another record from another database or list to identify registrants who should be removed from voter registration lists under the NVRA's removal categories. States are required to have computerized statewide voter registration lists, which allow election officials to conduct electronic data matching of their voter registration list to other databases or lists. These other databases or lists may include federal or state administrative records, interstate databases, and local lists or other information.
Information on Data Matching Procedures
Procedures for determining that a voter registration record is a "match" to another record may vary across states, local election offices, and interstate data matching programs. In general, a "match" should accurately identify the same individual across the two data sources being matched. However, data matching may result in improper indications of a match when a non-match should be indicated (false positives). False positive matches pose risks that election officials may improperly remove registrants from voter registration lists. Data matching can also result in improper indications of a non-match when a match should be indicated (false negatives), posing risks that election officials may fail to remove ineligible registrants from voter registration lists. According to a National Academy of Sciences report, the quality of the underlying data (from either the voter registration list or other data sources used for matching) may contribute to false positive or false negative matches. (National Academy of Sciences, Improving State Voter Registration Databases, The National Academies Press, 2010.) Further, matching procedures may differ with regard to how data in specific data fields are compared across databases to determine a match. For example, some procedures may require that the name from the voter registration list exactly match the name from the other data source (e.g., each letter, hyphen, space, or apostrophe must match). An exact match requirement would not accept as a match the name entries "Mary Jones-Smith" and "Mary Jones Smith," even if all other data fields match across data sources and the entries represent the same individual, thus resulting in a false negative match. Below we discuss in detail the selected data sources and their benefits and challenges, as identified by literature we reviewed and election officials with whom we spoke.
Data Sources Used to Identify Registrants Who Move
According to reports we reviewed, registrants who move from one election jurisdiction to another jurisdiction within the state or to another state account for the majority of ineligible registrants and duplicate registrations on voter registration lists. When individuals register to vote, their voter registrations are linked to their residential address.
This connection between a voter's registration and residence is intended to ensure reliable and accurate voter registration lists, and to ensure that voters only vote for races and ballot questions that affect the communities in which they live. According to the 2016 Election Administration and Voting Survey (EAVS), the most common reason for a registrant's removal from the rolls was cross-jurisdiction change of address (31.1 percent of removals), followed by registrants failing to respond to a confirmation notice sent as part of the NVRA process and subsequently not voting in the following two federal elections (26.1 percent of removals). As previously discussed, under the NVRA, data that indicate a registrant's change of address and a potential move can be used to start the address confirmation notice process, but cannot, on their own, result in the automatic removal of registrants from voter registration lists.
U.S. Postal Service National Change of Address (NCOA)
The NCOA database comprises change-of-address records with the names and addresses of individuals, families, and businesses who filed a change of address with the U.S. Postal Service. Election officials can access the NCOA data by obtaining a license to directly receive the data from the U.S. Postal Service or having their voter registration list processed by a licensed third-party service provider. Election officials in the five states we visited compare selected records or the entire voter registration list against the NCOA database at the state or local level and at varying frequency to identify registrants who have potentially moved and to start the address confirmation and registrant removal process. For example, Nebraska, Oregon, and Virginia state election officials said that they compare their statewide voter registration lists to NCOA on a biannual, monthly, and annual basis, respectively, to identify registrants who have potentially moved. In contrast, Florida and Michigan officials said they do not use NCOA data at the state level, though state laws provide local election officials the option of comparing their local jurisdiction's voter registration list to NCOA when they conduct list maintenance activities related to changes in address. Although initial data comparisons of NCOA with the voter registration lists can be conducted at either the state or local level, in all of the states we visited, when the results of the NCOA data matching indicated a potential move, local election officials managed the results of the confirmation notices that were sent to registrants to confirm their address. Local election officials subsequently updated addresses on the voter registration lists with responses they received from the confirmation notices, or flagged registrants for potential removal if the registrants did not respond to the confirmation notice or the notice was returned undeliverable. State election officials from Nebraska, Oregon, and Virginia, and local officials from five of the jurisdictions we visited, reported that the primary benefit to using NCOA data is that it helps them to maintain accurate voter registration lists by (1) providing current and accurate addresses for their registrants, and (2) identifying registrants who have potentially moved and no longer reside in the voting jurisdiction.
For example, local officials in one jurisdiction reported that they mailed approximately 60 percent fewer confirmation notices in 2017 compared to 2010 due to improvements in the accuracy of address information in their voter registration lists after using NCOA data during this period. Local officials in another jurisdiction reported they used the NCOA data as part of a one-time list maintenance effort, which generated over 100,000 confirmation mailings and resulted in the removal of a number of ineligible voters who no longer resided in the jurisdiction. Election officials also noted that using NCOA data to update voter registration lists may result in administrative efficiencies, such as a more efficient election administration process and cost savings. For example, state officials from Oregon, a vote-by-mail state, said that NCOA data help them to maintain clean voter registration lists by providing current and accurate addresses for their registrants, which reduces mailing costs incurred from sending ballots to individuals who have moved out of the state. Further, local officials from one jurisdiction said that using NCOA data helped to reduce the number of address errors in the poll books and, as a result, decrease the number of registrants voting by provisional ballots on Election Day. A report we reviewed and election officials we interviewed cited a number of limitations to using NCOA data for voter registration list maintenance purposes. Specifically, in 2015 the U.S. Postal Service Office of the Inspector General reported that the NCOA data do not capture all change of address information because people do not always notify the U.S. Postal Service when they move. As a result, election officials may not be able to identify registrants who do not report changes of address to the U.S. Postal Service. Another limitation election officials cited is that an indication of a change in address in NCOA data does not necessarily reflect a change in residence, which is what determines the eligibility of a registrant to vote in a given election jurisdiction. According to U.S. Postal Service officials, the main purpose of the NCOA database is to maintain current and updated addresses for mail delivery, and a change of address form may reflect a change in mailing address rather than a permanent change in residence. Nebraska, Oregon, and Virginia state officials and officials from three local jurisdictions reported that they have difficulty determining whether a registrant's change in address as indicated in the NCOA data is a permanent change in residence or a change in mailing address due to a temporary move or other mailing needs. For example, military personnel may prefer to maintain their voter registration at their home of record. Upon assignment to another duty location, they may file a change of address with postal authorities for mailing purposes, even if it is not a change of residence for voting purposes. Officials from two local jurisdictions reported similar issues for individuals who retain residency in the jurisdiction while attending college outside the jurisdiction. Further, registrants who had vacation homes outside the jurisdiction in the summer or winter months could be identified as registrants who potentially changed residences on a permanent basis using the NCOA data, according to Nebraska election officials.
As a result of the potential difference between mailing and residential addresses, Virginia state election officials and election officials from two local jurisdictions reported that registrants may be inaccurately flagged for confirmation mailings. They told us that registrants would not be automatically removed after being flagged for confirmation mailings; however, they would be required to respond to the mailing or vote in one of the next two federal elections, as prescribed by the NVRA, to stay on the voter registration list. Officials also told us that they may have to take additional steps to use NCOA data to identify registrants who potentially moved and to update voter registration lists. For example, officials from one local jurisdiction that matches its county voter registration list to NCOA data noted that it can take a significant amount of time and resources to standardize their voter registration data to the NCOA format and to calibrate their data matching procedures to avoid false-positive matches. Such false-positive matches would inaccurately indicate an address change. These local officials said that they take steps to ensure that they do not get an indication of a change in address based on the standardization of an address (e.g., a "Street" to "ST" difference in address between the two data sources). Oregon state election officials and officials from one local jurisdiction further noted that they may have to do additional work to determine the appropriate election jurisdiction to which the address in the NCOA data should be assigned. Officials explained that some street addresses or buildings, like apartment complexes, cross election jurisdiction boundaries, which makes it difficult to determine within which election jurisdiction an address or a specific unit of an apartment complex falls. Oregon state officials said that local tax assessor data may help election officials reconcile these jurisdictional boundary issues.
Interstate Voter Registration Crosscheck Program (Crosscheck)
Crosscheck is an interstate data sharing program that compares participating states' voter registration lists against one another to identify registrants who are registered in more than one state, which may indicate a move, and to identify individuals who may have voted in more than one state. The Crosscheck program began in 2005 with four participating states—Kansas, Iowa, Missouri, and Nebraska—and had grown to include 31 participating states by 2016. To participate in the Crosscheck program, each state signs a memorandum of understanding upon joining the program. Then, in January of each year, member states provide information such as full name, date of birth, and address for registered voters, as well as turnout data for the previous calendar year to Crosscheck program administrators—the Kansas Secretary of State's office—in a prescribed format. Using the information provided by member states, Crosscheck program administrators return to each participating state a list of registrations in that state that share the same first name, last name, and date of birth with a registration in another participating state. Crosscheck results also include other identifying information that varies depending on whether the member states provided the data. There are no membership or annual fees associated with joining or participating in Crosscheck.
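The description above implies a straightforward key-based comparison: registrations from each participating state are keyed on first name, last name, and date of birth, and any key that appears on more than one state's list is flagged as a potential duplicate. The Python sketch below is our illustration of that general technique; Crosscheck's actual software and data formats are not described in the sources we reviewed, and all names in the example are invented.

```python
# Illustrative sketch (ours) of interstate matching on a first name,
# last name, and date-of-birth key, as the program description implies.
# Crosscheck's actual implementation is not public; data are invented.
from collections import defaultdict

def potential_duplicates(state_lists: dict) -> dict:
    """Return match keys that appear on more than one state's voter list."""
    by_key = defaultdict(list)
    for state, records in state_lists.items():
        for first, last, dob in records:
            by_key[(first.upper(), last.upper(), dob)].append(state)
    # Keep only keys seen in more than one state.
    return {key: states for key, states in by_key.items()
            if len(set(states)) > 1}

print(potential_duplicates({
    "NE": [("Mary", "Jones", "1960-04-12")],
    "KS": [("Mary", "Jones", "1960-04-12")],  # same mover, or two people
}))
```

Note that the sketch flags only the key itself; deciding whether a flagged pair represents one registrant who moved or two different people requires the further review that officials describe below.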
Of the states we visited, Michigan, Nebraska, and Virginia participated in the Crosscheck program for multiple years, while Oregon and Florida each participated once, in 2012 and 2013, respectively. Oregon and Florida state officials explained that they did not use the Crosscheck data they received to conduct any voter registration list maintenance activities. Michigan, Nebraska, and Virginia state officials said that they received and processed Crosscheck data at the state level before sending a subset of results to the local jurisdictions to conduct additional verification and list maintenance activities. According to some state and local election officials we interviewed, Crosscheck data can be beneficial as one of the data sources used to identify registrants who may have moved out of state or whose moves are not captured by other data sources. Specifically, officials from four local jurisdictions told us that using Crosscheck data in conjunction with other data sources, such as the NCOA, helps keep voter registration lists accurate. Further, state election officials from Virginia and election officials from one jurisdiction reported that the fact that neighboring states participate in the Crosscheck program is particularly beneficial to them because their residents are more likely to move to neighboring states, and the Crosscheck data may capture the change in residence if these residents also registered to vote in the neighboring states. Nebraska state officials also noted that Crosscheck data complement the NCOA change of address data. In particular, Crosscheck data can provide information on registrants who did not record change of address information under NCOA, who had not responded to a notice sent as a result of NCOA data and had moved a second time, or whose moves were not recent and may not be captured in the most recent change of address information provided by NCOA. Nebraska state officials noted that the Crosscheck data were particularly helpful in this manner the first year that Nebraska participated in Crosscheck and whenever a new state joined the program. In addition, election officials from Nebraska and state officials from Michigan identified Crosscheck data on possible instances of double voting as a source that could help determine whether an individual might have voted in two or more states. For example, officials from two local jurisdictions said that they identified a few potential instances of double voting using Crosscheck data. They referred these instances of potential double voting to their Secretary of State. According to reports we reviewed and state officials we interviewed in all five states we visited, Crosscheck data contain numerous matches when a non-match should be indicated (false-positive matches) because the program uses matching criteria that rely on data elements, such as names and birth dates, that may be shared by more than one person. Specifically, the Crosscheck program matches participating states' voter registration information by comparing registrants' first name, last name, and date of birth. However, according to reports we reviewed, in groups as large as statewide (or multistate) voter registration lists, the odds are high that two different registrants will share the same name and birth date.
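A rough back-of-the-envelope calculation illustrates why. The numbers below are purely illustrative assumptions, not figures from the reports we reviewed, but they show how chance alone generates matches at this scale:

```python
# Birthday-problem arithmetic for name-and-birth-date matching. Assumes,
# for illustration only, that adult registrants' birth dates are spread
# roughly evenly over 80 years.
DISTINCT_BIRTH_DATES = 365 * 80  # about 29,200 plausible dates

def expected_chance_matches(count_state_a: int, count_state_b: int) -> float:
    """Expected cross-state pairs sharing a birth date purely by chance,
    given how many registrants in each state carry an identical first
    and last name."""
    return (count_state_a * count_state_b) / DISTINCT_BIRTH_DATES

# If each of two large states has 300 registrants with the same common
# name, chance alone yields about 3 coincidental "duplicate" matches
# for that one name, before any real duplicates are counted.
print(round(expected_chance_matches(300, 300), 1))  # 3.1
```

Summed over the many common names in statewide lists, such coincidences can account for large volumes of false-positive matches.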
Nebraska state officials noted that when there were four participating Crosscheck states in 2005, a match indicating a duplicate registration was more likely to be a valid match (rather than a false positive); however, as the number of participating states increased, the quality of the matched results dropped substantially. Oregon state officials told us that they submitted data to the Crosscheck program in 2012 and that many of the resulting 20,000 potential duplicate registration matches were false-positive matches. Florida state officials also expressed concern about the reliability and quality of the matching criteria, in addition to the number of false-positive matches in the data they received. In addition, a study on double voting found that Crosscheck data may not provide enough information for election officials to determine whether a match indicating potential duplicate registrations or double voting is valid. As previously discussed, Crosscheck results for potential duplicate registrations are based on a match of the first name, last name, and date of birth. Crosscheck results provided to participating states may also include additional information—such as registrants' middle name, suffixes, registration address, and the last four digits of a registrant's Social Security number, if available—which election officials can use to help determine whether a match is a valid indication of a duplicate registration. In particular, the last four digits of the Social Security number can help distinguish between two distinct individuals who happen to share the same first name, last name, and date of birth. Using Crosscheck data returned to Iowa in 2012 and 2014, the study found that two-thirds of potential duplicate registrations identified by Crosscheck data did not include the last four digits of the Social Security number associated with at least one of the registration records in the match. Thus, the study concluded that more often than not, an election administrator would not have enough information to distinguish which matches are valid indications of duplicate registrations. Further, Nebraska state officials noted that the reliability of the data provided by participating states can affect the reliability of Crosscheck information on double voting. For example, Nebraska state officials reported that one state incorrectly sent Crosscheck its 2014 voting history data in the year participating states were to provide their 2016 data to the Crosscheck program. These officials noted that the incorrect voter history data made it appear as though many people had double voted. Nebraska officials said that once they identified this issue, they omitted any matched results involving the state that had provided the 2014 data from their review of registrants who potentially double voted. According to the Crosscheck 2014 Participation Guide, processing the duplicate registrations and researching possible double votes require a commitment of time from state and local officials. State election officials from Michigan, Nebraska, Oregon, and Virginia and officials from two local jurisdictions told us that they have spent a significant amount of time and staff resources to review the Crosscheck data and determine which matched records represent valid matches.
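The sketch below illustrates the kind of validation logic this research implies, using the last four digits of the Social Security number as the deciding field when both records carry it. The record layout is a simplifying assumption, not the Crosscheck program's actual output format:

```python
# Validating a name-and-birth-date match with last-4 SSN when available.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatchRecord:
    first: str
    last: str
    dob: str             # e.g., "1950-01-31"
    ssn4: Optional[str]  # last four SSN digits; often missing in practice

def match_status(a: MatchRecord, b: MatchRecord) -> str:
    if (a.first, a.last, a.dob) != (b.first, b.last, b.dob):
        return "no match"
    if a.ssn4 is None or b.ssn4 is None:
        # Without SSN4 on both records, officials cannot confirm the
        # match from the data alone and must research it manually.
        return "needs manual review"
    return "valid duplicate" if a.ssn4 == b.ssn4 else "false positive"
```

Under logic like this, the two-thirds of Iowa matches that lacked SSN4 on at least one record would fall into the manual-review category, consistent with the study's conclusion that officials often lack enough information to distinguish valid matches.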
State officials from the three states that participated in Crosscheck for multiple years (Michigan, Nebraska, and Virginia) said they implemented additional criteria to refine the Crosscheck data they received in order to identify valid matches of potentially duplicate registrations and send confirmation notices, in accordance with NVRA requirements. For example, Michigan state officials said that they further filter the Crosscheck results they receive to determine valid potential matches of duplicate registrations. Specifically, they filter Crosscheck results to include duplicate registrations where the registrants' first names, middle initials, last names, dates of birth, and last four digits of Social Security numbers are an exact match. In addition, state election officials review the registration dates provided in the Crosscheck results to confirm that the registrant's most recent voter registration activity occurred outside of Michigan before providing a refined list of valid potential matches to responsible local officials who conduct the address confirmation process. In Virginia's June 2017 Annual List Maintenance Report, state officials reported that they also review whether the last four digits of the Social Security number on Crosscheck results they receive match, to determine valid potential matches of duplicate registrations. While election officials from two jurisdictions we visited identified Crosscheck as a source that helped them identify potential instances of election fraud, such as instances of double voting, Nebraska state officials also noted the data were not generally reliable for these purposes without additional investigation. According to one study we reviewed, Crosscheck data on both double voting and duplicate registrations yield a high number of false-positive matches. Additionally, in another report, the New Hampshire Department of State found that of approximately 90,000 match records of duplicate registrations New Hampshire received from Crosscheck in 2017, only a small portion of the records was considered potential instances of double voting. Returned Mail Election officials can use the returned mail from targeted list maintenance mailing efforts and returned "undeliverable" mail from other mailings to registrants to send address confirmation notices to registrants who have potentially moved outside the election jurisdiction. These confirmation notices are subsequently used to update addresses on the voter registration lists with results of the confirmation mailing or flag registrants for potential removal. Specifically, targeted list maintenance mailing efforts may include sending a notice to all or a group of registrants in order to determine whether the registrant may have moved from the address on record. For example, Florida law states that local election officials can send notices to registrants who have not voted in the last 2 years and who have not made a written request that their registration be updated during that 2-year period. Targeted list maintenance mailing efforts may result in either a response from the registrants or returned undeliverable mail. Returned undeliverable mail occurs when the U.S. Postal Service cannot deliver mail to the address specified on the label, indicating a potential change in the registrant's address and therefore residence.
In addition to targeted list maintenance mailings, election offices may send other notices—such as sample ballots or information about changes in polling locations—which may also generate returned undeliverable mail. See figure 13 for an example of other voter registration notices (not part of a targeted list maintenance effort) that may be returned to election officials as undeliverable and therefore indicate a potential move. Election officials from all five states we visited use returned mail from targeted list maintenance mailing efforts, or from other mailings to voters, to update registrants' addresses or to send a notice to the registrants to confirm their address. According to Nebraska state election officials, returned undeliverable mail is a valuable tool for identifying registrants who may have moved. Local election officials we spoke with also said that returned undeliverable mail can provide them with a timely indication that a registrant has potentially moved. Furthermore, election officials told us that because mailings can be conducted on a periodic basis, processing returned mail at the time of receipt can help election officials distribute the list maintenance workload throughout the year. Specifically, election officials from four local jurisdictions said that returned undeliverable mail from voter notices sent to registrants periodically throughout the year is usually a more recent indicator of registrants' changes in address compared to large-scale list maintenance activities such as an annual mailing based on NCOA data. Further, officials from one local jurisdiction also noted that staying on top of returned undeliverable mail throughout the year helps reduce the workload during the state's annual NCOA confirmation mailing, which would otherwise be too big to manage if the jurisdiction only processed address changes once a year. According to reports we reviewed as well as officials we interviewed, returned undeliverable mail may not be a reliable indicator that a person has moved, which can result in an inflation of the number of registrants who are flagged as inactive. For example, in 2015, the U.S. Postal Service Office of the Inspector General reported that approximately 60 percent of returned undeliverable mail is a result of the mail not getting delivered by postal service employees or insufficient address information on the mail, as opposed to the registrant having moved without notifying the U.S. Postal Service. Further, according to one report we reviewed, a registrant may not have received the mailing, or the mailing may be returned undeliverable, for a number of reasons: the registrant may be temporarily away from his/her permanent residence; may not be listed on the mailbox of the residential address, such as when the registrant shares an address with roommates or family members; or may live in a non-traditional residence, such as a homeless shelter or a government building that will not accept mail for residents. In addition, Virginia state officials noted that using returned undeliverable mail can inflate the number of registrants who are flagged as inactive and can also result in additional costs. Specifically, these state election officials told us that they usually have a low response rate from registrants for mailings, including targeted mailings for list maintenance purposes or confirmation mailings. Registrants who are sent a confirmation notice and do not respond are then generally flagged as inactive.
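To make the sequence concrete, the sketch below models the confirmation safeguard described above in simplified form. The statuses and transition rules are a minimal abstraction of the NVRA process; actual statuses and triggers vary by state law.

```python
# Simplified model of the NVRA confirmation-notice safeguard.
from enum import Enum, auto

class Status(Enum):
    ACTIVE = auto()
    INACTIVE = auto()  # confirmation notice sent; no response yet
    REMOVED = auto()

def advance(status: Status, responded: bool, voted: bool,
            federal_general_elections_missed: int) -> Status:
    """A registrant flagged by returned mail is not removed unless they
    neither respond to the confirmation notice nor vote in the two
    federal general elections that follow it."""
    if status is Status.INACTIVE:
        if responded or voted:
            return Status.ACTIVE
        if federal_general_elections_missed >= 2:
            return Status.REMOVED
    return status

# No response and two missed federal general elections leads to removal.
print(advance(Status.INACTIVE, responded=False, voted=False,
              federal_general_elections_missed=2))  # Status.REMOVED
```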
Nebraska state officials said that having inactive registrants on the registration lists has resulted in costs to local jurisdictions in the past because local officials were formerly required to mail a ballot to all registered voters, including those that were on the inactive list, when a special election was conducted by mail. Further, local election officials in one state said that inflated numbers of inactive registrants on voter registration lists may result in fewer voting precincts than needed, to the extent that election officials determine the number of precincts based only on the number of active registrants on the lists. Data Sources Used to Identify Deceased Registrants The NVRA provides for states to remove registrants from registration lists by reason of death. This may be carried out by the state's department of elections, local election jurisdictions, or a combination of the two, as provided by state law. According to the 2016 EAVS, states removed over 4 million registrants due to death from November 2014 through November 2016, which accounted for 24.6 percent of the total number of registrants removed from voter registration lists. According to a National Association of Secretaries of State 2017 report, in most states, information on deceased registrants is provided by a state office of vital statistics, the state department of health, or a similar state-level entity. Additionally, the report notes that a number of states permit election officials to remove a deceased registrant using information from sources such as obituary notices, copies of death certificates, and notification from close relatives. Social Security Administration's Public Death Master File (DMF) The public version of the DMF contains nearly 101 million records of deaths reported to the Social Security Administration from 1936 through March 1, 2019. It is a subset of the Social Security Administration's full death file; it does not include state-reported death data, but includes other death data reported by family members, funeral directors, post offices, financial institutions, and other federal agencies such as the Department of Veterans Affairs and Centers for Medicare & Medicaid Services. The public DMF accounts for about 19 percent fewer death records than the full death file. The Social Security Act limits the sharing of the full death file to federal benefit-paying agencies and other specifically enumerated purposes. Generally, DMF records include the Social Security number, full name, date of birth, and date of death of deceased individuals. Agencies or other entities, including election administrators, having a legitimate business purpose for the information can purchase the DMF from the National Technical Information Service of the U.S. Department of Commerce, which is authorized to distribute the DMF. Subscribers to the DMF are required to purchase monthly or weekly updates to the DMF to ensure that the records are up-to-date. Of the five states we visited, Florida, Michigan, Oregon, and Virginia compare their statewide voter registration list against DMF data on a regular basis, and Nebraska used the data once, in 2014, to identify and remove registrants who had died. Specifically, Florida and Michigan directly receive the DMF data and conduct data-matching with their state's voter registration list to identify deceased registrants on a weekly basis. Oregon and Virginia use DMF data through their participation in the Electronic Registration Information Center (ERIC) program.
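A minimal sketch of the recurring matching step these states described follows, assuming hypothetical field names and that both files carry a full Social Security number (the identifiers actually available for matching vary by state):

```python
# Hypothetical weekly DMF match against a voter registration list.
# Valid matches go to officials for review and cancellation under
# state procedures; nothing is removed automatically here.
def flag_potentially_deceased(voter_list, dmf_records):
    by_ssn = {rec["ssn"]: rec for rec in dmf_records}
    flagged = []
    for voter in voter_list:
        dmf = by_ssn.get(voter.get("ssn"))
        # Require last name and date of birth to agree as a secondary
        # check against data-entry errors in either source.
        if dmf and (voter["last"], voter["dob"]) == (dmf["last"], dmf["dob"]):
            flagged.append((voter, dmf["date_of_death"]))
    return flagged

voters = [{"ssn": "123456789", "last": "DOE", "dob": "1940-05-02"}]
dmf = [{"ssn": "123456789", "last": "DOE", "dob": "1940-05-02",
        "date_of_death": "2018-11-30"}]
print(flag_potentially_deceased(voters, dmf))
```

Requiring agreement on more than one identifier is one way to reduce the risk of flagging the wrong registration, consistent with officials' emphasis on review before removal.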
On a monthly basis, ERIC provides Oregon and Virginia state election officials a report on their deceased registrants based on matches of DMF data with these states’ voter registration lists. States’ procedures for removing deceased registrants from the voter registration list vary, depending on requirements outlined in state law. For example, Virginia state law provides that local election officials have the authority to determine the qualification of an applicant for registration. Further, the Virginia law requires election officials to send a cancellation notice once a voter registration record is cancelled due to death. As a result, Virginia state officials forward all valid matches of potentially deceased registrants to the responsible local official who reviews the match, marks the registrant as deceased in the voter registration database, cancels the registration, and sends a cancellation notice. In contrast, Michigan law allows either state or local officials to cancel a voter registration upon receipt of reliable information that the registrant is deceased. In addition, according to Michigan state election officials we met with, there is no legal requirement for officials to send notices of cancellation due to death. Michigan state election officials told us that they cancel voter registrations based on data-matching with DMF data at the state level. State election officials from all five selected states and officials from one local jurisdiction reported that they have found DMF data to be useful for identifying registrants who have died. Further, state election officials from four selected states stated that the DMF data are accurate and reliable. For example, officials said that they have experienced very few instances where they have had to reverse cancelled registrations because a registrant was incorrectly identified as deceased based on DMF data. Nebraska and Oregon state officials also noted that DMF data are particularly useful for identifying registrants who died out of state. Officials said that out-of-state death information would not be captured by other data sources they use, such as state vital records data. In addition, Michigan state officials noted that historically they would receive notification of a person’s death closer to the date of death when using DMF data than when using death data from the state vital records office. We previously reported that state-reported deaths, which the DMF does not include, are expected to account for a larger proportion of all Social Security Administration death records over time. As a result, we reported that agencies that purchase the DMF, including election offices, will likely continue to access fewer records over time as compared with those government agencies that obtain the Social Security Administration’s full death file. We also reported that because the deaths reported by states are generally more accurate than other death information reported to the Social Security Administration by post offices, financial institutions, and other government agencies, it is likely that agencies using the DMF could encounter more errors than agencies using the Social Security Administration’s full death file. According to Social Security Administration officials, Social Security death data are accurate when used to administer the Social Security Administration benefit programs, which includes removing deceased individuals from the beneficiary rolls and informing surviving spouses and children of their eligibility for benefits. 
Virginia state officials further noted that DMF death information can be less timely in identifying an individual as deceased when compared to state death records because state records are collected during the death certification process, while the Social Security Administration relies on the transmission of information after the death certification from other entities, such as other government agencies, to identify an individual as deceased. State Vital Records Election officials can also use state vital records to identify and remove registrants who are deceased from their voter registration lists. Due to the federal requirement for state election officials to coordinate with the designated state agency responsible for compiling records of deaths, most states receive state-level information on deceased registrants from their state office of vital statistics. State death records are collected electronically by most states and maintained in each state's Electronic Death Registration System. As of December 2018, 46 states, the District of Columbia, and Puerto Rico used an Electronic Death Registration System to collect and maintain death data within their jurisdiction. All five states we visited receive data on deceased individuals at varying intervals from their state vital records office and match these records to the statewide voter registration list to identify and remove deceased registrants. For example, on a daily basis, Florida state election officials receive state death data electronically from the Florida Bureau of Vital Statistics. They use the information to identify potentially deceased registrants and provide a list of these individuals to local election officials. Nebraska state election officials receive state death data from their state department of health on a weekly basis, Oregon and Virginia receive death information from their respective state departments of health on a monthly basis, and Michigan officials said they receive the information periodically, on either a weekly or bi-weekly basis. According to state election officials from Florida, Nebraska, Oregon, and Virginia and local election officials from four of the jurisdictions we visited, state vital records on deceased individuals are generally accurate and reliable, in part because state vital records data are reviewed and validated. Specifically, state vital records data on deceased individuals are linked to information on the death certificate, which is validated by authorized persons, such as physicians and funeral directors, during the death certification process. Virginia state officials said that, in comparison, other sources such as the Social Security DMF data may include deaths reported by entities such as post offices and financial institutions that are not directly linked to the death certification process. Additionally, officials from one jurisdiction told us that state death records are helpful in identifying people who died in another jurisdiction in the state. Further, officials from this jurisdiction noted that in the past they reviewed obituaries to identify deceased registrants, but that they have seen a decline in the use of obituaries to announce deaths, and state death records help fill the information gap previously provided by obituaries. Nebraska state officials also noted that state death records can help prevent fraudulent registrations because state officials are able to check new registrations against death records received from the state health department.
Nebraska state officials and officials from two local jurisdictions said that one limitation of state death records is that they generally only include information on deaths that have occurred in the state, and as a result election officials lack death records for residents who died out of state. From our interviews with the state vital records officials in the states we visited and information we reviewed on national death sources, we learned that, in some states, state death records may include information on deaths that occurred out of state, through the state’s participation in interstate data exchanges. Additionally, while some state officials found state death records timely for updating voter registration lists, Michigan state officials said their state death records were not as timely as DMF data. Specifically, Michigan state election officials said they used to receive notification approximately six months after a person’s death when using state death records, compared to within two weeks of death using DMF data. Officials explained that the lag in the death notification when using the state death records was due to low participation rates in the state’s Electronic Death Registration System when the system was first implemented. Michigan state election officials noted that state death records have improved and are timelier as the participation rate in the state Electronic Death Registration System has increased in recent years. Oregon state officials also noted that state death records may be less timely than the data counties receive from their local health departments, and thus local election officials may have received notice of an individual’s death from the county health department prior to receiving the state vital records data. Data Source Used to Identify Registrants with Disqualifying Felony Convictions State laws regarding the voting eligibility of individuals with a felony conviction vary. In some states, individuals who were previously convicted of a felony are not permitted to vote unless they are pardoned, or their voting rights are specifically restored by the government; in other states, the right to vote is reinstated automatically at the end of the individual’s sentence or after a designated period of time following the end of the sentence. Additionally, in some states, individuals with felony convictions may vote if they are on probation or have been granted parole; and, in two states, felons are allowed to vote even while incarcerated. Election officials are generally required to remove registrants with a felony conviction from voter registration lists, in accordance with state law. U.S. Attorneys’ Records on Felony Convictions U.S. Attorneys are required by law to notify the states’ chief election officials of felony convictions in federal court. The notices must contain a person’s name, age, residence address, date of entry of the judgment, a description of the offenses of which the individual was convicted, and the sentence imposed by the court. Election officials from all five states we visited said that they receive records from U.S. Attorneys on residents who are convicted of a federal felony. Florida, Nebraska, and Virginia use this information to remove registrants from their voter registration lists given the nature of their state laws, which restrict voting eligibility after a felony conviction until rights are restored or for a period after completion of the sentence. 
Michigan and Oregon prohibit individuals from voting while serving their sentences after conviction, but voting rights are automatically reinstated once a person is released from prison. As such, state officials from Michigan told us that they do not use U.S. Attorneys’ records to remove voters from their voter registration lists. Oregon officials noted they use U.S. Attorneys’ records on federal felony convictions to change a registrant’s status to “inactive.” Election officials from three states in our review that use U.S. Attorneys’ felony conviction records to remove registrants from voter registration lists said that this information was valuable, as they would not be able to acquire information about federal convictions from state sources. While federal conviction information can be helpful to election officials, an official from one local jurisdiction said that it can be difficult to determine whether the individual identified by a U.S. Attorney’s Office as having a federal conviction is the same person as the registrant. This is because criminals may have used aliases or provided incorrect Social Security numbers when registering to vote, which results in a less confident match. In addition, the information state and local officials receive on federal convictions is not required to include an individual’s projected date of release or date of sentence completion, which state and local officials from Florida and Nebraska said could help them determine whether the registrant is ineligible to vote and thus should be removed from voter registration lists. This makes it difficult for election officials to determine if the registrant’s sentence was already completed by the time they receive the information. In Nebraska, where voting rights are reinstated two years after a sentence is completed, election officials said it is initially difficult to know whether the individual’s voter registration is valid without the date of release or sentence completion. To mitigate limitations related to the lack of a projected release date or sentence completion date, Florida election officials said that they review case judgments which provide the details of the case, including date of sentence completion, to determine if the registrant’s sentence was completed and then check if the registrant’s rights were restored. Nebraska state election officials said they review court records and also noted that they would contact the local U.S. Attorney’s Office to obtain the federal release date for a particular registrant. Agency and Third Party Comments We provided a draft of this report to DOJ, the U.S. Postal Service, the Social Security Administration, the Election Assistance Commission, the Crosscheck program, and election offices in the five states and ten local jurisdictions we visited. DOJ, the U.S. Postal Service, the Election Assistance Commission, and the Crosscheck program did not provide written comments. The Social Security Administration submitted a letter noting that it did not have any substantive comments, which is reproduced in appendix IV. We incorporated technical comments from DOJ, the U.S. Postal Service, the Social Security Administration, Crosscheck, and state and local officials as appropriate. 
We are sending copies of this report to the Attorney General, the Postmaster General, the Social Security Administration, the Election Assistance Commission, election offices in the five selected states and ten local jurisdictions that participated in our research, appropriate congressional committees and members, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix V. Appendix I: National Voter Registration Act of 1993 Cases Filed by Private Parties In addition to the Department of Justice’s (DOJ) role in enforcing the National Voter Registration Act of 1993 (NVRA), the law allows a private party (a person or organization) who is aggrieved by a violation of NVRA provisions to bring a civil action against a state or local agency responsible for voter registration. In some cases, DOJ may participate in these private party cases by intervening on behalf of the plaintiff (as a plaintiff intervenor) or defendant, or by filing an amicus brief. The NVRA includes provisions that focus on both increasing opportunities for voter registration and improving voter registration list maintenance. Table 4 includes a summary of these provisions. Methodology To identify cases filed by private parties that included a claim under the NVRA, we searched an online legal database (Lexis Advance) for U.S. Circuit Courts of Appeals decisions from fiscal years 2008 through 2018 that contained the term “National Voter Registration Act.” We reviewed the decisions and also obtained and reviewed related case documents, including district court decisions, dockets, and complaints, to determine whether a claim had been filed under the NVRA and the nature of the claim, among other case information. We focused on cases that reached the federal appellate level because decisions issued by the U.S. Circuit Courts of Appeals create binding precedent for all of the districts in that circuit, among other considerations. Summary of NVRA Cases Filed by Private Parties that Reached the U.S. Circuit Courts of Appeals from Fiscal Year 2008 through Fiscal Year 2018 We identified 19 cases that were filed by private parties with claims under the NVRA that reached the U.S. Circuit Courts of Appeals (federal appellate level) from fiscal years 2008 through 2018. Eleven of the 19 cases included claims that were related to NVRA provisions that require states to provide registration opportunities. Six cases included claims related to the NVRA requirement to remove voters from registration lists under specified conditions (list maintenance). Registration Opportunity Cases Private parties filed 11 cases involving claims under the NVRA’s registration opportunity provisions that reached the U.S. Circuit Courts of Appeals. 
We reviewed the claims in each of the 11 cases and found the following: five of the 11 cases involved a claim under section 5 related to voter registration opportunities at motor vehicle offices; four of the 11 cases involved a claim under section 7 related to registration opportunities at public assistance offices; two of the 11 cases involved a claim under section 6 related to mail-in registration application forms; and one of the 11 cases involved claims under section 8 related to the requirement that states register voters whose applications are received at least 30 days before an election. List Maintenance Cases Private parties filed six cases involving list maintenance claims under the NVRA that reached the U.S. Circuit Courts of Appeals. NVRA list maintenance cases may involve two types of allegations under section 8: (1) in conducting a required program to remove ineligible voters from the voter registration list, a state or local jurisdiction did not incorporate certain safeguards, with the potential effect of unlawfully removing eligible voters; and (2) a state or local jurisdiction did not have an adequate program to remove ineligible voters from the voter registration list. Five of the six cases included a claim under section 8 related to the potential unlawful removal of voters from voter registration lists. The sixth case included a claim under section 8 related to the inadequate removal of ineligible voters from voter registration lists. DOJ Participated in Nearly Half of All NVRA Private Party Cases that Reached the U.S. Circuit Courts of Appeals from Fiscal Years 2008 through 2018 DOJ submitted an amicus brief or statement of interest in nine of the 19 NVRA cases filed by private parties that reached the U.S. Circuit Courts of Appeals from fiscal years 2008 through 2018. Five of the nine cases in which DOJ participated involved issues related to registration opportunities: DOJ participated in all four of the cases that included a claim under section 7 related to registration issues involving public assistance offices. For example, in one case, plaintiffs alleged that the state of New Mexico failed to provide voter registration forms to applicants for public assistance who did not decline, in writing, to register to vote. DOJ submitted a brief in support of the plaintiffs. DOJ participated in one case that included a claim under section 6 related to mail-in voter registration application forms. DOJ also participated in one case under section 8 that related to the public disclosure of records concerning voter registration list maintenance activities. The remaining three cases involved issues related to list maintenance, specifically allegations that an election jurisdiction's list maintenance program did not have appropriate safeguards to protect against the unlawful removal of eligible voters. For example, in one case, plaintiffs alleged that the state of Ohio violated the NVRA by using failure to vote as the sole trigger to start the confirmation process for removing voters from registration rolls based on a change of residence. In 2016, DOJ filed an amicus brief in support of the plaintiffs. In 2017, the case was appealed to the U.S. Supreme Court, and the department reversed its original position and filed a brief supporting the state's list maintenance practices. In June 2018, the Supreme Court upheld Ohio's process for removing voters on change-of-residence grounds and ruled that failure to vote could serve as evidence that a registrant had moved.
Appendix II: Cases with National Voter Registration Act of 1993 Claims Filed by the Department of Justice, Voting Section Within the Department of Justice (DOJ), the Civil Rights Division's Voting Section enforces the civil provisions of federal laws that protect the right to vote, including the National Voter Registration Act of 1993 (NVRA), the Help America Vote Act, the Voting Rights Act of 1965, and the Uniformed and Overseas Citizens Absentee Voting Act, among others. From fiscal years 2001 through 2017, the Voting Section participated in 234 cases, including 14 cases involving NVRA claims in which the Section was the plaintiff or the plaintiff intervenor. Cases with NVRA claims included allegations related to providing registration opportunities for voters, and allegations related to the requirement to remove voters from registration lists under specified conditions (list maintenance). Table 5 below provides a brief summary of the allegations in each case. Appendix III: Data Sources and Site Selection Methods To address how selected data sources are used at the state and local level and to obtain perspectives on how these sources help maintain voter registration lists, we selected and reviewed six commonly received data sources that may be used to remove ineligible voters who have moved, died, or received a disqualifying criminal conviction. We also selected state and local election offices in five states and conducted interviews with election officials to obtain information on policies and procedures for using selected data sources, and perspectives on their benefits and limitations. This appendix describes our data source and site selection methodologies, and additional information on the data sources and sites we selected. Data Source Selection To determine which data sources to include in our review, in June 2018 we sent a structured questionnaire to state election directors for each of the 49 states and the District of Columbia with voter registration requirements to identify commonly received data sources that states can potentially use to conduct voter registration list maintenance. The National Voter Registration Act of 1993 (NVRA) specifies certain categories under which election officials may remove registrants from voter registration lists, including: 1. if a registrant has moved outside of a jurisdiction and either (a) confirmed the move in writing or (b) failed to respond to an address confirmation mailing and failed to vote in two consecutive federal general elections subsequent to the mailing; 2. death of the registrant; 3. criminal conviction of the registrant, as provided for in state law; and 4. mental incapacity of the registrant, as provided for in state law. We asked state election directors to identify the sources from which data were received at either the state or local level at any point between January 2017 and May 2018. We summarized responses from election directors in 35 states and the District of Columbia to identify commonly received data sources. Table 6 provides a summary of responses to the structured questionnaire, with the data sources organized according to the NVRA categories that may be used to remove registrants from voter lists. From the list of commonly received sources above, we then selected six data sources that can be used to address the following NVRA categories for removing registrants—move outside election jurisdiction, death, and disqualifying criminal conviction.
These categories each account for more than 1 percent of total removals from voter registration lists nationwide, based on the most recent data reported to the U.S. Election Assistance Commission. We did not select any data source that addresses the "disqualifying mental incapacity" NVRA removal category since it accounted for less than 1 percent of total removals nationwide for this time period. Specifically, we selected three sources that address moves, two sources that address deceased registrants, and one source that addresses disqualifying criminal convictions, to generally reflect recent data reported on the distribution of registrant removals, by removal category, from voter registration lists nationwide. We also selected (a) at least one nationwide source that captures data from all states; (b) at least one source that only includes data specific to the particular state or local jurisdiction that receives data from the source; and (c) one interstate data exchange that involves the sharing of data between multiple states. We selected sources from each of these categories in order to identify potential issues that may arise when election officials match their voter registration data with various other types of data sources. Table 7 presents the data sources we selected for further review. State and Local Jurisdiction Selection To obtain information on policies and procedures for using selected data sources for voter registration list maintenance, and election officials' perceptions on the benefits and limitations of using them, we selected five states that indicated in their responses to our questionnaire that they had received data from at least five of the six selected data sources between January 2017 and May 2018. We also considered variation in states' population size, when possible, and geographic diversity in order to capture possible regional differences in election administration practices. See table 8 for a list of the states we selected and a summary of the selected data sources received by each state. For each of the five selected states, we selected two local election jurisdictions (counties or cities/towns)—one with a larger population and one with a smaller population—based on the recommendation of the state election officials, population size, and other factors. See table 9 for demographic information on the states and local jurisdictions we visited. Appendix IV: Comments from the Social Security Administration Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tom Jessor (Assistant Director), David Alexander, Justine Augeri, Colleen Candrl, Jamarla Edwards, Jonathan Ferguson, Alana Finley, Eric Hauswirth, Richard Hung, Amanda Miller, Heidi Nielson, Kevin Reeves, Christine San, Janet Temko-Blinder, Jeff Tessin, and Sarah Turpin made key contributions to this report.
Why GAO Did This Study The NVRA was intended to increase the number of eligible citizens who register to vote in federal elections, protect the integrity of the electoral process, and ensure that accurate and current voter registration rolls are maintained. GAO was asked to examine issues related to the NVRA's voter registration and voter registration list maintenance requirements, as well as issues related to election fraud. This report addresses (1) DOJ's efforts to ensure states and localities comply with NVRA requirements to offer registration opportunities and administer voter registration list maintenance programs, and address potential instances of election fraud; and (2) how selected data sources are used at the state and local level to help maintain voter registration lists, and perspectives on how these data sources help ensure list accuracy and address potential voter eligibility and fraud issues. GAO analyzed data on DOJ's efforts to ensure NVRA compliance and address election fraud, as measured by matters initiated and cases filed, for fiscal years 2001 through 2017 (the last full year of data available when requested from DOJ). This period covered eight federal elections. GAO also interviewed DOJ officials. GAO selected six commonly received data sources that may be used in list maintenance efforts. GAO reviewed literature and interviewed state and local election officials in five states for perspectives on how the data sources are used and any benefits and limitations. These states used at least five of the data sources and provided geographic diversity. The results from these five states are not generalizable, but provide insight into state and local perspectives on list maintenance. What GAO Found From fiscal years 2001 through 2017, the Department of Justice's (DOJ) Voting Section (which enforces the civil provisions of voting rights laws) initiated matters (e.g., investigations), filed cases against state or local governments in federal court, and engaged in other efforts to enforce provisions of the National Voter Registration Act of 1993 (NVRA). Specifically, the Voting Section initiated 99 matters involving allegations of NVRA violations related to voter registration opportunities and list maintenance, and filed 14 cases involving allegations of NVRA violations; eight of these cases included list maintenance allegations, four included registration opportunities allegations, and two included both types of allegations. In addition, DOJ's Public Integrity Section (which supervises nationwide election law enforcement and prosecutes selected cases involving alleged corruption by government officials) and U.S. Attorneys' Offices (which enforce criminal laws within their districts) engaged in efforts to address election fraud from fiscal years 2001 through 2017, including filing cases against individuals in federal court. For example: The Public Integrity Section initiated 33 matters and filed 19 cases related to election fraud, accounting for about 3 percent of its overall caseload. Of these cases, 17 involved vote buying and false information charges. U.S. Attorneys' Offices initiated 525 matters and filed 185 cases related to election fraud, accounting for about 0.02 percent of their overall caseload. Of these cases, 52 involved charges such as vote buying and voting more than once, and 49 involved conspiracy. GAO reviewed six data sources election officials may use to maintain voter registration lists and remove voters who become ineligible due to a move, death, or disqualifying criminal conviction: (1) the U.S.
Postal Service's National Change of Address (NCOA), (2) the Interstate Voter Registration Crosscheck Program, (3) returned mail, (4) the public version of the Social Security Administration's Death Master File, (5) state vital records, and (6) U.S. Attorneys' records on felony convictions. Election officials GAO interviewed and literature GAO reviewed identified benefits and limitations associated with each source. According to officials, each source helps improve list accuracy, despite some limitations, and list maintenance efforts in general help reduce opportunities for election fraud. For example, officials said that NCOA data helped them maintain accurate lists by identifying registrants who moved outside the election jurisdiction; however, they also noted that NCOA data may not capture all address changes because people do not always notify the U.S. Postal Service when they move. GAO incorporated technical comments provided by federal agencies and state and local election officials as appropriate.
Background Generally, Congress provides budget authority to agencies through the passage of appropriations acts each fiscal year. Appropriations allow agencies to incur obligations and make payments for specified purposes. When an appropriation expires and a new one is not enacted, a lapse in appropriations (also called a funding gap) results, and the affected agency or program may lack sufficient budget authority to continue operations. Funding gaps can occur at the beginning of a fiscal year when new appropriations, or a continuing resolution, have not yet been enacted. Funding gaps also can occur any time during the year when a continuing resolution expires and may affect a few agencies or all agencies across the federal government. We have previously reported that funding gaps, actual or threatened, are both disruptive and costly. The Antideficiency Act (ADA) prohibits agencies from obligating or expending funds in excess or in advance of an available appropriation unless otherwise authorized by law, as well as from accepting voluntary services for the United States except in cases of emergency involving the safety of human life or the protection of property. During a lapse in appropriations, employees may continue working if they are exempt from the lapse in appropriations or if an exception to the ADA applies (see figure 1). Exempt and excepted employees are defined as follows. Exempt employees are those who perform activities funded with budget authority that remains available despite the lapse in appropriations, such as multiple-year or no-year carryover balances. Available balances can also come from other authorities such as fee income that Congress made available for obligation. For the purpose of this report, we call employees who perform such functions exempt employees. Excepted employees are those who perform activities pursuant to a statutory authority that expressly authorizes an agency to enter into an obligation in advance of an appropriation, or to address emergencies involving the safety of human life or the protection of property, as described under the ADA. We have also recognized, in our prior legal opinions, other limited exceptions that may, under some circumstances, allow functions to continue during a lapse in appropriations. For example, Congress and the Executive branch may incur obligations to carry out core constitutional powers. Agencies also may incur those limited obligations that are incidental to executing an orderly shutdown of agency activity. Over the past 29 years, there have been six lapses in appropriations that led to government shutdowns, ranging in duration from 2 days to 35 days (see figure 2). Three shutdowns occurred in the past 7 years, and two of them were prolonged, lasting longer than 5 days (in fiscal years 2014 and 2019). In the event of a government shutdown, OMB is responsible for ensuring that agencies have addressed the essential actions needed to effectively manage the government shutdown. OMB does so by providing policy guidance and shutdown-related instructions. Specifically, OMB Circular A-11 directs federal agencies to develop contingency plans for use in the event of a government shutdown and to update these plans on a recurring basis. These plans are key documents that help ensure an orderly shutdown following a lapse in appropriations, as well as continuity of appropriate agency operations.
These plans also communicate policies and procedures to employees and external stakeholders that could be affected by the shutdown of operations. Three of Four Selected Agencies' Contingency Plans Generally Followed OMB Guidance but None Addressed a Potential Prolonged Shutdown OMB's Circular A-11 directs agencies to prepare contingency plans in anticipation of a lapse in appropriations. According to the guidance, contingency plans are to include information such as: (1) summaries of activities that will continue and those that will cease; (2) the amount of time needed to complete the shutdown activities; (3) the number of employees on-board prior to the shutdown; and (4) the number of employees to be retained during the shutdown. Agencies are also to explain the legal basis for each of their determinations to retain employees, including a description of the nature of the agency activities in which these employees will be engaged. Additionally, agencies' contingency plans are to explicitly describe any changes in operations that would be necessary should a lapse in appropriations extend past 5 days. According to OMB officials, OMB reviews agencies' contingency plans, but it does not formally approve them. Agencies are ultimately responsible for determining which activities will continue during a lapse in appropriations and which activities will cease. Using OMB Circular A-11, we identified 14 key information elements for agencies' contingency plans and used these as criteria to assess the selected agencies' plans. Of the selected agency components, ITA and CBP operated under the contingency plans of their respective agencies. Similarly, USTR officials said that the component operated under EOP's contingency plan. IRS, in contrast, had its own contingency plan for the fiscal year 2019 shutdown because Treasury did not have a department-wide plan. Agency contingency plans governing the shutdown operations of CBP, IRS, and ITA included most of the key information elements described in OMB Circular A-11. EOP did not address a majority of the key information elements in its contingency plan, which governed USTR's shutdown operations. Figure 3 shows how selected agencies' contingency plans aligned with OMB's guidance. Three of our four selected agencies—Commerce, IRS, and DHS—provided summary information at the beginning of their contingency plans about activities that would and would not continue during a lapse in appropriations. EOP's contingency plan did not include any information on activities that would and would not continue. The following table shows examples of exempt and excepted work activities from our selected agencies' contingency plans (see table 2). All four agencies provided the total number of employees on-board before the shutdown and how many would continue to work during the government shutdown. However, EOP's contingency plan did not break down these employees by ADA exception category, such as addressing emergencies involving the safety of human life or the protection of property, or carrying out core constitutional powers, as specified in OMB guidance. While the breakout of employees by ADA exception category was not in the DHS department-wide plan, CBP's component-level portion of the plan, which is not publicly available because of law enforcement sensitivities, contained these details.
This information is important because it helps an agency ensure proper oversight of operations and confirm that the right personnel are performing excepted work, in compliance with the ADA. None of the agencies we reviewed provided a complete description of potential changes to their activities and operations in the case of a prolonged lapse in appropriations—one lasting longer than 5 days—within their contingency plans. Officials at some of the selected agencies told us that the purpose of their contingency plans was only to document operations for the first 5 days of a shutdown, contrary to what is required in OMB's Circular A-11 for planning and documenting operations in anticipation of a potential prolonged shutdown. While three of the four agency contingency plans that we reviewed—Commerce, IRS, and DHS—provided some minimal details on how operational changes would be made in the event of a prolonged shutdown, such as designating personnel responsible, none provided the level of detail called for in OMB guidance. As discussed later, three of four selected agency components—ITA, IRS, and USTR—did have internal discussions on changes to operations in the event of a prolonged shutdown, according to officials. However, these discussions were not documented in the agency contingency plans. Given that shutdowns longer than 5 days have occurred in the past, it is important for agencies to consider and document the effects that a potential prolonged shutdown would have on operations in their contingency plans. Planning for potential prolonged shutdowns may assist the agencies with effectively managing changes in operations, and documenting these plans in public contingency plans may provide transparency to agency actions as a shutdown continues. OMB's guidance states that if an agency anticipates changes during a potential prolonged shutdown, contingency plans should include information such as points in time when the furlough status of an employee may change, how many employees would be affected, and the legal basis for the changes. This information element is mentioned in two separate sections of Circular A-11 rather than in one consolidated location. None of the selected agencies' contingency plans included complete information about (1) flexibilities available to supervisors if furloughed employees were unable to return to work on the day specified by the agency, including use of annual leave, compensatory time off, or credit hours; and (2) procedures for resuming program activities, including steps to ensure appropriate oversight and disbursement of funds upon the end of a shutdown, as specified in OMB's guidance. Officials at selected agencies said that this information was available in internal guidance and fact sheets for employees, but that they did not include it in contingency plans, which are accessible to all employees during a shutdown. Including this information in contingency plans is important because it helps clarify agencies' expectations for returning employees, and its inclusion may help agencies experience a more timely resumption of activities following a shutdown. As previously mentioned, USTR, as a component of EOP, operated under EOP's contingency plan and did not have a separate plan for the fiscal year 2019 shutdown.
While EOP’s contingency plan contained some information on USTR such as total number of employees on-board before the shutdown and employees to be retained during the shutdown, the plan did not fully address 10 of the 14 information elements outlined in OMB’s Circular A-11. Information that was not provided includes: (1) a breakout of exempt and excepted positions by category (e.g., available budget authority, emergencies involving safety of human life or protection of property, etc.); (2) summaries of activities that would or would not continue during a lapse; (3) designation of personnel responsible for implementing and adjusting the contingency plan if conditions change; and (4) methods for notifying employees that the shutdown has ended and when to return to work. Formal contingency plans that address the information elements specified in OMB guidance help agencies prepare for and oversee shutdown operations, and provide transparency to agency actions during a lapse in appropriations. Without a plan that covers these elements, USTR risks miscommunication with employees and other stakeholders that could negatively impact an orderly shutdown and the effective resumption of activities at the end of a lapse. Three of Four Selected Agency Components Discussed Potential Changes during a Prolonged Shutdown, and All Made Operational Changes IRS, ITA, and USTR Planned for Potential Operational Changes Needed in the Event of a Prolonged Shutdown, but CBP Did Not Officials at IRS, ITA, and USTR discussed anticipated operational changes in the event of a prolonged shutdown internally while planning for the fiscal year 2019 shutdown. In one instance, an agency component documented these discussions in planning documents separate from agency contingency plans under which the component operated. However, as mentioned previously, potential operational changes were not documented in any of the contingency plans of our selected agencies. Potential operational changes generally involved recalling additional employees who had been furloughed at the beginning of the shutdown to carry out activities that the agencies categorized as excepted from the ADA or exempt due to other funding sources. IRS: IRS officials said that their initial planning was for a shutdown lasting 5 days or less. Within their contingency plan IRS noted that it would amend the plan if the shutdown lasted longer. On December 27, 2018, 6 days into the shutdown, IRS issued an updated contingency plan. According to IRS officials, this updated plan was assembled by contacting each of IRS’s 23 organizational offices to find any new activity requirements that would lead to changes in the contingency plan. In its amended plan, IRS added approximately 60 positions as excepted or exempt. Examples of activities that employees in these positions would support included: (1) communications efforts through IRS websites, (2) end-of-month financial operations, and (3) managing on-boarding for employees hired under Public Law 115-97, commonly referred to as the Tax Cuts and Jobs Act. IRS officials said they knew about these operational changes at the time of initial shutdown planning, but did not document all these operational needs. IRS program officials told us that being informed of anticipated operational changes as early as possible would have helped them prepare for the shift in workload. 
For its fiscal year 2020 contingency plan, IRS asked plan contributors to identify "as needed" positions that could be activated during a potential prolonged shutdown. Plan contributors also identified positions that would be needed if a shutdown lasting more than 5 days were to occur as IRS approached the tax filing season. ITA: Prior to the shutdown, ITA officials prepared a list of upcoming activities for the next 30 to 60 days to determine the potential scope of activities affected by a government shutdown. Activities included trade shows, meetings, and other critical operational deadlines. According to officials, ITA worked with its General Counsel to determine whether activities could continue as excepted from the ADA or exempted because funding was available from another source. For those activities that could not be deemed excepted or exempt, ITA officials said that they were prepared to notify affected parties of the cancellation or postponement of the activities. For upcoming activities, ITA established dates when preparation would need to begin. In addition, ITA officials prepared temporary exception requests for employees to be recalled from furlough status in time to conduct needed preparation and carry out scheduled activities. ITA officials told us that they used and updated a tracker daily during the shutdown to ensure that all information remained current. ITA officials also said that, prior to the shutdown, they collected information on official travel planned around the anticipated time of the shutdown. They said that it was important to gather this information because once employees were furloughed, it would become more difficult to gather complete and timely information on these travel plans. USTR: Prior to the shutdown, USTR officials asked offices to provide lists of positions that would need to be excepted during the first 2 weeks of a potential shutdown. This allowed USTR to anticipate operational needs if a shutdown lasted longer than 5 days. USTR officials said that flexibility was important as the potential shutdown approached because it allowed offices to adjust excepted position lists based on additional excepted activities or postponement of activities. USTR officials said that, in their experience, it is difficult to anticipate all the operational changes needed in the event of a shutdown longer than 2 weeks, especially as the agency component relies on partners at other agencies that may or may not be affected by the shutdown. CBP: Although the non-public portion of the DHS plan for CBP included sections that describe functions that may resume in the event of a prolonged shutdown, CBP officials said that these sections were not used in anticipation of the fiscal year 2019 shutdown. Specifically, the sections provide the opportunity for officials to indicate how many employees would be recalled to perform functions, but in the fiscal year 2019 plan almost every section indicates zero employees. Despite OMB guidance on prolonged shutdowns, officials said they believed that the guidance applied exclusively to the first 5 days of a shutdown. Use of these sections of the CBP plan would help provide clearer expectations to the agency component's workforce about who may be recalled to perform work activities during a shutdown. For the fiscal year 2020 contingency plan, CBP officials said that they asked offices to analyze and communicate what, if any, additional employees would be needed to work if a shutdown were to extend past 5 days.
However, our review of the 2020 CBP plan found that, similar to the plan for fiscal year 2019, it largely does not indicate how many employees would be recalled to perform functions in the event of a prolonged shutdown. All Selected Agency Components Made Operational Changes during the Prolonged Shutdown for Varying Reasons During the fiscal year 2019 shutdown, each agency component that we reviewed determined that it needed to make changes affecting the number of excepted employees working during the shutdown. According to agency component officials, these changes were due to the length of the shutdown, external events, and changes to the determination of excepted work. The length of the fiscal year 2019 shutdown was the most common reason cited by officials for operational changes. CBP: During the fiscal year 2019 shutdown, CBP responded to an increase in foreign nationals arriving at the southern U.S. border. In response to this external event, CBP officials told us that they identified a need to train additional law enforcement officers and agents to perform excepted activities. According to the DHS contingency plan, new hire training for law enforcement officers may be an excepted activity if the requesting agency component establishes a reasonable likelihood that a delay in new hire training would compromise the safety of human life or protection of property. According to DHS documents, this was a change from previous shutdowns, when new hire training was not an activity excepted from the ADA. CBP officials told us that they discussed this issue internally before the shutdown, but processing the change through DHS's Chief Financial Officer, DHS General Counsel, and OMB occurred after the shutdown began. CBP has incorporated this change into its updated, non-public portion of DHS's contingency plan. IRS: As the length of the shutdown increased, IRS identified mission requirements that it determined necessitated the recall of additional employees. For example, as it transitioned to its filing season operations, IRS recalled mail center employees to oversee the collection of taxes and protection of statute expiration. IRS's updated fiscal year 2019 filing season contingency plan, published on January 15, 2019, incorporated this activity along with the additional 560 employees recalled for one division to perform the work. IRS said in the updated plan that the ADA exception for this work was the protection of life and property. According to IRS documents, Treasury officials evaluated plan updates for compliance with the ADA, and then shared the plan with OMB prior to implementing changes. IRS also made operational changes during the fiscal year 2019 shutdown that were based on changes to the determination of which work activities were excepted from the ADA. During the shutdown, IRS announced that it would process tax returns beginning January 28, 2019, and refund taxpayers as scheduled. In 2011, OMB directed IRS not to pay tax refunds in the event of a lapse in appropriations. However, at the request of Treasury and IRS, OMB revisited this position and, on January 7, 2019, OMB informed Treasury that tax refunds may be paid during a lapse in appropriations. As a result of this determination, IRS added approximately 16,000 additional excepted positions to its filing season contingency plan for the purpose of issuing refunds. This change was documented in its updated contingency plan, published on January 15, 2019.
In October 2019, we determined that the agency violated the ADA by processing tax returns and issuing refunds to taxpayers because it lacked available budget authority to support these activities and no exception to the ADA permitted IRS to incur these obligations. ITA: According to ITA officials, ITA updated its activity list during the course of the fiscal year 2019 shutdown. They said the update was needed to help determine which preparation activities could continue for future events, such as trade shows that bring international delegations, and which activities or events would have to be canceled if the shutdown continued. ITA officials said they had to evaluate cancellation clauses in ITA's contracts with these trade shows to decide whether and when to cancel. ITA recalled employees on a temporary basis, as needed, to perform these tasks. ITA officials told us that they followed departmental guidance in requesting employee recalls during the shutdown. ITA submitted proposed changes to Commerce's Office of the Deputy Assistant Secretary for Administration, which coordinated department-level review and approval. Commerce officials told us that senior leadership discussed changes to the contingency plan with OMB officials over the course of the shutdown. Despite changes to the number of excepted employees, Commerce did not publish an updated contingency plan during the fiscal year 2019 shutdown. Commerce officials told us that, through discussions with OMB, they determined that publishing an updated plan was not necessary due to the relatively small number of changes to the total number of excepted and exempt employees. USTR: Prior to the beginning of the shutdown, USTR estimated that it could continue full operations for 3 to 4 weeks with available funding. Because the shutdown lasted beyond 3 weeks, USTR furloughed a majority of its employees on January 14, 2019, once those funds were no longer available. In the absence of available funding, USTR officials decided that some functions were excepted from the ADA under the justification that the agency component works to discharge the president's constitutional duty and power to conduct foreign relations. USTR officials stated that component leaders identified the highest priority mission activities to continue during the shutdown, such as trade negotiations with China and work related to the North American Free Trade Agreement. Officials decided not to continue other activities, such as preparations for the 2019 Group of 20 Summit. USTR officials told us that, in consultation with OMB, USTR excepted more than the 74 employees listed in the EOP contingency plan published December 21, 2018. According to USTR documents, between 88 and 101 excepted employees were working during the last 2 weeks of the shutdown. Officials told us that these changes were made to carry out critical, excepted activities and that changes were communicated daily to EOP. Two of Four Selected Agency Components Documented Shutdown Procedures, and None Had Sufficient Controls for Workspace Access during a Shutdown Agency preparation for a government shutdown can require extensive changes in day-to-day operations. Having established policies and procedures prior to a shutdown can help agencies implement these changes successfully. Establishing these policies and procedures requires timely and transparent planning and communication to ensure that agencies function as effectively as possible during a shutdown.
Internal controls related to planning for a government shutdown include designating roles and responsibilities, establishing processes for planning activities that help meet objectives, and documenting those processes. Internal controls related to communication prior to and during a government shutdown include ensuring that information communicated is timely, sufficient, and delivered to all appropriate individuals. Figure 4 summarizes the extent to which selected agency components incorporated applicable internal controls into their planning and operations prior to and during the fiscal year 2019 shutdown, as discussed in detail in the following sections. Selected Agency Components' Policies and Procedures Were Generally Consistent with Internal Control Principles for Planning and Communication, Although Level of Documentation Varied Selected Agency Components Identified Shutdown Planning Roles and Responsibilities The agency components we reviewed identified staff needed to plan for the fiscal year 2019 shutdown and tasked each with certain responsibilities. According to Standards for Internal Control in the Federal Government (Internal Control Standards), agency component management should implement its control activities—processes, procedures, techniques, and mechanisms—through policies. Documenting roles and responsibilities for implementing the policies can help agencies meet their objectives related to managing a government shutdown. The following examples illustrate the roles and responsibilities of staff who helped determine which activities would continue during the fiscal year 2019 shutdown. CBP: CBP's non-public portion of the DHS contingency plan described the key responsibilities and accountable parties for shutdown preparation. For instance, heads of offices determined which of their employees would remain at work to perform exempt or excepted functions during the shutdown. The CBP Hiatus Coordinator communicated daily with the Hiatus Points of Contact who managed the shutdown processes within each CBP office. For example, the Hiatus Points of Contact determined what functions would continue during the shutdown to help ensure activities aligned with OMB guidance. Officials said that CBP's Office of Chief Counsel reviewed each excepted function to help ensure it met the legal standard for the relevant ADA exception category. IRS: IRS internal process documents outlined the steps needed to prepare for a shutdown and the accountable parties for implementation. For example, IRS had a Lapse Program Manager who coordinated shutdown activities and helped develop the contingency plan, including identifying and evaluating excepted roles and aligning them with people, positions, and exception categories. IRS Chief Counsel was then responsible for reviewing the contingency plan for compliance with the ADA, followed by a review from Treasury's General Counsel. According to the process documents, Treasury ultimately approves IRS's contingency plan. ITA: ITA employed a "bottom-up" shutdown planning process, according to ITA officials. As part of this process, ITA officials said they identified activities to continue during a shutdown, as well as the ADA exceptions to justify the activities, before submitting plans to General Counsel for review. However, ITA did not document its roles and responsibilities because the component relied on the planning processes documented in Commerce's shutdown contingency plan, according to ITA officials.
The agency’s plan provided instructions for submitting component shutdown plans to Commerce’s Office of the General Counsel and Office of Human Resources Management. Commerce’s contingency plan did not, however, contain information about component-specific roles and responsibilities related to planning for a potential government shutdown. Without documenting roles and responsibilities, ITA cannot ensure that the appropriate officials take the necessary steps to effectively prepare and execute plans for any future potential government shutdowns. USTR: USTR instructed Assistant U.S. Trade Representatives to indicate which employees would perform excepted work based on the highest priority initiatives and activities. Two weeks prior to the shutdown, the Office of Administration and General Counsel used this information to develop a plan for the shutdown, followed by senior leadership approval, according to USTR officials. While USTR described the roles and responsibilities of its officials in planning for the shutdown, USTR did not document these roles and responsibilities because it used this same process in previous shutdowns, and responsible parties were accustomed to the process and knew their roles well, according to USTR officials. Documenting roles and responsibilities would help USTR ensure that the appropriate officials take the necessary steps to effectively prepare and execute plans for future potential government shutdowns, especially when officials currently familiar with the process no longer work for USTR. CBP and IRS Documented Their Shutdown Preparation Processes, but ITA and USTR Did Not Internal Control Standards states that agency component management should implement its control activities through policies. Agency components can effectively do so, in part, by documenting processes for implementing policies related to government shutdowns. Shutdown preparation process documents at selected agency components included descriptions of activities to complete prior to the shutdown, such as updating and reviewing contingency plans, and preparing guidance and communication for managers and employees, among other steps. CBP: CBP’s non-public portion of the DHS contingency plan contained actions necessary to prepare for an impending shutdown, in addition to the roles and responsibilities discussed above. For example, CBP officials would need to identify executive points of contact who would continue working during the shutdown, prepare employee communications such as furlough notices, and prepare and distribute guidance for employee training during the shutdown, according to CBP’s shutdown guidance. This guidance also included descriptions of services, such as facilities maintenance, mail operations, and use of information technology equipment that would remain available and how, if at all, that work would be accomplished during a shutdown. IRS: IRS developed detailed process maps for its shutdown processes to document its planning and implementation activities and help improve understanding of the roles and responsibilities of staff at each step, according to IRS officials. IRS’s planning process map showed the order in which staff should perform certain tasks, a description of each task, and the responsible party for each task. For instance, the document showed who should draft, review, revise, and approve the shutdown contingency plan, and when each step should occur by each party. Figure 5 shows a streamlined version of IRS’s process map for the shutdown planning phase. 
Similarly, IRS’s implementation process map detailed steps for communicating with employees prior to a shutdown and updating contingency plans during a shutdown. During the shutdown, IRS distributed tools and guidance with instructions for implementing each step, according to an IRS official. ITA: ITA prepared a list of activities scheduled for the first 80 days of the fiscal year 2019 shutdown and determined the activities that would continue during the shutdown. While ITA officials described the process of assembling this list to us, they did not provide evidence to show that they had documented the process. According to ITA officials, ITA performed a similar exercise during the fiscal year 2014 government shutdown. Additionally, ITA officials said that they followed Commerce’s contingency plan to plan for the fiscal year 2019 shutdown. However, that document provided general information at the agency level. It did not provide information on the shutdown planning processes used by ITA, such as ITA-specific actions to take in the planning process. ITA did not provide documents showing these processes. Documentation of shutdown planning procedures would help ITA ensure that officials take the necessary steps to effectively prepare for future potential government shutdowns. USTR: USTR officials described the agency component’s shutdown processes but did not have the processes fully documented. Instead, USTR relied on the institutional knowledge of its officials to prepare for the fiscal year 2019 government shutdown. USTR officials told us that staff implementing shutdown processes for the fiscal year 2019 government shutdown also did so during the fiscal year 2014 shutdown. These officials told us that they used the same processes for both shutdowns, and that the staff involved were familiar enough with the processes to implement them effectively in fiscal year 2019. USTR communicated through email the steps for employees to take prior to furloughs, such as providing personal contact information to supervisors. USTR also provided EOP’s shutdown guidance to employees, which included additional information for employees, such as limitations to work site access and seeking outside employment while furloughed. However, EOP’s guidance did not contain details about USTR’s shutdown preparation process. USTR provided guidance to Assistant U.S. Trade Representatives about identifying excepted employees, but this guidance did not include information about other planning processes. Without documentation of all shutdown planning procedures, USTR cannot ensure that officials take the necessary steps to effectively prepare for future potential government shutdowns. Selected Agency Components Informed Employees of Shutdown Procedures Internal Control Standards states that management should communicate sufficient information, such as policies and procedures for implementing shutdown processes, to all appropriate individuals in a timely manner. We found that selected agency components used a variety of methods to communicate shutdown-related plans with employees in a timely manner prior to or at the beginning of the fiscal year 2019 shutdown. Methods included distributing policies through managers, referring employees to internal websites, and component-wide emails. Additionally, all selected agency components communicated individual furlough decisions to employees once the shutdown began. 
Representatives from employee organizations whose members worked at CBP and IRS said that, despite minor communication challenges between components and employees, they generally found shutdown-related communication to employees to be adequate. CBP: CBP encouraged supervisors to communicate to employees what could be expected of them should a shutdown occur, according to CBP officials from the Office of Field Operations. DHS directed CBP to email furlough notices to affected employees once the shutdown began, according to CBP officials, and CBP received email read receipts to help ensure the notices reached all employees. Each office confirmed with CBP that notices were sent to all affected employees, according to CBP officials. CBP held daily meetings with management during the shutdown to answer questions and share information, including information about travel, pay, contract actions, review and approval of employee recalls, and updates to the CBP contingency plan, according to CBP officials. Organizational points of contact then shared this information with managers, who provided appropriate information to employees. Furloughed employees did not have permission to access internal online resources, because CBP had instructed them not to use CBP systems during the shutdown except in limited circumstances. In response, CBP developed a mobile application so that furloughed employees could see such updates on their personal cell phones in the event of a future shutdown, according to CBP officials. Representatives of CBP bargaining unit employees told us that, aside from limited instances of inaccurate or delayed information, CBP effectively communicated shutdown information to employees using multiple communication channels. IRS: IRS hosted internal training sessions prior to the shutdown to clarify roles and responsibilities for managers and excepted, exempt, and furloughed employees. IRS also made resources available to employees on its website, according to IRS officials, including shutdown checklists and a Frequently Asked Questions document with information on preparing for an orderly shutdown, among other things. Two days prior to the shutdown, OMB authorized IRS to direct managers to verbally inform employees of their furlough or excepted status in the event of a shutdown, according to IRS officials. These officials told us that IRS directed managers not to distribute status letters until December 22, 2018, the first day of the partial government shutdown. IRS's implementation process map also shows that officials were to send status letters at the start of a shutdown. A representative of IRS bargaining unit employees told us IRS was responsive to employee questions during the shutdown and tried to address all issues raised. The representative noted that IRS had some challenges communicating with recalled employees as the shutdown continued but also said that IRS did the best it could, given its limitations, and did not identify ways to improve employee communication. ITA: Commerce directed ITA to distribute notices to employees explaining individuals' furlough or excepted status after the shutdown began, according to ITA officials. ITA officials said they issued these notices on December 26, 2018, the first working day of the shutdown, along with a fact sheet about tasks for employees to complete that day. The fact sheet also communicated policies regarding scheduled leave and workspace access during the shutdown, among other things.
ITA asked employees to confirm receipt of the notices during the orderly shutdown period, after which ITA certified to Commerce that it had issued all notices, according to ITA officials. A representative for bargaining unit Foreign Service Officers at ITA suggested that employees might benefit from receiving some information prior to a shutdown, including standard processes that ITA has established in policy and that remain the same between government shutdowns. USTR: Prior to furloughing employees, USTR instructed employees to visit its public website daily to verify USTR's operating status. The website provided information on transit benefits, unemployment compensation, and an employee assistance program, among other things. USTR also communicated changes in operating status through notifications to employees' personal telephone numbers and email accounts during the shutdown, according to USTR officials. These officials told us that in-person communication worked well to convey information to staff due to the small size of the agency component, approximately 250 staff. USTR officials said they emailed all employees about furloughs that would begin on January 14, 2019, updated the operating status on USTR's phone line and website, and directed employees to stay apprised of USTR's shutdown status. Before furloughs began, USTR instructed employees to provide managers with personal contact information, which, according to officials, managers used to recall employees during the shutdown. USTR officials said that managers also communicated with individual employees regarding whether they would continue to work after January 14, 2019. USTR employees were not represented by an employee organization. Selected Agency Components Recalled Employees during the Shutdown, but ITA and USTR Did Not Document Recall Processes Internal Control Standards states that agency component management should design and implement control activities, such as shutdown processes, through policy. Agencies can effectively do so, in part, by documenting processes and roles and responsibilities for staff implementing those processes. During the fiscal year 2019 government shutdown, agencies recalled previously furloughed employees to return to work as the shutdown continued and circumstances changed. While each agency component had processes to recall employees to work during the shutdown, not all components documented these processes. CBP: CBP's non-public portion of the DHS contingency plan for fiscal year 2019 documented the employee recall process for government shutdowns. According to the plan, offices were to send a written request for a recall to the Executive Assistant Commissioner, Enterprise Services, specifying the number of employees to recall and the justification for doing so. DHS's Budget Division and Office of the General Counsel also reviewed these recall requests, according to a DHS official. As with its initial excepted and furloughed employee notices, CBP used email read receipts to determine whether employees received updates to their furlough or excepted statuses and information on CBP recall processes. Additionally, each office had to verify with the CBP Hiatus Coordinator that updated status notices were sent to employees. IRS: IRS documented its procedures for recalling newly excepted employees during the shutdown in its implementation process map.
IRS communicated these procedures to employees during the shutdown via its emergency web page and hotline, an updated Frequently Asked Questions document, and engagement with the employee organization representing IRS employees in the bargaining unit, according to IRS officials. IRS delegated the process of recalling employees to its 23 organizational offices. During the recall process, IRS managers contacted excepted employees to discuss duties and the date to report to work, according to IRS officials. In many instances, IRS recalled furloughed employees for a period of time and then furloughed them again when needed, according to an IRS official. This official told us that IRS issued new furlough letters to employees each time this occurred. Similarly, IRS offices used an intermittent furlough letter when excepted employees planned to be away from work. According to the IRS official, doing so provided documentation of whether those excepted employees worked or were furloughed on a given day. ITA: According to ITA officials, once Commerce approved a temporary exception during the shutdown, ITA's shutdown coordinator issued a recall letter to employees. ITA issued recall notices for temporary exceptions so that employees could perform specific work activities during the shutdown. Once employees completed those activities, ITA issued another furlough notice to those employees, according to ITA officials. ITA had a daily employee tracking document that showed exception start and end dates, and whether recall letters and subsequent furlough letters were issued to each employee. However, ITA did not document its recall process. As with its shutdown planning processes, ITA relied on Commerce's employee recall processes instead of documenting its own specific processes, according to ITA officials. However, Commerce's guidance did not contain information about how ITA developed temporary exception requests or how ITA processed the recalls. Without documentation of employee recall processes, ITA cannot ensure that officials are effectively implementing their processes during a potential future shutdown. Furthermore, officials who previously implemented shutdown-related processes may not be available during future shutdowns. Documentation ensures that processes that have been deemed effective can be replicated by others in the future. USTR: USTR recalled additional employees to perform excepted work during the shutdown. On each day after furloughs began, USTR recalled up to 30 employees beyond the 74 excepted employees in EOP's shutdown contingency plan. The Chief of Staff and Deputy Chief of Staff reviewed a list of excepted employees each day to identify adjustments to the number of excepted employees needed, according to USTR officials. These officials told us that when USTR offices requested employee recalls, the Chief of Staff and Deputy Chief of Staff consulted with the responsible Deputy U.S. Trade Representatives to make necessary changes. USTR officials did not have a documented process for recalling these employees. As with its pre-shutdown contingency planning processes, USTR relies on institutional knowledge to carry out its recall procedures, according to USTR officials. Documentation of employee recall processes would help USTR ensure that officials effectively implement these processes during future shutdowns, especially given that officials who previously implemented shutdown-related processes may not be available during future shutdowns.
Selected Agency Components Reviewed Plans and Operations to Identify Lessons Learned for Future Government Shutdowns Officials at all of the agency components we reviewed said that they had reviewed or planned to review their shutdown processes and incorporate any identified solutions into their internal planning documents or into agency contingency plans. CBP: CBP incorporated changes to its policies on employee leave and absences into its non-public portion of the DHS fiscal year 2020 contingency plan for a potential shutdown. For example, CBP's fiscal year 2020 plan now contains examples of when supervisors may approve absences for excepted employees, such as for previously approved and ongoing requests under the Family and Medical Leave Act of 1993. IRS: Following the fiscal year 2019 shutdown, IRS reviewed its processes, requesting input from offices about ways to improve those processes in the case of future government shutdowns. Some improvements identified by offices included modifying current lapse plans to incorporate a medium- and long-term view, hosting training that focuses on frequently asked questions and managerial and employee responsibilities, and creating user-friendly access to information. ITA: ITA planned to cooperate with partner agencies on future excepted activities, according to ITA officials. These officials said they were in contact with their interagency partners at the time of our review and would work with them prior to a potential future shutdown to determine whether to submit requests for excepted work for certain activities. USTR: USTR reviewed its processes for the fiscal year 2019 shutdown and determined that its processes operated effectively and would not require changes for future shutdowns, according to USTR officials. Selected Agency Components Generally Tracked Employees Working but Did Not Have Controls for Workspace Access during the Shutdown Three Selected Agency Components Tracked the Number of Employees Who Worked during the Shutdown Internal Control Standards states that agency component management should implement control activities through policies. During a government shutdown, agencies must limit the work performed to only exempt or excepted activities. Establishing limits for the number of employees working during a shutdown can help achieve this goal, and agencies can document these limits in their shutdown contingency plans. Tracking the number of employees working during a shutdown can help agencies ensure that they operate in accordance with their established contingency plans and prevent violations of the ADA. CBP: According to officials, CBP did not direct program offices to perform daily head counts of employees working and did not track the number of employees who worked during the shutdown. CBP officials told us that it would have been difficult to track employees because CBP did not have the systems or data to match the number of planned excepted employees with the number of employees who actually worked during the shutdown. Instead, CBP relied on managers to ensure that individual offices did not exceed their approved number of excepted positions during the shutdown. While individual offices could have opted to track the number of employees working each day for this purpose, CBP officials said that they did not direct all offices to do so. Tracking the number of employees who worked during the shutdown would help CBP ensure that controls to limit who can perform work during a shutdown function as intended.
It would also help CBP ensure that its operations are consistent with contingency plans. IRS: IRS tracked the number of employees who worked each day during the shutdown but faced challenges in doing so. IRS directed managers to ensure that the number of excepted employees in each office did not exceed the number of approved positions in the contingency plan, according to IRS officials. An IRS official told us that each office had discretion for how it complied with this requirement, such as by requiring a daily headcount of employees. For example, during the shutdown, IRS's Wage and Investment Division documented the office or function under which employees worked and the number of employees who worked in each. However, as headcounts proved to be time-consuming for offices—Wage and Investment tracked up to 11,000 employees on one day—IRS officials told us they plan to move to an automatic tracking system in the future. According to IRS officials, IRS hosted daily calls with senior executives and Lapse Program Managers for each office to discuss the daily implementation of the shutdown contingency plan. IRS officials told us that the Heads of Office and Lapse Program Managers oversaw daily operations in each of their offices to help ensure operations were consistent with contingency plans. For example, Wage and Investment officials told us that the Wage and Investment Commissioner met daily with teams to discuss activities performed to help ensure that employees performed only the work in the contingency plan approved prior to the shutdown. ITA: ITA maintained a daily tracker of the excepted employees scheduled to work each day. ITA used this tracker to record excepted employee names, projects and tasks, exception start and end dates, exception categories, and travel information as appropriate. Officials used this information to determine whether employees received the appropriate furlough or excepted status notice during the shutdown. USTR: USTR tracked which employees worked during the shutdown after January 14, 2019, in accordance with EOP guidance. EOP guidance says that "all EOP components are required to compile and report daily the name of each excepted employee and certify the hours worked that week for the duration of a lapse in appropriations to the group responsible for payroll." USTR provided EOP with daily lists of excepted staff during the shutdown. These lists helped account for those who were guaranteed pay for work performed during the shutdown, according to USTR officials. Selected Agency Components Had Insufficient Controls for Physical and Virtual Workspace Access Internal Control Standards states that agency component management should design and implement control activities through policies to help meet objectives. Effective implementation includes determining the policies necessary to operate a process based on objectives, such as limiting physical and virtual employee access to agency component workspaces and networks. CBP: In the notices it provided to furloughed employees at the start of the fiscal year 2019 government shutdown, CBP advised employees that they must remain away from their workplace unless and until recalled. These furlough notices and CBP's non-public portion of the DHS contingency plan also stated that employees could not use their government-issued devices for any purpose other than receiving updates and emergency notification from their supervisors.
However, according to CBP officials, CBP did not have additional controls to limit employee access to physical or virtual workspaces, such as removing furloughed employees' ability to log on to CBP networks or devices. DHS officials indicated that it would be difficult to monitor access for all excepted employees during a shutdown, especially given that most CBP employees continued to work during the fiscal year 2019 shutdown. IRS: IRS did not have sufficient controls to limit building access or virtual workspace access during the shutdown. While IRS developed lists of excepted and exempt employees who could work during the shutdown, IRS did not use these lists to grant or deny access to facilities, according to IRS officials. They told us IRS primarily used these lists to ensure it could provide sufficient services to each building based on the number of employees expected to work during the shutdown. IRS's guidance to furloughed employees stated that employees should not use government-issued mobile phones or log in to their government accounts remotely, and managers discussed this requirement with employees, according to IRS officials. IRS also directed employees not to use other government-furnished equipment such as computers, according to IRS officials. However, we found no additional controls to limit virtual network access during the shutdown. During the shutdown, IRS frequently changed which excepted employees performed excepted functions, resulting in a rotating workforce, according to IRS officials. IRS officials believed it would be difficult to control physical or virtual access for all excepted employees in future shutdowns since access needs changed as frequently as every hour depending on which employees worked. ITA: ITA followed Commerce procedures to develop building access security lists to help ensure building access for excepted and exempt employees during the shutdown, according to ITA officials. Prior to the shutdown, Commerce directed agency components to prepare and submit building access security lists to the department's Office of Security each day. If an employee tried to enter the headquarters building during the shutdown, the Office of Security would contact an ITA official to verify whether that employee could enter, according to ITA officials. These officials told us that employees not on the building access security list were not granted access to the headquarters building during the shutdown. While ITA had controls to limit physical workspace access during the shutdown, it did not have sufficient controls in place to limit virtual access. According to ITA officials, all furloughed employees were instructed not to use their government devices or access the ITA network virtually, and furlough notices stated that furloughed employees could not work at an alternative worksite during the shutdown. However, ITA officials believed that implementing additional controls, such as turning off network access for furloughed employees, would complicate ITA's process for granting temporary exceptions for employees during a shutdown. USTR: USTR provided employees with EOP guidance prior to implementing furloughs on January 14, 2019. This guidance instructs furloughed employees not to access their place of work or use government-issued cell phones or computers. USTR officials told us they did not have controls in place to monitor employee building access or prevent furloughed employees from entering physical USTR workspaces.
These officials told us they provided adequate communications, instructions, and guidance to employees about who could access physical and virtual workspaces. USTR officials also told us that they did not have controls in place to monitor or limit employee access to virtual USTR workspaces. According to these officials, USTR does not maintain or monitor the EOP-provided mobile communications devices and information technology network. Instead, provision and control of telecommunications and information technology, such as those identified, are the responsibility of the Presidential Information Technology Community. While agency components may face challenges implementing workspace access controls, such as limiting network access for a large number of employees, these steps are nevertheless important to take. Having sufficient controls to limit who can perform work during a shutdown would help agency components ensure that they operate consistently with the ADA and with contingency plans that are designed to help them operate effectively and avoid misuse of government resources during a shutdown. Agency component management can tailor controls to meet the component's unique needs. Specific controls used by an agency component may be different from those used by other components based on a number of factors, such as differences in the mission, size, or operational environment of the component. Conclusions Government shutdowns are disruptive events that have spanned multiple weeks in recent years. Given the length of some shutdowns, it is important for agencies to have robust plans and established internal controls to effectively communicate, plan for potential changes, and manage operations prior to and during a shutdown. In addition, documentation of these plans and controls helps ensure that agencies can replicate their actions in the event of future shutdowns. According to OMB, agencies should have detailed contingency plans in place prior to a potential lapse in funding to ensure an orderly shutdown of operations. While three of four agencies' contingency plans that we reviewed addressed most elements laid out in OMB's guidance, we identified three elements for which all selected agencies had missing or incomplete information in their contingency plans. When asked about these deficiencies, agency officials often cited internal documents or discussions as addressing these information elements. Internal documentation and guidance can be useful in planning for a potential shutdown, but they do not provide the level of transparency of contingency plans, which are generally available to the public and furloughed employees. Contingency plans that address all information elements specified in OMB guidance help ensure that agencies are prepared for potential shutdown scenarios, and provide transparency to agency actions during a lapse in appropriations. In addition to contingency plans, the agency components we reviewed all had internal processes related to planning for and managing operations during a shutdown. However, not all agency components documented these processes. Without documentation of shutdown operations, agencies may not be able to cease operations in a timely manner, and agencies' actions may not be transparent to OMB, Congress, and the public during future shutdowns. Additionally, proper documentation of processes can help preserve institutional knowledge that might otherwise be lost.
During a lapse in funding, agencies must ensure that they do not violate the ADA, which prohibits agencies from obligating or expending funds in the absence of appropriations unless otherwise authorized by law, and from accepting voluntary services for the United States except in cases of emergency involving the safety of human life or the protection of property. Contingency plans are one control that agencies use in this effort. Agencies must have assurance that the contingency plan is being followed daily during a shutdown. This assurance can be obtained through controls that (1) track and document the number of employees who actually worked daily during the shutdown, and (2) limit physical and virtual workspace access to appropriate employees. Three of four agency components we reviewed tracked employees who worked, one had sufficient controls on physical access to workspaces, and none had sufficient controls to limit virtual access. Without these controls, agencies are at an increased risk that contingency plans will not be followed, thus diminishing their value as a mechanism to ensure ADA compliance. Recommendations for Executive Action We are making a total of 14 recommendations: four to USTR, three each to CBP and IRS, two to ITA, and one each to the Departments of Commerce and Homeland Security. The Secretary of Commerce should align the agency's contingency plan with OMB guidance by including (1) plans for a potential prolonged shutdown; (2) flexibilities available to supervisors if furloughed employees were unable to return to work after the end of the shutdown; and (3) procedures for resuming program activities, including steps to ensure appropriate oversight and disbursement of funds upon the end of a shutdown. (Recommendation 1) The Secretary of Homeland Security should align the agency's contingency plan with OMB guidance by including (1) plans for a potential prolonged shutdown; (2) flexibilities available to supervisors if furloughed employees were unable to return to work after the end of the shutdown; and (3) procedures for resuming program activities, including steps to ensure appropriate oversight and disbursement of funds upon the end of a shutdown. (Recommendation 2) The Commissioner of Internal Revenue should align the agency's contingency plan with OMB guidance by including (1) plans for a potential prolonged shutdown; (2) flexibilities available to supervisors if furloughed employees were unable to return to work after the end of the shutdown; and (3) procedures for resuming program activities, including steps to ensure appropriate oversight and disbursement of funds upon the end of a shutdown. (Recommendation 3) The U.S. Trade Representative, in consultation with EOP as appropriate, should align the component's contingency plan with OMB guidance. This could be accomplished through (1) revisions to the EOP contingency plan or (2) creation of a separate USTR plan. (Recommendation 4) The Under Secretary for International Trade should document the component's shutdown processes, including roles and responsibilities, planning processes for potential shutdowns, and recall processes for furloughed employees during a shutdown. (Recommendation 5) The U.S. Trade Representative should document the component's shutdown processes, including roles and responsibilities, planning processes for potential shutdowns, and recall processes for furloughed employees during a shutdown.
(Recommendation 6) The Commissioner of CBP should develop internal controls to track and document which employees worked and what work was performed daily during a government shutdown. (Recommendation 7) The Commissioner of CBP should develop internal controls to limit access to physical workspaces to appropriate employees during a government shutdown. (Recommendation 8) The Commissioner of Internal Revenue should develop internal controls to limit access to physical workspaces to appropriate employees during a government shutdown. (Recommendation 9) The U.S. Trade Representative should develop internal controls to limit access to physical workspaces to appropriate employees during a government shutdown. (Recommendation 10) The Commissioner of CBP should develop internal controls to limit access to virtual workspaces to appropriate employees during a government shutdown. (Recommendation 11) The Commissioner of Internal Revenue should develop internal controls to limit access to virtual workspaces to appropriate employees during a government shutdown. (Recommendation 12) The Under Secretary for International Trade should develop internal controls to limit access to virtual workspaces to appropriate employees during a government shutdown. (Recommendation 13) The U.S. Trade Representative should, in consultation with EOP, develop internal controls to limit access to virtual workspaces to appropriate employees during a government shutdown. (Recommendation 14) Agency Comments and Our Evaluation We provided a draft of this report to Commerce, DHS, EOP, IRS, OMB, and USTR for review and comment. We received written comments from Commerce, DHS, and IRS, summarized below and reproduced in appendixes II, III, and IV. USTR provided comments via email, also summarized below. OMB did not provide comments, citing its focused efforts on addressing the national emergency response to the coronavirus pandemic. DHS, EOP, IRS, and USTR provided technical comments, which we incorporated as appropriate. Commerce agreed with all three recommendations directed to it and ITA, and stated that ITA has taken steps to address two of the recommendations. Commerce stated that ITA has documented its shutdown planning processes and recall processes for furloughed employees during a shutdown (recommendation 5). According to Commerce, ITA has also established and documented internal controls to limit virtual workspace access to excepted or exempt employees during a government shutdown (recommendation 13). In addition, Commerce stated that it will develop an action plan to address the recommendation to better align its contingency plan with OMB guidance (recommendation 1). DHS agreed with all four recommendations directed to it and CBP, and stated that it has begun to take steps to better address OMB guidance on contingency plans (recommendation 2). In addition, DHS stated that CBP plans to analyze existing systems to determine which is best suited to track and document employee work during a government shutdown and will ensure that the chosen system is available should a future shutdown occur (recommendation 7). For the recommendation on developing controls for physical workspaces (recommendation 8), DHS stated that because CBP does not have systems capable of efficiently restoring physical access for furloughed employees, it would have to reinstate employee access individually and the cost would be substantial. 
DHS stated that CBP plans to update procedures to ensure more comprehensive workspace access guidance for furloughed employees. With regard to the recommendation on developing controls for virtual workspace access (recommendation 11), DHS stated that CBP believes that furloughed employees must be able to passively monitor the status of the government shutdown and access important agency communications using DHS-issued electronic devices. Additionally, DHS stated that disabling and reactivating thousands of employee user accounts during a shutdown would pose a significant burden. DHS said that CBP plans to update shutdown procedures to clarify allowed use of DHS-issued electronic devices by furloughed employees. We agree that CBP should update procedures on workspace access as suggested, and continue to believe that physical and virtual access controls are important during shutdowns in order to prevent misuse of government resources. We encourage CBP to improve its systems so that it can efficiently implement such controls. IRS partially agreed with one recommendation addressed to it and disagreed with two others. IRS agreed with one element of our recommendation to include additional detail in its agency contingency plan (recommendation 3) and stated that it is in the process of adding procedures for resuming program activities following a government shutdown into its contingency plan. IRS did not agree with the other elements of the recommendation because it believes it has already addressed plans for a potential prolonged shutdown and flexibilities for supervisors if employees are unable to return to work at the end of a shutdown in its contingency plans. While we agree that IRS has included some details on these elements in its plans, we continue to believe that it should provide more detail, such as points in time when the furlough status of an employee may change, how many employees would be affected, and the legal basis for the changes, within its publicly available contingency plan to fully address these elements. IRS disagreed with our recommendations on developing controls for physical and virtual workspace access during a shutdown (recommendations 9 and 12). For both recommendations, IRS stated that it believes it has effective controls in place to manage physical and virtual workspace access during a shutdown. In addition, IRS said that it believes that implementing additional access controls does not justify the corresponding resource investments. We continue to believe that IRS should improve its access controls, which currently rely on managers and furlough letters to communicate limits on workspace access. While we recognize the costs of increased access controls, government shutdowns are unique events that require additional access controls in order to prevent potential misuse of government resources. In USTR's emailed comments, its Assistant U.S. Trade Representative for Administration neither agreed nor disagreed with the four recommendations addressed to USTR. The official, however, stated that USTR has already begun addressing our recommendations on aligning its contingency plan with OMB guidance (recommendation 4) and documenting its shutdown processes (recommendation 6), and has made EOP aware of the recommendations on developing controls for physical and virtual workspace access during a shutdown (recommendations 10 and 14). As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees, the Secretary of Commerce, the Acting Commissioner of U.S. Customs and Border Protection, the Acting Secretary of Homeland Security, the Commissioner of the Internal Revenue Service, the Acting Under Secretary for International Trade, the Director of the Office of Management and Budget, the U.S. Trade Representative, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

James R. McTigue, Jr.
Director, Strategic Issues

Appendix I: Objectives, Scope, and Methodology

This report assesses the extent to which (1) selected agencies’ contingency plans were consistent with applicable Office of Management and Budget (OMB) guidance, (2) selected agency components planned for a potential prolonged shutdown and changed operations during the shutdown, and (3) selected agency components’ shutdown policies and procedures were consistent with relevant internal control principles.

We selected four agency components that are under the jurisdiction of the Senate Committee on Finance and were affected by the fiscal year 2019 shutdown. When more than one agency component at an agency met these criteria, we selected the component that had the largest budget and the greatest planned number of employees performing excepted work during the shutdown. While the four components we selected are not generalizable to other agency components, they reflect variation in size, funding type, and justification for excepted work, and they serve as illustrative examples of a range of experiences. These selected agency components are U.S. Customs and Border Protection (CBP), Department of Homeland Security (DHS); Internal Revenue Service (IRS), Department of the Treasury (Treasury); International Trade Administration (ITA), Department of Commerce (Commerce); and Office of the United States Trade Representative (USTR), Executive Office of the President (EOP).

To address our first objective, we compared information in selected agencies’ government shutdown contingency plans to key information elements described in OMB guidance. Specifically, we identified 14 key information elements in the 2018 OMB Circular No. A-11 Section 124—Agency Operations in the Absence of Appropriations (Circular A-11), the guidance applicable at the beginning of the partial government shutdown that began on December 22, 2018. This document details the information agencies should include in their contingency plans, such as significant agency activities that will continue or cease during a shutdown, the number of employees who will continue to work during a shutdown, and necessary actions for resuming orderly operations after a shutdown. Three of our four selected agency components—CBP, ITA, and USTR—operated under an agency-wide plan. Therefore, we evaluated the fiscal year 2019 contingency plans for Commerce, DHS, and EOP. Each component of DHS has a non-public, for official use only, portion of the agency-wide plan, and we included CBP’s non-public portion in our evaluation. Because Treasury’s contingency plan did not cover IRS, we evaluated IRS’s contingency plans for this objective.
We also reviewed written responses from OMB and interviewed officials at selected agencies to understand the reasons for any discrepancies between the contingency plans and OMB guidance.

To address our second objective, we assessed the extent to which selected agency components planned for a potential prolonged shutdown—one longer than 5 days—as outlined by Circular A-11, and changed operations during the shutdown. We reviewed shutdown contingency plans and other planning documents at CBP, IRS, ITA, and USTR to determine agency component processes for proposing, reviewing, and approving operational changes during a government shutdown. We interviewed officials at these agency components to determine what operational changes components made during the fiscal year 2019 shutdown and the key factors that led to these changes.

To address our third objective, we assessed selected agency components’ shutdown processes to determine the extent to which the components followed relevant internal control principles in planning for the fiscal year 2019 government shutdown. We reviewed our Standards for Internal Control in the Federal Government (Internal Control Standards) and identified key principles related to agency components’ shutdown processes. Relevant internal control principles include designing and implementing appropriate policies and procedures and effectively communicating this information to stakeholders. These include policies and procedures to ensure an orderly shutdown process and compliance with applicable laws such as the Antideficiency Act (ADA). We developed a questionnaire for selected agency components based on these internal control principles that reflected practices we determined to be associated with effectively implementing the controls in the context of a government shutdown, such as documentation of shutdown processes or employee communication. We reviewed the results of this questionnaire, agency component contingency plans, and other internal planning documents, and we interviewed component officials to determine the extent to which components followed these internal control principles. We assessed the sufficiency of selected agency components’ internal controls based on whether the evidence gathered contained relevant details about a component’s shutdown processes that demonstrated the component would have reasonable assurance of achieving its shutdown objectives. While we assessed agency components’ shutdown processes, we did not assess the results of those processes, such as whether components correctly or appropriately categorized activities as excepted from the ADA. We interviewed officials at selected agency components to understand the reasons for any inconsistencies between component planning and decision-making processes and internal control principles. We also interviewed representatives of employee organizations at the agency components we reviewed to determine if communication of shutdown-related policies and procedures was timely, sufficient, and transparent.

In addition, we selected one program office within each reviewed agency component to identify illustrative examples of how components operationalized their shutdown processes. For CBP, IRS, and ITA, we selected the program offices with the largest budget based on available budget data. Based on this criterion, we selected CBP’s Office of Field Operations, IRS’s Wage and Investment division, and ITA’s Global Markets office.
Selection of these program offices provided a variety of justifications for excepted work and numbers of planned excepted employees. Because of USTR’s size, the component does not manage based on program offices, according to USTR officials; therefore, we did not select a program office within USTR. We reviewed documents and interviewed officials in these program offices to determine how they planned for the fiscal year 2019 government shutdown, communicated with employees, and recalled furloughed employees to work, among other things.

We conducted this performance audit from March 2019 to June 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Commerce

Appendix III: Comments from the Department of Homeland Security

Appendix IV: Comments from the Internal Revenue Service

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

James R. McTigue, Jr. at (202) 512-9110 or mctiguej@gao.gov.

Staff Acknowledgments

In addition to the individual named above, Danielle Novak, Assistant Director; Shelby Kain, Analyst-in-Charge; Alyssia Borsella; Kendall Chan; Jacqueline Chapin; Ann Czapiewski; Kristine Hassinger; J. Andrew Howard; Ulyana Panchishin; Steven Putansu; and Melissa Wolf made major contributions to this report. Ted Hu and Triana McNeil also contributed to the report.
Why GAO Did This Study

A lapse in appropriations resulted in the federal government partially shutting down from December 22, 2018, to January 25, 2019. GAO was asked to evaluate agency contingency plans and operations during the FY 2019 shutdown. This report assesses the extent to which selected agencies and selected components (1) had contingency plans that were consistent with applicable OMB guidance, (2) planned for a potential prolonged shutdown and changed operations during the shutdown, and (3) had shutdown policies and procedures consistent with relevant internal control principles. GAO selected CBP, IRS, ITA, and USTR as agency components for review because they are under the jurisdiction of the Senate Committee on Finance and were affected by the FY 2019 shutdown. GAO reviewed OMB's guidance, agencies' contingency plans, and other documentation. GAO interviewed agency and component officials.

What GAO Found

The Office of Management and Budget (OMB) issues shutdown guidance for agencies in Circular A-11. Of four selected agency components, three—U.S. Customs and Border Protection (CBP), the Internal Revenue Service (IRS), and the International Trade Administration (ITA)—operated in fiscal year (FY) 2019 under contingency plans that included most of the key information elements specified in Circular A-11. The fourth component—the Office of the U.S. Trade Representative (USTR)—operated under a plan, authored by the Executive Office of the President, that did not include a majority of the key information elements. OMB guidance instructs agencies to have plans in place for both short and prolonged—longer than 5 days—shutdowns. None of the four selected agencies' FY 2019 contingency plans fully addressed anticipated changes in the event of a prolonged shutdown. GAO found that IRS, ITA, and USTR internally discussed and planned for anticipated operational changes in the event of a prolonged FY 2019 shutdown. CBP officials said they only focused on short-term operational needs. Having a comprehensive plan for a potential prolonged shutdown would help provide clearer workforce expectations during any future shutdowns. Having sufficient internal controls, such as documented policies and procedures, in place prior to a shutdown can help agencies implement changes in day-to-day operations during a shutdown. Selected agency components all incorporated some internal controls in their shutdown-related activities. However, none of the agency components had controls for limiting both physical and virtual workspace access for employees during a shutdown, each citing the difficulty of implementing such controls. Having these controls in place would help components ensure that they operate consistently with their contingency plans and avoid misuse of government resources.

What GAO Recommends

GAO is making 14 recommendations, including that certain agency components improve contingency plans, document shutdown procedures, and improve controls for physical and virtual workspace access during a shutdown. CBP and ITA agreed with the recommendations directed to them; IRS partially agreed with one and disagreed with two; and USTR did not state whether it agreed or disagreed, but has begun taking steps to implement two recommendations.
Background

DOD’s acquisition of weapon system programs has been on our High Risk List since 1990 because DOD programs consistently fall short of cost, schedule, and performance expectations. Congress and DOD have long explored ways to curtail these cost, schedule, and performance problems, and both took related actions about a decade ago, with Congress passing the Weapon Systems Acquisition Reform Act of 2009 and DOD implementing its “Better Buying Power” initiatives. The Weapon Systems Acquisition Reform Act of 2009 aimed to improve the organization and procedures of DOD for the acquisition of major weapon systems, for example by revising the certifications that programs were expected to complete before approval for system development start. The new certifications included the need to conduct trade-offs among cost, schedule, and performance objectives and for independent verification of technology maturity. In 2010, DOD started its own acquisition reform initiatives through “Better Buying Power.” These reforms required DOD programs to conduct analyses of program affordability and set cost targets, among other things, which placed cost constraints on programs and encouraged programs to find cost improvements during program execution. These and other reforms championed sound management practices, such as realistic cost estimating, increased use of prototyping, and systems engineering. In 2016, we found that DOD was beginning to decrease the amount of cost growth in its major defense acquisition program portfolio.

Despite DOD’s improvements in cost control, however, members of Congress remained concerned that the DOD acquisition process was overly bureaucratic and too slow to deliver capability to the warfighter. Congress enacted numerous additional acquisition-related provisions in the National Defense Authorization Acts for Fiscal Year 2016 and subsequent years that addressed the processes with which DOD and the military departments acquire goods and services and encourage innovation. These provisions addressed a wide range of acquisition issues, such as the:

creation of new processes for oversight of major defense acquisition programs;

development of streamlined alternative acquisition paths; and

changes to DOD’s other transaction authority, which allows DOD to enter into agreements that generally do not follow a standard format or include terms and conditions required in traditional mechanisms, such as contracts or grants.

Congress also required, in the National Defense Authorization Act for Fiscal Year 2016, that DOD establish a panel, referred to as the “Section 809 Panel,” to identify ways to streamline and improve the defense acquisition system. The panel issued its final report in January 2019, which, together with its earlier reports, included a wide range of recommendations aimed at changing the overall structure and operations of defense acquisition.

DOD Acquisition Programs and Authorities

DOD acquisition policy defines an acquisition program as a directed, funded effort that provides a new, improved, or continuing materiel, weapon, or information system, or a service capability in response to an approved need. DOD Directive 5000.01, The Defense Acquisition System, provides management principles and mandatory policies and procedures for managing all acquisition programs. Oversight levels and procedures for DOD’s acquisition programs are outlined in DOD Instruction 5000.02, Operation of the Defense Acquisition System.
Traditionally, defense acquisition programs are classified into acquisition categories based on the value and type of acquisition. DOD’s most costly programs have historically been referred to as major defense acquisition or Acquisition Category I programs. Programs with lower costs are categorized as Acquisition Category II or III programs. The acquisition category of a program can affect oversight levels and procedures, such as what program information and documents are required and who is designated as the milestone decision authority. Among other responsibilities, the milestone decision authority approves entry of an acquisition program into the next phase of the acquisition process and is accountable for cost, schedule, and performance reporting.

Overview of DOD Weapon System Decision-Making Processes

DOD’s acquisition process includes three major milestones at which program offices provide information to or receive a waiver from the milestone decision authority. The milestone decision authority then makes a decision on whether the program is ready to transition to the next acquisition phase. The milestones normally represent transition points in programs at which there is a marked increase in the funding required for the program. Milestone A is the decision for an acquisition program to enter the technology maturation and risk reduction phase. Milestone B is the decision to enter the engineering and manufacturing development phase. Milestone C is the decision to enter the production and deployment phase. Programs may start at different milestones depending on the circumstances of the particular program, such as whether the technologies the program plans to use are mature. Some major defense acquisition programs, such as the Marine Corps’ Amphibious Combat Vehicle program and the Navy’s Next Generation Jammer-Mid Band program, entered the acquisition system at milestone A. Other programs, such as the Air Force’s Combat Rescue Helicopter program and the Army’s Armored Multi-Purpose Vehicle program, entered directly at milestone B without a milestone A because their technologies were considered mature by the Office of the Secretary of Defense and an independent review team, respectively. Figure 1 illustrates the key milestones associated with the defense acquisition system.

DOD’s acquisition policy encourages tailoring the acquisition process, including tailoring of documentation or information requirements. In previous work, we identified opportunities for DOD to tailor the documentation and oversight needed for major defense acquisition programs. In 2015, we found that 24 acquisition programs we surveyed spent, on average, over 2 years completing up to 49 information requirements for their most recent milestone decision. We found that DOD’s review process was a key factor that influenced the time needed to complete the information requirements. In total, the requirements averaged 5,600 staff days to document, yet acquisition officials considered only about half of the requirements as high value. We recommended that DOD eliminate reviews and information requirements that do not add value or are no longer needed. DOD agreed with both recommendations and took some actions through its Better Buying Power initiatives to streamline documentation and staff reviews. Among the information requirements that acquisition officials considered most valuable were those that support a sound business case.
A solid, executable business case provides credible evidence that (1) the warfighter’s needs are valid and that they can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources—such as technologies, design knowledge, funding, and time. Establishing a sound business case for individual programs depends on disciplined requirements and funding processes, and calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and increases the risk of cost and schedule overruns and performance shortfalls. The program’s business case typically includes documentation of the capabilities required of the weapon system, the strategy for acquiring the weapon system, sound cost estimates based on independent assessments, and a realistic assessment of technical and schedule risks.

DOD Weapon System Acquisition Program Oversight Roles and Responsibilities

Several entities at the enterprise level (meaning the Office of the Secretary of Defense, Joint Chiefs of Staff, and Joint Staff) and at the military department level play a role in the oversight of and budgeting for DOD weapon system acquisition programs. In general, at the enterprise level, the acquisition and budgeting processes are managed by subordinate offices within the Office of the Secretary of Defense. More specifically:

The Under Secretary of Defense for Research and Engineering is responsible for establishing policies on and supervising all aspects of defense research and engineering, technology development, technology transition, prototyping, experimentation, and developmental testing activities and programs, including the allocation of resources for defense research and engineering. This organization has a significant role in activities prior to milestone B, but also interacts with major defense acquisition programs throughout their life cycles with regard to technical risks. For major defense acquisition programs, the Under Secretary conducts assessments in areas such as technology maturity, interoperability, and cyber security.

The Under Secretary of Defense for Acquisition and Sustainment is responsible for establishing policies on and supervising all matters relating to acquisition (including (1) system design, development, and production; and (2) procurement of goods and services) and sustainment (including logistics, maintenance, and materiel readiness). This organization has certain oversight responsibilities for major defense acquisition programs throughout the acquisition process, such as collecting and distributing performance data. The Under Secretary is the Defense Acquisition Executive and serves as the milestone decision authority for certain major defense acquisition programs.

The Director, Cost Assessment and Program Evaluation and the Under Secretary of Defense (Comptroller) manage the annual budget preparation process for acquisition programs. These organizations have cost assessment and budgetary responsibilities, respectively, for major defense acquisition programs leading up to each milestone and once these programs have been fielded.

At the military department level, the service acquisition executive, also known as the component acquisition executive, is a civilian official within a military department who is responsible for all acquisition functions within the department and can serve as the milestone decision authority.
The following officials serve as the service acquisition executive for the military departments:

the Assistant Secretary of the Air Force (Acquisition, Technology, and Logistics) for the Air Force;

the Assistant Secretary of the Army (Acquisition, Logistics and Technology) for the Army; and

the Assistant Secretary of the Navy (Research, Development and Acquisition) for the Navy and the Marine Corps.

Selected Acquisition Oversight Reforms

We focused our review on five selected reforms from the National Defense Authorization Acts for Fiscal Years 2016 and 2017. Three of the reforms affect the processes related to DOD’s oversight of major defense acquisition programs, the fourth restructured acquisition oversight functions in the Office of the Secretary of Defense, and the fifth provides alternative acquisition pathways for programs that are not considered major defense acquisition programs and have an objective of being completed within 5 years. Table 1 identifies the source of the five reforms that we reviewed and provides a brief summary of each reform. For additional detail on the statute, amendments, and related DOD guidance we reviewed, see appendix II.

DOD Has Made Progress in Implementing Acquisition Oversight Reforms and Efforts to Reorganize Are Ongoing

We found that DOD has made progress implementing reforms that have affected the oversight of major defense acquisition programs. Decision-making authority for these programs has been realigned between the Office of the Secretary of Defense and the military departments. In addition, new processes are in place to improve DOD’s consideration of program cost, fielding, and performance goals and its assessment of technical risk, although questions remain about how they will be implemented. The Office of the Secretary of Defense has also restructured in an effort to increase innovation in the earlier stages of the acquisition process and reduce cost, schedule, and performance risks in later stages. While the restructuring has begun to take shape, additional steps remain to be completed, including developing charters and fully staffing new offices. These steps are important to determining how acquisition oversight roles within the Office of the Secretary of Defense—which had been executed by a single office for decades—will be divided and how new offices will be structured to effectively carry out their work.

DOD Has Implemented Reforms That Affect the Oversight of Major Defense Acquisition Programs

Milestone decision authority for most major defense acquisition programs now resides with the military departments, a reform generally required by section 825 of the National Defense Authorization Act for Fiscal Year 2016 for programs starting after October 1, 2016. According to data from DOD’s Defense Acquisition Visibility Environment system, as of March 2019, milestone decision authority was at the military department level for 80 of 89 major defense acquisition programs. The 80 programs include all six programs that started at milestone B or an equivalent milestone since this reform became effective on October 1, 2016, and 74 other programs that started before the reform became effective. The nine programs retained by the Office of the Secretary of Defense all began prior to the reform becoming effective and include programs that are high risk, joint, or have had significant cost or schedule growth, such as the F-35 Joint Strike Fighter program and the Army’s Integrated Air and Missile Defense program.
See appendix III for more information about milestone decision authority, including a list of the major defense acquisition programs as of March 2019 and the milestone decision authority for each. Prior to this reform going into effect, the Under Secretary of Defense for Acquisition, Technology and Logistics within the Office of the Secretary of Defense typically was the milestone decision authority for major defense acquisition programs until they entered the production and deployment phase—that is, for the milestone A, B, and C decisions. The Under Secretary then typically delegated milestone decision authority to the military departments after the milestone C decision. Under the new reform, the Secretary of Defense may designate an alternate milestone decision authority under certain circumstances. For example, the Secretary may determine that the program meets one of several characteristics outlined in statute, such as addressing a joint requirement or being critical to a major interagency requirement.

There are now substantially more major defense acquisition programs with decision authority at the military department level. This change resulted from both the statutory reform for newly started programs and changes to milestone decision authority for existing programs resulting from a separate review conducted by the Office of the Secretary of Defense after the reform became effective, wherein the military department was designated the milestone decision authority for approximately 20 programs. See figure 2 for trends in the level of milestone decision authority from 2012 to 2019.

Program cost, fielding, and performance goals. A second reform required DOD to establish a process to develop program cost, fielding, and performance goals for major defense acquisition programs that reach milestone A after October 1, 2017. The statute described the goals as follows: (1) the cost goal is for both procurement unit cost and sustainment cost, (2) the fielding goal is the date for initial operational capability, and (3) the performance goal is for technology maturation, prototyping, and a modular open system approach. Under DOD’s new goal-setting process, each major defense acquisition program will provide an options matrix to stakeholders—including the Under Secretaries of Defense for Research and Engineering and Acquisition and Sustainment, Cost Assessment and Program Evaluation, and the Joint Staff—which must include at least three options that represent differing assumptions about possible solutions, technical risks, cost, schedule, and affordability. Stakeholder responsibilities include gathering and distributing data and lessons learned and conducting or approving independent cost estimates. These stakeholders must be granted the access necessary to complete independent analysis in their areas of responsibility. This analysis will consider aggregated risk regarding technical feasibility, cost, schedule, and affordability, and will be submitted to the milestone decision authority. A goal establishment meeting will be held within 30 days of the program’s analysis of alternatives outbrief and will be co-chaired by the milestone decision authority and the Vice Chief of the pertinent military service(s) and supported by the stakeholders identified above.
DOD issued a policy for the process in November 2018, stating that stakeholders will complete independent analyses in their areas of responsibility to consider the aggregated risk regarding technical feasibility, cost, schedule, and affordability, which will be submitted to the milestone decision authority (typically at the military department level). The policy stated that it applies to all major defense acquisition programs that enter the acquisition process after October 1, 2017, without regard to what milestone initiates the program. The policy also stated that the Office of the Secretary of Defense will have the opportunity to consult with the milestone decision authority on revised goals if the program exceeds its initial cost or fielding goals prior to the next milestone or production decision. DOD acquisition policy already required programs to document objectives for system cost, schedule, and performance in an acquisition program baseline at milestone B, and affordability cost goals were to be set at milestone A. Under the new process, fielding and performance goals are established earlier, and all three goals (cost, fielding, and performance) are required to be established before funds are obligated for technology development, systems development, or production, rather than being set at specific program milestones. The new process also adds a meeting to review and discuss the goals before they are approved by the milestone decision authority. Officials from the Office of the Under Secretary of Defense for Acquisition and Sustainment told us that this new process is intended to consolidate existing information to inform earlier decisions on which investments the department wants to make.

As of March 2019, no major defense acquisition programs have held a milestone A since the statutory requirement became effective, and no major defense acquisition programs have had goals established under the new process. According to officials from the Office of the Under Secretary of Defense for Acquisition and Sustainment, no new programs have been required to have goals established since DOD’s policy for the process was issued in November 2018. These officials told us they rely on the milestone decision authority to notify them when goals need to be established and that the first programs expected to have goals established under the new policy are the Army’s Gator Landmine Replacement Program and the Air Force’s Mk21A Reentry Vehicle. Both programs are slated to go through the process in mid-2019.

Independent technical risk assessments. The Under Secretary of Defense for Research and Engineering is now responsible for conducting or approving independent technical risk assessments for major defense acquisition programs prior to milestones A and B and before production decisions. According to DOD’s December 2018 independent technical risk assessment policy, the assessments will consider the full spectrum of technology, engineering, and integration risk, including critical technologies and manufacturing processes, and the potential impacts to cost, schedule, and performance. The reform required the assessments for major defense acquisition programs reaching milestone A after October 1, 2017; no major defense acquisition programs have held a milestone A since that date.
DOD policy issued in December 2018 implementing the statute states that the assessments will be conducted for all major defense acquisition programs at each upcoming milestone throughout the acquisition process, effective December 3, 2018. As a result, the assessments will be conducted regardless of whether the program reached milestone A after October 1, 2017. As of March 2019, the Office of the Under Secretary of Defense for Research and Engineering had conducted eight independent technical risk assessments on major defense acquisition programs. One additional assessment, on the Infrared Search and Track Block II program, was delegated to the Navy to conduct, although the Office of the Under Secretary of Defense for Research and Engineering still approved the assessment.

While DOD acquisition guidance previously provided for similar types of assessments, they were not always required to be conducted or approved at the Office of the Secretary of Defense level for all major defense acquisition programs. DOD acquisition guidance previously provided for the Office of the Secretary of Defense to request broad program assessments related to systems engineering, including risk areas, at all milestones for major defense acquisition programs with milestone decision authority at the Office of the Secretary of Defense level. Additionally, all major defense acquisition programs were required to have a separate assessment of critical technology elements prior to entering the system development phase, or the production and deployment phase if the system enters the acquisition life cycle after system development. DOD’s December 2018 policy requires that independent technical risk assessments be conducted or approved at the Office of the Secretary of Defense level by the Office of the Under Secretary of Defense for Research and Engineering unless this responsibility is delegated, regardless of the level of the milestone decision authority.

Office of the Secretary of Defense Reorganization Is Ongoing and Many Key Leadership Positions Are Not Filled

The Office of the Secretary of Defense officially reorganized its acquisition organization on January 31, 2018, in response to Section 901 of the Fiscal Year 2017 National Defense Authorization Act. Under the reorganization, responsibilities of the former Under Secretary of Defense for Acquisition, Technology and Logistics were divided between two new offices—the Under Secretary of Defense for Research and Engineering and the Under Secretary of Defense for Acquisition and Sustainment (see fig. 3 and app. IV for organizational charts). According to the conference report accompanying the legislation, the priorities framing the conference discussions on reorganization included elevating the mission of advancing technology and innovation within DOD and fostering distinct technology and acquisition cultures. The report further states that the conferees expect that the Under Secretary of Defense for Research and Engineering would take risks, test, and experiment, and have the latitude to fail, as appropriate. Additionally, the report states that the conferees expect the Under Secretary of Defense for Acquisition and Sustainment to focus on timely, cost-effective delivery and sustainment of products and services, and to seek to minimize any risks to that objective. It is too early to say whether the goals of the reorganization have been realized.
In July 2018, the Deputy Secretary of Defense issued a memorandum outlining the overall organizational structures, roles, and responsibilities of the two new Under Secretary offices. Responsibilities of many prior subordinate offices were realigned to one of the two new Under Secretary offices as part of the reorganization. For example, systems engineering falls under the Under Secretary of Defense for Research and Engineering, and contracting policy and oversight falls under the Under Secretary of Defense for Acquisition and Sustainment. New offices or positions were also created during the reorganization. For example, the Office of the Under Secretary of Defense for Research and Engineering created eight assistant director positions to serve as resident experts in strategic technology areas, such as cyber, quantum science, and hypersonics. Similarly, the Office of the Under Secretary of Defense for Acquisition and Sustainment created an Assistant Secretary of Defense for Sustainment. Previously, sustainment activities were spread across several organizations headed by two Assistant Secretaries of Defense.

While foundational steps to stand up the two new Under Secretary offices have been taken, as of March 2019, reorganization actions were ongoing in two major areas: completing chartering directives that define the scope of responsibilities for the two new offices and hiring additional people for the new offices, including for several senior leadership positions.

Chartering directives: Officials from the Office of the Chief Management Officer originally expected charters for the two offices to be completed by January 2019, but progress has been delayed. According to DOD policy, chartering directives are required to define the scope of functional responsibilities and identify all delegated authorities for the chartered organizations. According to a July 2018 memorandum issued by the Deputy Secretary of Defense, the Chief Management Officer is to oversee the development of the charters. Officials from the Office of the Chief Management Officer stated that they are doing so with significant input from the Under Secretaries of Defense for Research and Engineering and Acquisition and Sustainment. These officials told us that the development of the charters has taken longer than expected because redistributing the responsibilities of a single office into two new offices was complicated due to the number of shared or partially overlapping interests. Officials from the Offices of the Under Secretaries of Defense for Research and Engineering and Acquisition and Sustainment now estimate the charters will be completed in July 2019 after department-wide coordination, though they said that time frame may be optimistic given the challenges to date. These officials also told us they expect that they will need to make additional changes to other existing acquisition policies and guidance to incorporate the new content of the chartering directives once complete.

Hiring additional employees: In order to stand up the two newly created organizations, on February 1, 2018, 516 civilian and military positions from the former Office of the Under Secretary of Defense for Acquisition, Technology and Logistics were divided between the two new Under Secretary offices.
Finalizing staffing for both offices has been a gradual process that will not be completed until at least fiscal year 2020 because of the need to: (1) reduce positions to meet statutorily directed cost-savings objectives; (2) realign positions between the two offices; and (3) hire additional staff. Table 2 provides additional detail on past and expected changes to authorized positions. Both Under Secretaries are still working to staff their offices, with approximately 30 percent of current positions vacant in the Office of the Under Secretary of Defense for Research and Engineering and 8 percent of current positions vacant in the Office of the Under Secretary of Defense for Acquisition and Sustainment. See figure 4 for the current status of staffing within both offices.

Both Under Secretaries have experienced challenges while staffing their offices. For example:

The Office of the Under Secretary of Defense for Acquisition and Sustainment has experienced challenges stemming from the need to meet required personnel reductions while also hiring staff to align with the revised priorities from the reorganization. As part of the restructuring, the office will absorb all of the 57 remaining civilian and military position reductions that were originally assigned to the former Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. These reductions will occur during both fiscal years 2019 and 2020. At the same time, officials said they are still working to hire staff with skills in needed areas such as data analytics. Officials said they are leveraging existing authorities such as voluntary early retirement authority and voluntary separation incentive payments to meet their targeted number of authorized positions by the end of fiscal year 2020.

Officials from the Office of the Under Secretary of Defense for Research and Engineering said their challenges have primarily been negotiating the appropriate number of positions for the organization and staffing the organization in a timely manner. For example, 13 positions are not currently available to be filled because they will not be transferred from the Office of the Under Secretary of Defense for Acquisition and Sustainment until fiscal year 2020. The officials also stated that there have been delays related to developing new position descriptions, revalidating existing position descriptions, and finding individuals with the right skill sets for positions.

Both offices have been delayed in filling key leadership positions. According to officials from these offices, vacant positions include the Deputy Director of Mission Engineering and Integration, the Director of Systems Engineering, and the Principal Director of Defense Pricing and Contracting. Senior officials from both offices told us that they have been unable to fill some vacant senior executive positions since the most recent Secretary of Defense resigned on December 31, 2018. The inability to fill these positions is due to the Office of Personnel Management’s general policy of suspending processing for senior executive service career appointments when an agency head leaves until a successor is appointed at the agency. As of March 2019, a new Secretary of Defense had yet to be confirmed.
Senior-level officials also told us that some decisions about structure and staffing may be held up until after these executive positions are filled, but that in the interim, they are moving forward with daily operations and in some instances have other employees acting in those roles.

Military Departments Are Using Middle-Tier Acquisition Pathways, but DOD Has Yet to Determine How Certain Aspects of Program Oversight Will Work

Military Departments Are Using Middle-Tier Acquisition Pathways to Execute Programs of Varying Costs and Complexity

As of March 2019, the military departments had begun using middle-tier acquisition pathways for over 35 rapid prototyping and rapid fielding programs under interim guidance issued by the Under Secretary of Defense for Acquisition and Sustainment and the military departments. However, DOD has yet to determine certain aspects of program oversight, including what information military departments should consider in selecting programs and what metrics and data the Office of the Secretary of Defense and military department leaders should use to assess performance.

The Departments of the Air Force, Army, and Navy have begun to execute over 35 unclassified and classified acquisition programs using new acquisition pathways distinct from the traditional DOD acquisition process. Section 804 of the National Defense Authorization Act for Fiscal Year 2016 required DOD to issue guidance establishing two new streamlined acquisition pathways for DOD—rapid prototyping and rapid fielding—under the broader term “middle tier of acquisitions.” According to the Joint Explanatory Statement accompanying the National Defense Authorization Act, the guidance was to create an expedited and streamlined “middle tier” of acquisition programs intended to be completed within 5 years. The Joint Explanatory Statement noted that middle-tier programs would be distinct from rapid acquisitions, which are generally completed within 6 months to 2 years, and from traditional acquisitions, which last much longer than 5 years. Statute lays out more specific intended time frames and expectations for programs using these two pathways:

The rapid prototyping pathway is to provide for the use of innovative technologies to rapidly develop fieldable prototypes to demonstrate new capabilities and meet emerging needs. The objective of a rapid prototyping program is to field a prototype that can be demonstrated in an operational environment and provide for a residual operational capability within 5 years of the development of an approved requirement.

The rapid fielding pathway is to provide for the use of proven technologies to field production quantities of new or upgraded systems with minimal development required. The objective of a rapid fielding program is to begin production within 6 months and complete fielding within 5 years of the development of an approved requirement.

Middle-tier acquisition pathways are distinct from the traditional acquisition system for major defense acquisition programs. These pathways allow for programs to be exempted from the acquisition and requirements processes defined by DOD Directive 5000.01 and the Manual for the Operation of the Joint Capabilities Integration and Development System. The statute does not identify a dollar limit for programs using middle-tier acquisition pathways.
Middle-tier programs are typically approved for initiation by the service acquisition executive, although Air Force policy also allows for smaller programs to be initiated by the program executive officer. Table 3 shows the number of unclassified programs initiated by the military departments as of March 2019.

The middle-tier programs initiated to date represent a range of products, dollar amounts, and complexity. For example, one of the smaller dollar value programs is an approximately $30 million Navy effort to develop a prototype rocket motor that would support extended ranges for an existing missile. One of the larger dollar value programs is a multibillion dollar Army effort to develop the next generation combat vehicle. The military departments generally require that these programs be funded through the traditional budget process, using DOD’s existing planning, programming, budgeting, and execution process. Based on estimated program costs reported by the military departments, we found that approximately half of the programs initiated to date would be categorized as major defense acquisition programs if they were not being pursued under a middle-tier pathway. In some cases, such as the Army’s Lower Tier Air and Missile Defense Sensor program, an existing program planned as a major defense acquisition program shifted to a middle-tier acquisition pathway. Appendix V includes a list of middle-tier acquisition programs started by the military departments as of March 2019.

DOD Has Issued Interim Guidance, but Has Yet to Determine Certain Aspects of Middle-Tier Program Oversight

Although DOD and the military departments have issued interim guidance for using middle-tier acquisition pathways, we found that DOD has not provided department-wide guidance on how certain aspects of program oversight will be conducted. DOD has yet to determine what types of business case information should be submitted to decision makers to help ensure well-informed decisions about program initiation and how program performance will be measured consistently.

DOD and the Military Departments Have Each Issued Interim Guidance

Section 804 of the National Defense Authorization Act for Fiscal Year 2016 required the Under Secretary of Defense for Acquisition, Technology and Logistics to establish guidance for middle-tier acquisitions. In response, the Under Secretary of Defense for Acquisition and Sustainment issued interim guidance in April 2018 that provided the military departments and other DOD components with the authority to implement middle-tier acquisition programs on an interim basis through September 30, 2019. The guidance laid out the broad purposes and requirements of middle-tier acquisition authorities and encouraged the military departments and other DOD components using middle-tier acquisition pathways to develop specific processes and procedures to implement the interim authority. Between April 2018 and September 2018, the military departments each issued their own implementing guidance, which provided additional details on how middle-tier programs would be selected and overseen within their department during the period of the interim authority.
Subsequently, the Under Secretary of Defense for Acquisition and Sustainment issued two additional interim guidance memorandums: the first in October 2018, which described how the Office of the Secretary of Defense and the Joint Staff would conduct oversight of the military departments’ use of middle-tier acquisition pathways, and the second in March 2019, which addressed sustainment planning considerations for programs using the rapid fielding pathway. The Director, Cost Assessment and Program Evaluation, also issued guidance in April 2019 that included a life-cycle cost estimating policy for programs using the rapid fielding pathway.

DOD Guidance Does Not Consistently Identify Business Case Elements to Be Developed and Considered for Program Selection

Statute requires that the guidance from the Office of the Secretary of Defense include a “merit-based process” for considering potential middle-tier programs, although the interim guidance does not describe what the process should include or what information decision makers should consider to assess merit, other than meeting the needs communicated by the Joint Chiefs of Staff and the combatant commanders. Guidance from each of the military departments provides additional detail on the program selection process, including describing generally the type of information decision makers should consider when selecting programs. However, neither the Office of the Secretary of Defense’s guidance nor the military departments’ guidance fully identifies key elements of a business case to be provided as part of the program initiation process.

Our past work has shown that in order to make sound decisions about initiating acquisition programs, it is important that decision makers have the information they need to assess the business case, including that (1) the warfighter need exists and can best be met with the chosen concept and (2) the concept can be developed and produced within existing resources. Information needed to establish a business case for a traditional acquisition program typically includes a requirements document (which provides information on the capabilities required of the weapon system); the strategy for acquiring the weapon system; sound cost estimates based on independent assessments; and a realistic assessment of risks, including those related to technology and schedule. For a middle-tier acquisition program, business case information would help decision makers make well-informed decisions, including assessing whether the program is likely to meet the objectives established in statute to complete a prototype with a residual operational capability (in the case of a rapid prototyping program) or complete fielding (in the case of a rapid fielding program) within 5 years of an approved requirement.

Programs using a middle-tier pathway are intended to be completed within 5 years, and guidance may provide for expedited and streamlined procedures. As a result, the documents that provide business case information for a middle-tier acquisition program may not need to be as detailed as those for a major defense acquisition program. These documents may also vary to some extent depending on whether a program is a rapid prototyping or a rapid fielding program. However, having this type of information available in some form at program initiation can help decision makers assess the soundness of a program’s business case at the time a decision is made to start a new program.
Oversight at this time is critical because, as we have previously reported, program initiation presents the greatest point of leverage in the program life cycle for decision makers. Table 4 provides additional detail about certain types of business case documentation that are to be considered at program initiation for middle-tier acquisition programs according to the Office of the Secretary of Defense and the military departments’ guidance.

Section 804 of the National Defense Authorization Act for Fiscal Year 2016 directed the Under Secretary of Defense for Acquisition, Technology and Logistics to establish guidance for middle-tier acquisitions within 180 days of enactment of the statute (which would have been May 2016), but guidance was not issued until April 2018. According to officials who were involved in efforts to develop the guidance, the Office of the Secretary of Defense circulated multiple iterations of draft guidance but was unable to reach agreement with the military departments because of concerns that the guidance was too burdensome. These officials told us that, as a result, the Under Secretary of Defense for Acquisition and Sustainment decided instead to issue broad interim guidance and allow each of the military departments and other DOD components to develop processes and procedures to implement the interim authority. As stated in the April 2018 interim guidance, the Under Secretary of Defense for Acquisition and Sustainment would develop final guidance for the department in 2019 based on lessons learned from the military departments and other DOD components. According to officials from the Office of the Under Secretary of Defense for Acquisition and Sustainment, the Under Secretary began the process to develop this final guidance in February 2019. The process was in its initial stages as of March 2019, and officials involved told us they hope to complete the final guidance by September 2019.

The business case information that programs provided to decision makers at initiation varied widely across the nine middle-tier acquisition programs we reviewed. We found that certain types of business case information, such as an assessment of schedule risk that would indicate whether a program could realistically be expected to be completed within the time frame objectives in statute, were often not completed at the time of program initiation. For example:

Six programs had approved requirements at program initiation. Three of these programs had requirements validated through DOD’s traditional requirements process prior to the decision to start under a middle-tier pathway. Two of these programs, both of which were Air Force programs, had previously planned to start as major defense acquisition programs. The third program, an Army program, had requirements based on those approved for an existing major defense acquisition program. The other three of these programs, all of which were Navy programs, had high-level requirements that described, for example, what environments the system should be tested in or what quantity should be fielded. These requirements were approved by the Navy’s Accelerated Acquisition Board of Directors, which includes the Chief of Naval Operations and the Assistant Secretary of the Navy for Research, Development and Acquisition, among other officials.

Three programs were still in the process of developing requirements at the time of program initiation.
Only one of the nine programs had an approved acquisition strategy at the time of program initiation. Officials from the other programs told us they planned to develop an acquisition strategy or were in the process of developing or updating one.

While all nine of the programs had developed at least a draft cost estimate at program initiation, only one of the nine programs had an assessment of its program cost estimate completed by the military department cost agency at the time of program initiation. Officials from three other programs said that an assessment by the military department cost agency was in progress or planned. Officials from the other five programs told us they had developed a draft cost estimate at program initiation that in some cases was still expected to change and that they did not plan for an assessment by the military department cost agency.

The programs varied in the extent to which they assessed risk at program initiation. Four programs had risk assessments that addressed schedule and technology risks, which are types of risks we have identified in our previous work as important to understanding a program’s business case. Two other programs had risk assessments that included either schedule or technology risks, but not both. Officials from the other three programs stated that they were still in the process of assessing risks and had yet to assess risks related to meeting statutory schedule objectives at the time of program initiation.

Without the Office of the Under Secretary of Defense for Acquisition and Sustainment identifying in its final guidance the minimum program information needed to help decision makers evaluate a program’s business case, DOD cannot ensure that the military departments are consistently considering these types of information. As a result, DOD is not well positioned to ensure that approved middle-tier acquisition programs represent sound investments and are likely to meet the objective of delivering prototypes or capability to the warfighter within 5 years.

DOD Has Yet to Identify Metrics to Monitor Program Performance in a Consistent Manner

The Office of the Secretary of Defense and the military departments generally collect program data, but we found that neither the Office of the Secretary of Defense nor the military departments have identified metrics that would allow them to use those data to measure and report on program performance in a consistent manner. Developing such metrics would allow senior leaders in the Office of the Secretary of Defense and the military departments to monitor and assess performance across the portfolio of middle-tier programs during program execution, including whether programs are on track to meet statutory objectives for rapid prototyping and rapid fielding. Table 5 provides additional detail on the extent to which guidance addresses the collection of program data and the identification of metrics to measure program performance.

The Office of the Secretary of Defense began collecting middle-tier program data from the military departments in November 2018 as part of an effort to ensure that middle-tier authority was being used appropriately within the department. However, the office has yet to determine what metrics it will use to measure program performance consistently across the portfolio.
Officials within the Office of the Secretary of Defense who are involved with collecting the data told us that they are still refining what data should be collected, determining how to standardize definitions to improve the consistency of data, and considering how to use the data collected to monitor program execution. For example, they are still trying to determine the appropriate triggers that would allow them to know that a middle-tier program may be experiencing cost or schedule challenges.

Similarly, guidance from two of the three military departments requires the collection of program data, but the military departments also have not identified metrics to consistently measure performance across programs. The Navy's guidance does not require the collection of program data or identify metrics to measure program performance. Interim guidance from the Air Force and the Army requires the collection of program data and also requires programs to develop metrics to measure performance, but these metrics are not required to be consistent across programs. Decisions about specific metrics to be reported are left to the discretion of the decision authority for each program, who is typically the service acquisition executive or a program executive officer. As a result, these metrics may not allow consistent measurement of performance across programs because, for example, programs may have a different starting point for reporting data, or may change the metrics that are being assessed at different points within the life of a program.

According to federal internal control standards, the ability of agency management to compare actual performance to planned or expected results throughout the organization and analyze significant differences is important to help ensure that the agency is meeting objectives and addressing risks appropriately. These standards also state that agency management should define objectives in quantitative or qualitative terms to permit reasonably consistent measurement of performance toward those objectives. For middle-tier acquisition programs, statute includes objectives related to fielding time frames for both rapid prototyping and rapid fielding programs. Additionally, for rapid prototyping, part of the objective is that the prototype fielded can be demonstrated in an operational environment and provide for residual operational capability.

Middle-tier acquisition programs are to be provided streamlined processes, including for program oversight. Decisions about how to measure program performance therefore should be considered in light of how to facilitate oversight without losing the benefits of the flexibilities offered by middle-tier pathways. However, without the Office of the Under Secretary of Defense for Acquisition and Sustainment identifying in its final guidance a minimum set of metrics that can be used to measure performance of programs across the military departments, DOD risks not knowing how the department's portfolio of middle-tier programs is progressing, including whether programs are on track to meet statutory objectives for rapid prototyping and rapid fielding. As a result, senior leaders in the Office of the Secretary of Defense and the military departments may lack insight needed to identify and address emerging challenges in a timely manner. This is particularly important given that the portfolio includes complex, costly programs that address important capability gaps for the department.
DOD Faces Challenges in Addressing Disagreements about Oversight Roles and Responsibilities, Improving Portfolio Management, and Assessing Effectiveness of Reforms

While DOD has made progress implementing individual reforms, it continues to face challenges that affect the implementation of the reforms we reviewed. First, we found that senior DOD leadership has not fully addressed disagreements about the division of acquisition oversight roles and responsibilities between the Office of the Secretary of Defense and the military departments. As a result, there have been continuing differences of opinion about how to implement specific reforms. Second, DOD has yet to address persistent portfolio management challenges that affect its ability to effectively manage its portfolio of weapon system investments. Lastly, DOD has yet to develop processes to assess the effectiveness of recent reforms. Without developing such processes, DOD officials will not be well positioned to assess whether reforms are having the intended effects, such as improving innovation and delivering capability to the warfighter more quickly, or if additional changes are necessary to achieve such outcomes.

Top DOD Leadership Has Not Fully Addressed Continuing Disagreements over the Division of Roles and Responsibilities for Acquisition Oversight

Top DOD leadership has not fully addressed disagreements that remain about the division of acquisition oversight responsibilities between the Office of the Secretary of Defense and the military departments. Our past work has shown that in times of significant organizational transformation, top leadership must set the direction, pace, and tone for the transformation. Personal involvement of these leaders in driving change, including the Secretary and Deputy Secretary, helps provide stability. Internal control standards for federal agencies also emphasize the importance of management communicating information down and across organizational levels in order to enable personnel to perform key roles in achieving objectives and addressing risks.

The Deputy Secretary of Defense has weighed in on the division of acquisition oversight responsibilities within the Office of the Secretary of Defense and has addressed specific roles and responsibilities for certain reforms. However, despite continuing disagreements about the division of oversight roles and responsibilities between the Office of the Secretary of Defense and the military departments, DOD's top leadership has not provided a detailed framework addressing the appropriate roles of each party for acquisition oversight.

Officials from the Office of the Secretary of Defense and the military departments we met with expressed different opinions on the appropriate oversight role of the Office of the Secretary of Defense. For example, the Under Secretaries of Defense for Research and Engineering and Acquisition and Sustainment both stated that in cases where the milestone decision authority is at the military department level, the military departments do not see the value in having the Office of the Secretary of Defense involved. This is consistent with concerns that officials from all three military departments have raised in speaking with us. Specifically, officials from all three departments raised concerns that the Office of the Secretary of Defense is overreaching on its oversight responsibilities in some cases, and creating new oversight processes that contradict the intent of recent reforms to speed up the acquisition process.
Implementation of several of the reforms we reviewed has resulted in disagreements between the Office of the Secretary of Defense and the military departments that have yet to be resolved. For example:

Cost, fielding, and performance goals. Despite the issuance of policy by the Deputy Secretary of Defense in November 2018 on the establishment of cost, fielding, and performance goals, military department officials have continued to express concerns that the process is too burdensome and involves too many stakeholders from the Office of the Secretary of Defense. These officials stated that Office of the Secretary of Defense involvement in programs with decision authority at the military departments, such as participation in meetings with the milestone decision authority to provide advice on cost, fielding, and performance goals, would slow down programs that other reforms were intended to accelerate. They added that they had expressed these concerns to the Office of the Secretary of Defense during the drafting of the policy, but they did not feel that their input was appropriately considered in the final policy. Officials from the Office of the Under Secretary of Defense for Acquisition and Sustainment stated that the analysis and meetings that involve the Office of the Secretary of Defense are ways for stakeholders to advise the milestone decision authority on program decisions based on information from existing oversight mechanisms, such as independent cost estimates and analyses of alternatives. Previously, this type of oversight was conducted via multiple meetings leading up to program milestones. The policy states that its procedures will be revisited in 6 months and lessons learned incorporated where needed.

Independent technical risk assessments. Debates about who should conduct independent technical risk assessments were elevated to the Deputy Secretary of Defense. Subsequently, the Deputy Secretary issued guidance in December 2018 to reiterate that the Under Secretary of Defense for Research and Engineering would conduct or approve these assessments for all major defense acquisition programs, although that responsibility may be delegated. However, despite the issuance of new guidance, there continue to be ongoing debates about when assessments will be delegated to the military departments. The December 2018 guidance does not include criteria for when responsibility for the assessments may be delegated. Officials from the Office of the Under Secretary of Defense for Research and Engineering said that decisions about whether to delegate assessments should be based primarily on the risk level of the program, but officials from military departments stated that these assessments should be conducted within the military department. Officials from the Office of the Under Secretary of Defense for Research and Engineering told us that they had convened a joint working group with the military departments in February 2019 to address this and other implementation issues related to independent technical risk assessments. In the meantime, nearly all assessments continue to be conducted by the Office of the Under Secretary of Defense for Research and Engineering.

Middle-tier acquisition. Office of the Secretary of Defense and military department officials also disagree on the extent to which the Office of the Secretary of Defense should weigh in on the appropriateness of a program using a middle-tier pathway.
DOD's October 2018 interim governance guidance provided that the Office of the Secretary of Defense may determine that specific programs were not appropriate for a middle-tier pathway. However, officials from the Air Force and the Army expressed concerns to us about whether that determination was appropriate to be made by the Office of the Secretary of Defense since, from their perspective, programs should be selected at the military department level. Office of the Secretary of Defense officials also told us that there are differences of opinion between them and the military departments on the appropriate amount of information that programs should report to the Office of the Secretary of Defense, including whether the same information should be provided by all middle-tier programs, regardless of expected program cost. As stated earlier, DOD is in the process of finalizing guidance for middle-tier acquisition programs, which could address these issues.

Documents that could outline the roles and responsibilities of the various parties for acquisition oversight are still being developed. For example, as discussed earlier, officials from the Offices of the Under Secretaries of Defense for Research and Engineering and Acquisition and Sustainment told us that chartering directives for these offices, which are expected to be completed in July 2019, may address to some extent how the offices should work together and with the military departments and other external organizations. In addition, officials from the Office of the Under Secretary of Defense for Acquisition and Sustainment told us that while reforms are currently being implemented under multiple different policies and guidance documents, DOD Instruction 5000.02 will be substantially revised, including to reflect the latest reforms. When completed, the instruction is expected to provide further detail on how oversight activities will be carried out by various acquisition entities. Officials stated that they hoped to complete a version of the revision of DOD Instruction 5000.02 by the end of 2019, but they acknowledged that this estimate was optimistic and that it might take longer than expected to come to agreement on this policy.

Without a comprehensive framework from top leadership in the near term that addresses acquisition oversight roles and responsibilities in detail, DOD's ability to continue with reform implementation, including its ability to finalize policies that could clarify roles and responsibilities, may be slowed by ongoing disagreements. In the longer term, without resolving these issues, DOD cannot ensure that it is achieving the balance between oversight and accountability and efficient program management that senior leadership expects as an outcome of acquisition reform. With too little oversight, acquisition programs may not be properly scrutinized before they are started, which could lead to poor program cost and schedule outcomes. Alternatively, if new oversight processes are too burdensome, DOD may not achieve the expected benefits of streamlining its acquisition processes.

DOD Has Yet to Address Persistent Weapon System Portfolio Management Challenges

As part of this review, we also assessed DOD's efforts to implement our previous portfolio management recommendations and identified opportunities and challenges related to portfolio management that DOD may face as it continues to implement acquisition reforms.
Our past work has shown that when investments are not managed as a portfolio at the enterprise level (meaning at the level of the Office of the Secretary of Defense, Joint Chiefs of Staff, and Joint Staff), the military departments plan to acquire more weapons than DOD can afford, sometimes develop potentially duplicative solutions to common needs, and do not always choose an optimal mix of investments to ensure the department can maintain its technological edge in the future. Realigning roles and responsibilities for decisions related to weapon system programs between the Office of the Secretary of Defense and the military departments could lead to further questions about who is ultimately responsible and accountable for portfolio management decisions if leadership roles are not clearly defined.

Officials we met with from the Office of the Secretary of Defense told us that questions remain about the division of responsibilities between the Office of the Secretary of Defense and the military departments for making these types of portfolio management decisions. They told us that concerns we had previously identified about the division of decision-making authority for portfolio management had yet to be addressed during the implementation of recent acquisition reforms, and that in some cases, the reforms had led to additional questions. For example, the Under Secretary of Defense for Research and Engineering told us that while the statute that created his position as part of the restructuring of the former Office of the Under Secretary of Defense for Acquisition, Technology and Logistics assigns him responsibility for allocating resources for defense research and engineering, he does not have control over the research and engineering budget, so in actuality the military departments decide how to prioritize their investments.

We found in August 2015 that DOD has had difficulty implementing portfolio management at the enterprise level in part due to diffuse decision-making responsibilities that make it difficult to determine who is empowered to make enterprise-level weapon system investment decisions. At that time, we recommended that DOD revise its portfolio management directive in accordance with portfolio management best practices. We also recommended that the Secretary of Defense designate the Deputy Secretary of Defense or some appropriate delegate responsible for the directive's implementation, among other recommendations. DOD partially concurred with the recommendations, but the planned actions DOD identified at the time of our report did not fully address the issues we identified. For example, DOD stated that it did not plan to revise its portfolio management directive as we recommended, but instead planned to rescind it and direct stakeholders to participate in portfolio management through the requirements, acquisition, and budget processes. In response, we expressed concern that this approach could reinforce the stove-piped governance structure that we found to be an impediment to integrated portfolio management. As of March 2019, DOD had yet to implement our recommendations. An official from the Office of the Under Secretary of Defense for Acquisition and Sustainment told us that DOD is revising its portfolio management directive, but that there was not yet an estimated completion date.
We are not making new recommendations on portfolio management in this report, but we continue to believe that DOD should implement our prior recommendations in order to improve its portfolio management capabilities. See appendix VI for additional details on our assessment of DOD's progress in this area.

DOD Has Yet to Develop Processes to Assess the Effectiveness of Acquisition Reforms

DOD is beginning to monitor the implementation of certain reforms, but has yet to establish processes to assess the overall effectiveness of its reform efforts. Collectively, the reforms offer the potential for DOD to significantly reduce the time needed to approve and field acquisition programs by allowing the military departments additional opportunities to tailor documents needed for approval and limiting oversight by the Office of the Secretary of Defense. Ultimately, DOD anticipates that this opportunity will improve the speed at which new capabilities are delivered to the warfighter.

Our prior work has identified steps that agencies, such as DOD, can take to help ensure successful implementation of reform efforts, including establishing clear outcome-oriented goals and performance measures, and putting in place processes to collect the data and evidence needed to effectively measure progress toward those goals.

The Office of the Secretary of Defense has taken some initial steps to collect data that may help to measure the outcomes of a few reforms, but has yet to determine goals or processes for assessing the overall effectiveness of the reforms. For example, as previously discussed, the Office of the Under Secretary of Defense for Acquisition and Sustainment began initial efforts to collect middle-tier acquisition program data, such as cost and schedule data, from the military departments in November 2018. Officials from that office told us that once they address reliability concerns with the data they are receiving, such as ensuring that programs report schedule data in a consistent fashion, they anticipate that they will be able to use the data to better understand the military departments' use of middle-tier acquisition pathways.

However, according to officials we spoke with from the offices of the Under Secretaries of Defense for Research and Engineering and Acquisition and Sustainment, DOD has not determined how it will assess whether the reforms are collectively resulting in an acquisition process that is more efficient or how it will measure their effect on cost and schedule outcomes. These officials told us that it is important to have data to assess the effect of recent acquisition policy and organizational changes, but they have not determined specifically who will do the assessment, how it will be done, and what data will be needed. They told us that as a part of the reorganization of the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, they are still in the process of assessing data gaps and needs within the newly formed organizations and that this type of analysis needs to be completed before they determine how they will assess recent reforms. We recognize that assessing the cumulative effect of recent acquisition reforms on the acquisition process and on the cost and schedule performance of the major defense acquisition program portfolio could take several years because a critical mass of programs will need to go through the new acquisition processes.
In the interim, however, determining how an assessment of reforms will be conducted is an important first step in determining whether the reforms are having their intended effect. If DOD officials wait too long to plan for how the department will assess the effect of recent acquisition reforms, including identifying who will be responsible for the assessment and what data will be needed, they may miss the opportunity to collect data from the beginning of implementation needed to measure progress. As a result, they may not be informed about early indications of improvements or problems in the cost, schedule, and performance of programs.

Conclusions

Recent acquisition reforms have given DOD a significant opportunity to focus on delivering innovative capability to the warfighter more quickly and to reduce bureaucratic processes that had built up over time. While DOD has made progress in implementing these reforms, continued attention from top leadership would help ensure that the progress the department has made is not unnecessarily slowed or halted.

Middle-tier acquisition will require careful consideration as the department proceeds with the development of final guidance. Middle-tier programs are generally exempt from traditional acquisition and requirements processes, but they may still be large, expensive programs critical to the department's ability to meet its mission. Identifying the types of business case elements decision makers should consider when initiating programs would improve the department's ability to ensure that the programs the military departments select are sound investments and likely to succeed using a middle-tier acquisition pathway. Identifying metrics to track performance consistently across the portfolio of middle-tier programs will give senior leaders the information they need, once programs have started, to assess whether middle-tier acquisition programs are performing well and positioned to meet statutory objectives.

The department also faces challenges that affect the implementation of the reforms we reviewed. These sweeping changes have resulted in some disagreements about oversight roles and responsibilities between the Office of the Secretary of Defense and the military departments that have not been fully resolved. Clear communication from top leadership of a framework for oversight roles and responsibilities that is detailed enough to address areas of continued disagreement would help the department to move forward with effective implementation of the reforms. Developing an approach to assess the effects of recent acquisition reforms is also critical so that DOD can monitor whether reforms are collectively having the effect of speeding up the acquisition process without unintended negative consequences on the cost and performance of acquisition programs.

We also continue to believe that DOD should address our past recommendations to clarify and strengthen roles and responsibilities at the enterprise level for making portfolio management decisions, to make sure that its investments are strategy-driven, affordable, and balance near- and long-term needs. In fact, these recommendations may take on more importance for DOD in light of the implementation of acquisition reforms that will further diffuse responsibility for initiating and overseeing acquisition programs.
Recommendations for Executive Action

We are making the following four recommendations to DOD:

The Secretary of Defense should direct the Under Secretary of Defense for Acquisition and Sustainment to identify in final guidance the types of business case elements potential middle-tier acquisition programs should develop and decision makers should consider at program initiation to assess the soundness of programs' business cases, including whether programs are well positioned to meet statutory objectives. (Recommendation 1)

The Secretary of Defense should direct the Under Secretary of Defense for Acquisition and Sustainment to determine and identify in final guidance for middle-tier acquisition programs the metrics that will be used to assess the performance of middle-tier acquisition programs across the military departments, including whether programs are meeting statutory objectives. (Recommendation 2)

The Secretary of Defense should ensure that a comprehensive framework that clarifies the roles and responsibilities of the Office of the Secretary of Defense and the military departments for acquisition oversight is communicated by senior leadership. This framework should be detailed enough to address areas of continued disagreement among key stakeholders and serve to inform the department's revisions of other acquisition policies such as DOD Instruction 5000.02. (Recommendation 3)

The Secretary of Defense should develop a plan for how the department will assess the effect of recent acquisition reforms, including identifying who will be responsible for the assessment and what data will be needed. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this product to DOD for comment. In its comments, reproduced in appendix VII, DOD concurred with our four recommendations. DOD also provided technical comments with regard to improving the clarity of the discussion of certain reforms and providing additional context about military departments' oversight practices for middle-tier acquisition programs, among other issues. We incorporated DOD's technical comments as appropriate.

In its written comments, DOD described planned actions to address our recommendations. Specifically, in response to our first recommendation, to identify the types of business case elements that should be considered by decision makers for middle-tier programs at program initiation, DOD stated that it expects to identify these business case elements in its final guidance on middle-tier programs, which it expects to complete in September 2019. In response to our second recommendation, to identify metrics that will be used to assess the performance of middle-tier programs, DOD stated that it plans to determine performance metrics in coordination with the release of its final guidance on middle-tier programs. DOD expects to release this guidance in late 2019. In response to our third recommendation, for senior leadership to clarify acquisition oversight roles and responsibilities, DOD stated that these roles and responsibilities will be finalized through the issuance of chartering directives and updated acquisition policy; issuance is expected by the end of 2019. Finally, in response to our fourth recommendation, to plan for assessing the effects of acquisition reforms, DOD stated that it has included a division in the Office of the Assistant Secretary of Defense for Acquisition to analyze and assess this and other high-level oversight and policy issues.
DOD's planned actions to address our first, second, and fourth recommendations, if implemented effectively, should address the intent of our recommendations. With regard to our third recommendation, however, we do not believe that the steps outlined in DOD's written comments are likely to fully address the disagreements about acquisition oversight roles and responsibilities that we identified in the report. We acknowledge in the report that DOD plans to issue chartering directives and re-issue DOD Instruction 5000.02 as part of its efforts to outline the roles and responsibilities of various parties for acquisition oversight, as DOD reiterated in its written comments. However, without a comprehensive framework to inform the revisions of acquisition policies, such as DOD Instruction 5000.02, DOD's ability to finalize these policies may be hindered by the disagreements between the Office of the Secretary of Defense and the military departments that we identified in our report. These disagreements are persistent and focused on fundamental acquisition oversight issues. Simply issuing chartering directives and finalizing policy as planned may not be enough to ensure that areas of disagreement are resolved and that officials within the Office of the Secretary of Defense and the military departments have a shared understanding of an acquisition oversight framework for the entire department that will serve as the basis for any policy. Furthermore, without senior leadership within DOD communicating this framework to the Office of the Secretary of Defense and the military departments in sufficient detail to address areas of disagreement among key stakeholders, disagreement will likely persist and the intended impacts of reforms could be stymied.

We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Appendix I: Objectives, Scope, and Methodology

This report addresses (1) the progress the Department of Defense (DOD) has made to implement selected oversight reforms for major defense acquisition programs; (2) how DOD has used middle-tier acquisition pathways and the extent to which DOD has developed guidance on middle-tier program oversight; and (3) challenges DOD faces related to reform implementation. The conference report for the National Defense Authorization Act for Fiscal Year 2018 and the Senate Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 also contained provisions for GAO to review project, program, and portfolio management standards within DOD. Appendix VI of this report includes our assessment of DOD's efforts to implement our previous portfolio management recommendations and identifies opportunities and challenges related to portfolio management that DOD may face as it continues to implement acquisition reforms. We focused our review on five selected reforms from the National Defense Authorization Acts for Fiscal Years 2016 and 2017 that we determined substantially affected DOD's oversight of acquisition programs.
Our selections were informed by our analysis of the National Defense Authorization Acts for Fiscal Years 2016 and 2017 and our past work on factors affecting the oversight of major defense acquisition programs. We also interviewed officials from the Office of the Secretary of Defense and the military departments to obtain their perspectives on the most significant reforms to acquisition oversight and considered those perspectives when we made our selections. For the purposes of our report, when we refer to a reform, we are referring to a specific change to DOD's acquisition oversight processes or roles and responsibilities. Two of the reforms we reviewed align with sections of the National Defense Authorization Act for Fiscal Year 2016, and the other three align with one or more sections from the National Defense Authorization Act for Fiscal Year 2017. Table 6 identifies the specific sections or subsections that we reviewed for each reform. We also reviewed related amendments to these sections from National Defense Authorization Acts for subsequent years to determine whether the National Defense Authorization Act sections we reviewed, or sections of the U.S. Code that were added by sections we reviewed, had been modified since being signed into law. When we identified amendments, we assessed DOD's progress in implementing the statute as amended. Appendix II provides additional details about the original legislative requirements and amendments, if any, to each of the reforms we selected.

To identify the progress DOD has made to implement selected oversight reforms for major defense acquisition programs, we analyzed three selected reforms that affect processes related to DOD's oversight of major defense acquisition programs: designating the military departments to be the milestone decision authority; performing independent technical risk assessments; and establishing cost, fielding, and performance goals. We also analyzed one reform that restructured acquisition oversight functions in the Office of the Secretary of Defense. We analyzed the associated National Defense Authorization Act sections and reviewed related acquisition policies and guidance from the Office of the Secretary of Defense and the military departments (see app. II for a list of key guidance we reviewed for each reform). For each reform, we analyzed DOD and military department policies and guidance to determine steps DOD and the military departments had taken to implement the reforms. We also compared new or updated policies and guidance, when available, with prior policies and guidance to determine how oversight roles, responsibilities, and processes had changed for DOD's major defense acquisition programs.

To obtain additional insight into how designation of milestone decision authority had changed as a result of recent reforms, we requested and analyzed data provided by DOD about the milestone decision authority levels for the major defense acquisition program portfolio. To assess the reliability of these data, we discussed the data and sources used to compile them with DOD officials, reviewed the data for errors, reviewed related documentation on programs with milestone decision authority at the military department level, and compared the data when possible to other sources, such as publicly available lists of major defense acquisition programs.
On the basis of these steps, we determined that the data we used were sufficiently reliable to identify changes in the level of milestone decision authority over time for major defense acquisition programs. To assess changes resulting from the reorganization of the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, we also reviewed updated organizational charts and staffing and vacancy data for the successor offices (the Office of the Under Secretary of Defense for Research and Engineering and the Office of the Under Secretary of Defense for Acquisition and Sustainment) and compared these to past organizational charts and staffing data for the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. To determine the current percentage of positions vacant in each office, we compared actual data for filled positions as of March 2019 to the total number of vacant positions as of the same point in time. The vacancy numbers do not include vacant positions that are slotted for future reduction or transfer. To assess the reliability of these data, we requested and reviewed written responses from DOD officials on the reliability of the data and the sources used to compile them, reviewed the data for logical inconsistencies, and compared the data when possible to other sources, such as related data provided for other time frames. On the basis of these steps, we determined that the data we used were sufficiently reliable to identify the current staffing status for the two new Under Secretary offices.

To determine how DOD has used middle-tier acquisition pathways, we reviewed the relevant statute and guidance, and obtained information from the military departments about the number and types of programs using middle-tier acquisition pathways as of March 2019. We analyzed the guidance from the Office of the Under Secretary of Defense for Acquisition and Sustainment and the military departments to determine how they were implementing the statute with regard to selection of programs and program oversight. We also compared the guidance with our past work on elements of business cases that should be completed at program initiation to determine what elements were addressed by DOD guidance.

At each of the military departments, we judgmentally selected three middle-tier programs to review in additional detail. We selected programs to obtain a range of program costs (including programs that were above the equivalent threshold cost for designation as a major defense acquisition program if the program were not using a middle-tier acquisition pathway, as well as those below that threshold) and types of programs being executed under middle-tier acquisition pathways (such as space, artillery, software, missile, and ground vehicle programs). Programs we selected include:

Air Force: Hypersonic Conventional Strike Weapon; Next Generation Overhead Persistent Infrared Space; Protected Tactical Enterprise Service.

Army: Extended Range Cannon Artillery; Optionally Manned Fighting Vehicle; Rapid Opioid Countermeasures System.

Navy: STANDARD Missile-2 Block IIIC; STANDARD Missile-6 Block IB Phase IA Rocket Motor; STANDARD Missile-6 Block IB Phase IB All Up Round.

For these programs, we collected and analyzed additional information such as acquisition decision memorandums, acquisition strategies, program cost and schedule estimates, and risk assessments.
We also interviewed or received detailed written responses from program officials that addressed issues such as how decisions were made to execute programs under middle-tier acquisition pathways and how oversight for programs was being conducted. Further, we reviewed interim guidance from the Office of the Under Secretary of Defense for Acquisition and Sustainment and the military departments to determine how DOD planned to measure middle-tier program performance. We compared DOD and the military departments' guidance on developing metrics and collecting data to assess middle-tier program performance to relevant internal controls related to consistent measurement of program performance.

To assess the challenges DOD faces with regard to reform implementation, we reviewed policy and guidance issued by top DOD leadership that outlined roles and responsibilities for the Office of the Secretary of Defense and the military departments with regard to acquisition oversight and compared them to leading practices for leadership involvement in agency transformations that we had identified in prior work. We also collected and analyzed information about DOD's actions taken to implement prior recommendations we have made to improve portfolio management at DOD and analyzed the acquisition oversight reforms we included in this review to identify opportunities and challenges related to portfolio management that DOD may face as it continues to implement acquisition reforms. Lastly, we reviewed DOD's plans and ongoing efforts to develop performance measures and collect data to assess the effects of acquisition reforms and compared these efforts with success factors for reform implementation identified in our past work.

For all objectives, we also conducted interviews with officials from the Office of the Secretary of Defense, the Joint Staff, and the military departments to obtain additional insight into implementation status, implementation challenges, and future plans, including:

Office of the Secretary of Defense: the Office of the Under Secretary of Defense for Research and Engineering, the Office of the Under Secretary of Defense for Acquisition and Sustainment, the Office of the Under Secretary of Defense (Comptroller), the Office of the Chief Management Officer, the Office of the Director of Operational Test and Evaluation, the Office of the Director of Cost Assessment and Program Evaluation, and the Office of the General Counsel.

Joint Staff: Force Structure, Resource and Assessment Directorate, J-8.

Military departments: for each of the three military departments (Air Force, Army, and Navy), we interviewed acquisition officials from the Service Acquisition Executive's office, requirements officials supporting the Chief of Staff of the respective armed force, and officials from the military department cost agencies. At the Air Force we also interviewed officials from the Office of the General Counsel.

We conducted this performance audit from March 2018 to June 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Summary of Original Legislation and Amendments to Acquisition Reforms Reviewed by GAO

Appendix III: Milestone Decision Authority for Major Defense Acquisition Programs as of March 2019

Programs with milestone decision authority at the Air Force level (26): Advanced Extremely High Frequency Satellite; AIM-120 Advanced Medium Range Air-to-Air Missile; Air Force Intercontinental Ballistic Missile Fuze Modernization; Airborne Warning and Control System Block 40/45 Upgrade; B61 Mod 12 Life Extension Program Tailkit Assembly; F-15 Eagle Passive Active Warning Survivability System; Family of Advanced Beyond Line-of-Sight Terminals; Global Positioning System III Follow-On Production; Military Global Positioning System User Equipment Increment 1; MQ-9 Reaper Unmanned Aircraft System; Small Diameter Bomb Increment II; Space Based Infrared System High; Space Fence Ground-Based Radar System Increment 1; Wideband Global SATCOM.

Programs with milestone decision authority at the Army level (18): AH-64E Apache New Build; Airborne & Maritime/Fixed Station Joint Tactical Radio System; AN/TPQ-53 Counterfire Target Acquisition Radar; Common Infrared Countermeasure; Guided Multiple Launch Rocket System/Guided Multiple Launch Rocket System Alternative Warhead; Handheld, Manpack, and Small Form Fit Radios; M88A2 Heavy Equipment Recovery Combat Utility Lift Evacuation System; MQ-1C Gray Eagle Unmanned Aircraft System; Patriot Advanced Capability-3 Missile Segment Enhancement; RQ-7B Shadow Tactical Unmanned Aircraft System; Warfighter Information Network-Tactical Increment 2.

Programs with milestone decision authority at the Navy level (36): Advanced Arresting Gear; AGM-88E Advanced Anti-Radiation Guided Missile; Air and Missile Defense Radar; Amphibious Combat Vehicle Phase 1 Increment 1; CVN 78 Gerald R. Ford Class Nuclear Aircraft Carrier; DDG 1000 Zumwalt Class Destroyer; DDG 51 Arleigh Burke Class Guided Missile Destroyer; H-1 Upgrades (4BW/4BN); Joint Precision Approach and Landing System; LHA 6 America Class Amphibious Assault Ship; Littoral Combat Ship Mission Modules; LPD 17 San Antonio Class Amphibious Transport Dock; MQ-4C Triton Unmanned Aircraft System; MQ-8 Fire Scout Unmanned Aircraft System; Offensive Anti-Surface Warfare Increment 1 (Long Range Anti-Ship Missile).

Appendix IV: Organizational Charts from Before and After the Reorganization of the Office of the Secretary of Defense

Appendix V: Programs Using Middle-Tier Acquisition Pathways as of March 2019

Appendix VI: Department of Defense (DOD) Efforts to Implement Portfolio Management Best Practices

The conference report for the National Defense Authorization Act for Fiscal Year 2018 and the Senate Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 included provisions for GAO to review project, program, and portfolio management standards within DOD. This appendix includes our assessment of DOD's efforts to implement our previous portfolio management recommendations and identifies opportunities and challenges related to portfolio management that DOD may face as it continues to implement acquisition reforms. Portfolio management is used by leading commercial companies to help ensure their investments are optimized to meet customer needs within available resources.
Portfolio management focuses on products collectively at an enterprise level and involves evaluating, selecting, prioritizing, and allocating limited resources to projects that best accomplish strategic or organizational goals. It is also a vehicle to make a wide variety of decisions, including capability and funding trade-offs, to achieve the optimal capability mix for a given level of investment. For DOD, effective portfolio management can help to ensure that weapon system investments are strategy-driven and affordable and balance near- and long-term needs.

Take a hypothetical example in which DOD starts with 10 programs and $50 billion to invest. Without portfolio management, program managers may seek to get the most that they can out of each of the 10 programs, without assessing their aggregate contributions to defense. Using portfolio management, DOD executives would look at different combinations of and approaches to the 10 programs to determine what, collectively, would provide the best capabilities for $50 billion. This would enable executives to decide, for example, whether it is better to concentrate more investment in seven programs rather than fund all 10 as best as possible. In another example, if a program began to have cost or performance problems, portfolio management would consider whether the other programs in the portfolio could address the requirements of the problematic program rather than just putting more money into it.

Portfolio management activities at DOD are carried out at both the enterprise level and the military department level, and responsibilities are divided among the requirements community, the acquisition community, and the budget community. At the enterprise level, the primary offices responsible for portfolio management are the Joint Staff (representing the requirements community), the Under Secretary of Defense for Research and Engineering and the Under Secretary of Defense for Acquisition and Sustainment (representing the acquisition community), and the Director of Cost Assessment and Program Evaluation (representing the budget community).

Portfolio Management Best Practices

In 2007, we identified several best practices for portfolio management, which we revalidated in 2015. These leading practices encourage organizations to: assess product investments collectively from an enterprise level, rather than as independent and unrelated initiatives; continually make go/no-go decisions through a gated review process to rebalance portfolios based on investments that add the most value; use an integrated approach to prioritize needs and allocate resources in accordance with strategic goals; rank and select investments using a disciplined process to assess the costs, benefits, and risks of alternative products; empower leadership to make investment decisions and hold leadership accountable for investment outcomes; and provide sustained leadership for portfolio management. Portfolio management best practices and the Project Management Institute's portfolio management standards also state that organizations should conduct regular reviews to adjust to strategic changes or changes in the mix of products within a portfolio, among other reasons.
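The hypothetical example above, choosing how to get the best capability mix from 10 programs and $50 billion, can be read as a simple budget-constrained selection problem. The following is a minimal illustrative sketch in Python; it is not part of GAO's or DOD's analysis, and the program names, costs, and capability scores are invented for illustration only:

```python
# Hypothetical illustration of enterprise-level portfolio selection.
# All names, costs, and capability scores below are invented.
from itertools import combinations

BUDGET = 50.0  # $50 billion, as in the hypothetical example above

# (cost in $ billions, capability score) for 10 notional programs
programs = {
    "Program 1": (9, 8), "Program 2": (8, 9), "Program 3": (7, 5),
    "Program 4": (6, 7), "Program 5": (6, 4), "Program 6": (5, 6),
    "Program 7": (5, 3), "Program 8": (4, 5), "Program 9": (3, 2),
    "Program 10": (2, 3),
}

def best_portfolio(programs, budget):
    """Return the affordable subset of programs with the highest
    total capability score, by enumerating every combination."""
    best, best_score = (), 0
    for r in range(1, len(programs) + 1):
        for subset in combinations(programs, r):
            cost = sum(programs[p][0] for p in subset)
            score = sum(programs[p][1] for p in subset)
            if cost <= budget and score > best_score:
                best, best_score = subset, score
    return best, best_score

portfolio, score = best_portfolio(programs, BUDGET)
print(f"Fund {len(portfolio)} of {len(programs)} programs "
      f"(total capability score {score}): {', '.join(portfolio)}")
```

With these invented numbers, funding all 10 programs would cost $55 billion, so the sketch instead funds the nine programs that together yield the most capability for $50 billion. Brute-force enumeration works only because the notional portfolio is tiny; the point is that the comparison is made across combinations of programs at the enterprise level rather than program by program.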
From a DOD perspective, portfolio reviews can help increase return on taxpayers' investments in weapon systems in a number of ways, such as: helping to ensure investments align with national security and military strategies; prioritizing the most important investments; selecting the optimum mix of investments; identifying and eliminating unwarranted duplication; monitoring programs' health to determine whether changes to the portfolio are warranted; and determining whether investments are affordable.

Previous GAO Findings and Recommendations on Portfolio Management at DOD

We have previously reported that DOD was not effectively using portfolio management to optimize its weapon system investments. In 2015, we identified several factors that inhibited DOD's ability to do so, including fragmented governance, a lack of sustained leadership, and a perceived lack of decision-making authority at the enterprise level. We also found that DOD's portfolio management policy was dated, not fully consistent with best practices, and not being implemented by the department, in part due to changes in leadership priorities. Further, DOD's enterprise-level requirements, acquisition, and budgeting communities (meaning those at the Office of the Secretary of Defense, Joint Chiefs of Staff, and Joint Staff level) were not consistently conducting portfolio reviews or collaborating to integrate key information. As a result, we reported that DOD may have been missing opportunities to better leverage its resources and identify investment priorities that best reflect DOD-wide needs.

We recommended that DOD update its portfolio management policy; designate a senior official responsible for its implementation; conduct annual portfolio reviews that integrate key information from the requirements, acquisition, and budget processes; and invest in analytical tools to support its portfolio management efforts. DOD partially concurred with the recommendations, but the planned actions DOD identified at the time of our report did not fully address the issues we identified. As of March 2019, DOD had yet to implement our recommendations from 2015 (see table 16 for details of implementation status).

Recent Acquisition Reforms Offer Opportunities to Improve Portfolio Management at DOD but Could Also Exacerbate Existing Challenges

It is too soon to assess the effect of the acquisition reforms we reviewed on DOD's portfolio management efforts because a critical mass of programs has not yet gone through the new acquisition processes. Depending on how the department implements these reforms, some aspects of these reforms could help to address previously identified deficiencies in portfolio management in the department. For example:

Officials in the Office of the Under Secretary of Defense for Acquisition and Sustainment told us that now that milestone decision authority for major defense acquisition programs has largely shifted from the Office of the Secretary of Defense to the military departments, they expect that they will have more time to focus on portfolio-level issues, such as identifying how systems need to work together to fill capability gaps, since they are less involved in the details of individual programs. We previously reported that DOD's processes were too focused on optimizing individual investments rather than considering investments across the department.
The process developed by DOD to establish cost, fielding, and performance goals brings together officials from DOD's acquisition, requirements, and budget communities, the three key entities with responsibility for portfolio management, to provide advice on the establishment of program goals. We previously reported that DOD's enterprise-level processes, organizations, and decision makers oversee weapon system investments generally as stove-pipes and not as an integrated whole. While the process assesses programs on an individual basis rather than collectively from an enterprise level as called for by portfolio management best practices, it may still provide additional shared insight across the acquisition, requirements, and budget communities to help assess portfolios in a more integrated fashion at an enterprise level.

However, other aspects of certain reforms have the potential to exacerbate challenges we have previously identified with DOD's portfolio management approach if not actively managed. For example:

Realigning roles and responsibilities for decisions related to weapon systems programs could lead to further questions about who is ultimately responsible and accountable for portfolio management decisions if leadership roles are not clearly defined. We previously reported that DOD's governance structure for portfolio management was fragmented, in part as a result of widely dispersed decision-making responsibilities for weapon system investments. We found that this dispersion of responsibility made it difficult to determine who was empowered to make enterprise-level weapon system investment decisions and who could be considered portfolio managers. According to portfolio management best practices, leadership should be clearly defined and held accountable for outcomes.

Programs under middle-tier acquisition pathways have fewer requirements to report program information to offices within the Office of the Secretary of Defense and the Joint Staff than major defense acquisition programs. For example, middle-tier acquisition programs are generally exempted from the Joint Capabilities Integration and Development System for requirements development. Therefore, Joint Staff officials may have less information about program requirements than they would for a major defense acquisition program. Office of the Secretary of Defense officials told us they are working with the military departments and other stakeholders to determine what information is needed for oversight and portfolio management for middle-tier acquisition programs. Office of the Secretary of Defense and Joint Staff officials told us that guidance issued by the Under Secretary of Defense for Acquisition and Sustainment in October 2018, which gives the Office of the Secretary of Defense and Joint Staff formal roles in a governance process, may help to ensure sufficient insight.

Developing a common set of portfolios to facilitate integrated portfolio analysis may become more difficult for DOD. We previously reported that the requirements, acquisition, and budget communities at DOD were using different portfolio constructs, meaning that they defined their portfolios differently and did not use a standard approach to group investments into portfolios. We identified the use of different approaches as a barrier to taking an integrated approach to prioritize needs and allocate resources in accordance with strategic goals, as called for by portfolio management best practices.
For example, the requirements community uses eight joint capability areas for examining warfighter needs, acquisition portfolios vary by military department, and budget data are organized into 11 major force programs. In our prior work, many officials at DOD said that using a wide variety of portfolio constructs is necessary and sometimes beneficial given the different roles and perspectives of the organizations involved. However, as notionally illustrated in figure 8, the different communities need to go through an extensive mapping exercise when they want to analyze their portfolios from another perspective, for example, examining funding associated with joint capability areas.

With the reorganization of the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, officials from the Offices of the Under Secretaries of Defense for Research and Engineering and Acquisition and Sustainment told us that portfolio management activities that used to be conducted by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics are now split between their offices. Officials from the Office of the Under Secretary of Defense for Research and Engineering told us that they were still in the process of determining what portfolio construct they would use to group investments for portfolio management purposes. If that office decides to use a different portfolio construct than other entities, that decision will further complicate the already complex process of mapping portfolios together in order to perform an integrated portfolio analysis. Officials from both offices told us that they were working on a pilot effort to conduct portfolio management by focusing on DOD's missions rather than programs, which could help to standardize the portfolio constructs if the approach is accepted on a wider scale.

Appendix VII: Comments from the Department of Defense

Appendix VIII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Cheryl Andrew (Assistant Director), Marie Ahearn, Peter W. Anderson, David Dornisch, Anne McDonough, Melissa Pope, Scott Purdy, Juli Steinhouse, Sara Sullivan, Anne Louise Taylor, Alyssa Weir, and David Wishard made key contributions to this report.
Why GAO Did This Study

Amid concerns about the ability of DOD's acquisition process to keep pace with evolving threats, Congress included numerous reforms in recent National Defense Authorization Acts that could help to streamline acquisition oversight and field capabilities faster. GAO was asked to examine DOD's efforts to implement these reforms. This report addresses (1) the progress DOD has made implementing selected oversight reforms related to major defense acquisition programs; (2) how DOD has used middle-tier acquisition pathways; and (3) challenges DOD faces related to reform implementation. GAO reviewed five reforms: milestone decision authority designation; cost, fielding, and performance goals; independent technical risk assessments; restructuring of acquisition oversight offices; and middle-tier acquisition. GAO analyzed applicable statutes and implementing guidance, collected information from DOD about the number and types of middle-tier acquisition programs, reviewed relevant documentation, and interviewed DOD officials.

What GAO Found

The Department of Defense (DOD) has made progress in implementing reforms to restructure the oversight of major defense acquisition programs. As a result of one of these reforms, decision-making authority for many programs shifted from the Office of the Secretary of Defense to the military departments (see figure). Questions remain about how some reforms GAO reviewed will be carried out. For example, no programs have yet been required to have cost and fielding goals set under DOD's new process, and DOD has formed a working group to determine when to delegate risk assessments to the military departments. DOD also began using new pathways referred to as middle-tier acquisition to rapidly prototype and field new weapon systems. Middle-tier programs are expected to field capabilities within 2 to 5 years. As of March 2019, military departments were using this authority for 35 unclassified programs (see table).

Source: GAO analysis of Department of Defense data. | GAO-19-439

DOD has yet to fully determine how it will oversee middle-tier acquisition programs, including what information should be required to ensure informed decisions about program selection and how to measure program performance. Without consistent oversight, DOD is not well positioned to ensure that these programs, some of which are multibillion dollar acquisitions, are likely to meet expectations for delivering prototypes or capability to the warfighter quickly. DOD also continues to face implementation challenges, including one related to disagreements about oversight roles and responsibilities between the Office of the Secretary of Defense and the military departments. Senior DOD leadership has not fully addressed these disagreements. As a result, DOD is at risk of not achieving an effective balance between oversight and accountability and efficient program management.

What GAO Recommends

GAO is making four recommendations, including that DOD should identify the types of information needed to select and oversee middle-tier acquisition programs consistently, and clarify the roles and responsibilities of the Office of the Secretary of Defense and the military departments for acquisition oversight. DOD concurred with GAO's recommendations and described actions planned to address them.
Background

Filing Requirements for Exempt Organizations

A Form 990-series return or notice must be filed with IRS by most organizations exempt from income tax under Internal Revenue Code section 501(a), and by certain political organizations and nonexempt charitable trusts. TE/GE uses Form 990 reporting for promoting compliance and enforcing federal tax law for tax-exempt organizations (see appendix II for a copy of the Form 990 and a list of its schedules). Form 990 asks for information about an organization such as: employees, governance, and compensation; revenue and expenses; assets and liabilities; employment tax compliance; and specific organizational issues, such as lobbying by charities and private foundations. TE/GE redesigned the Form 990 for the first time in nearly 30 years for tax year 2008, and has made subsequent changes to the form (see appendix III for a summary of the changes). For tax year 2017, which is the most recent year of completed filing data, organizations filed 319,183 Forms 990. Beyond the basic Form 990, other versions include:

- Form 990-EZ, Short Form Return of Organization Exempt from Income Tax. This form reduces the filing burden on small tax-exempt organizations. Organizations with less than $200,000 in gross receipts and less than $500,000 in total assets may use it. For tax year 2017, 232,764 Forms 990-EZ were filed.

- Form 990-N, Electronic Notice (e-Postcard) for Tax-Exempt Organizations Not Required to File Forms 990 or 990-EZ. Most small organizations whose annual gross receipts are normally $50,000 or less may file Form 990-N. For tax year 2017, 652,280 Forms 990-N were filed.

- Form 990-PF, Return of Private Foundation. In addition to private foundations, nonexempt charitable trusts treated as private foundations are required to file Form 990-PF. For tax year 2017, 113,658 Forms 990-PF were filed.

Certain larger organizations are required to electronically file their returns. The Taxpayer First Act of 2019 requires all organizations to electronically file Forms 990 for tax years beginning after July 1, 2021. TE/GE can assess financial penalties for failing to file a required Form 990. If an exempt organization is an employer or generates unrelated business income, additional tax reporting requirements may apply, such as for employment tax or unrelated business income.

Several TE/GE Entities Are Involved in Examination Selection Decisions

A 2017 TE/GE reorganization created CP&C to provide a centralized approach to compliance planning, examination selection and assignment, and planning and monitoring activities. CP&C has three groups, as follows:

1. Issue Identification and Special Review identifies and develops issues for examinations or compliance activities and certain criteria for examination selection.

2. Classification and Case Assignment uses IRS staff known as "classifiers" to review returns for examination under different examination sources (see appendix IV). Classification is the process of determining whether a return should be selected for compliance activities, what issues should be the primary focus of the compliance activity, and the type of compliance activity that should be conducted.

3. Planning and Monitoring develops an annual work plan and monitors performance. The work plan details the number of examination starts, closures, and other measures. It develops classification requests to ensure that enough returns are available to meet work plan goals.
TE/GE's Compliance Governance Board (Governance Board) oversees TE/GE's compliance program, including CP&C operations such as approving priority issue areas, known as compliance strategies. The Governance Board also reviews program goals, considers metrics and reporting, and reviews the performance of compliance strategies. The Governance Board has five TE/GE executives plus counsel who are voting members, as well as three non-voting members. The Exempt Organizations examinations group is responsible for compliance activities. Examinations have various outcomes for an organization. The most severe outcome is revocation of tax-exempt status. Taxes, such as employment or excise taxes, may be assessed as a result of an examination. In fiscal year 2019, approximately $131 million in taxes were assessed. TE/GE conducts compliance contacts (non-examination correspondence such as compliance checks and soft letters) that are used to handle some compliance issues. For example, compliance checks determine whether specific reporting or filing requirements have been met. A "soft letter" notifies an organization of changes in tax-exempt law or potential compliance issues. A response to these letters is not required. TE/GE also reviews tax-exempt hospitals for compliance with certain community benefit requirements. In fiscal year 2019, TE/GE closed 1,470 compliance checks, sent 3,955 soft letters, and closed 750 hospital reviews. Compliance checks and hospital reviews can result in an examination, while responses to soft letters may result in a compliance check.

Exempt Organizations Identified for Examination Originate from Many Sources

TE/GE identifies exempt organization returns for examination from many sources and categorizes examinations into three groups, known as portfolios: (1) Data Driven Approaches, (2) Referrals and Other Casework, and (3) Compliance Strategies. All three rely on data, to some extent, to make decisions on selecting returns for examination.

Data Driven Approaches Portfolio

This portfolio uses analytical models and queries based on quantitative criteria to identify potential examinations. TE/GE has three separate models that review exempt organization data from Forms 990, 990-EZ, and 990-PF for compliance. The models "score" returns for examination based on potential noncompliance. The Form 990, 990-EZ, and 990-PF models have 354 unique queries. For purposes of this report, a query reviews databases to identify responses on returns that may indicate noncompliance because they do not meet certain criteria or expected values, such as exceeding a dollar threshold. Exempt Organizations examination staff developed many of the queries, based on information collected on the Form 990 after it was redesigned for tax year 2008, according to TE/GE officials. As queries were developed, staff tested and used them to identify certain potentially noncompliant populations and to identify returns that were flagged by multiple queries. Starting in fiscal year 2016, TE/GE began using queries in models. The models use a scoring system that applies weights, or points, to each query result to generate a score for a return; for the Form 990 model, scores have ranged from zero to more than 50. The models also screen out returns that are approaching a statute of limitations date, or whose filing organization is not active or has a current or recent examination history.
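To make the scoring mechanics concrete, the following is a minimal sketch of a weighted-query scoring model of the kind described above. It is illustrative only: the query names, tests, point values, and data layout are hypothetical placeholders, not TE/GE's actual queries or code.

    # Minimal sketch of a weighted-query scoring model (hypothetical queries).
    # Each query tests one condition on a return's reported fields and carries
    # a point value; a return's score is the sum of points for queries it hits.
    QUERIES = {
        # name: (points, test applied to one return's fields)
        "missing_required_schedule": (10, lambda r: r["schedule_required"] and not r["schedule_filed"]),
        "officer_comp_vs_expenses":  (5,  lambda r: r["officer_comp"] > 0.5 * r["total_expenses"]),
        "revenue_totals_mismatch":   (1,  lambda r: r["revenue_part_i"] != r["revenue_part_viii"]),
    }

    def score_return(ret):
        """Return (score, list of query names that hit) for one return."""
        hits = [name for name, (_, test) in QUERIES.items() if test(ret)]
        return sum(QUERIES[name][0] for name in hits), hits

    # Example: a return that hits the first and third queries scores 11 points.
    example = {"schedule_required": True, "schedule_filed": False,
               "officer_comp": 100_000, "total_expenses": 900_000,
               "revenue_part_i": 500_000, "revenue_part_viii": 450_000}
    print(score_return(example))  # (11, ['missing_required_schedule', 'revenue_totals_mismatch'])

A scheme like this makes the later findings easier to interpret: because points simply sum, two queries that test the same condition will double-count it, and the point value attached to a query's category directly determines how likely a return is to clear the score threshold.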
Since November 2017, staff have been able to submit potential compliance issues for consideration through an online submission portal for Governance Board approval. CP&C has the option of considering whether these ideas result in model changes, according to IRS officials. Twice a year, each model is run using the latest data and generates a Model Score Sheet (MSS). The MSS is a ranked list of returns that score above a minimum threshold. A classifier uses the ranking to identify returns for potential examination. Although the models screen for examination status and statute of limitations, a TE/GE official said the classifier also checks whether the statute of limitations date is near and whether the organization recently had undergone an examination or compliance check, as well as whether the return was identified under another selection method. This official explained that a classifier checks the criteria because conditions may have changed since the model's last run. The classifier selects returns to fulfill a stocking plan, which identifies the number and type of returns to be examined to meet work plan requirements. See figure 1.
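A compact sketch of this classification step, from MSS to stocking plan, follows. The threshold, the statute-of-limitations buffer, and the field names are assumptions for illustration; figure 1 describes the actual process only at a high level.

    from datetime import date, timedelta

    MIN_MSS_SCORE = 10                    # hypothetical inclusion threshold
    STATUTE_BUFFER = timedelta(days=365)  # hypothetical "too close" cutoff

    def build_mss(scored_returns):
        """Rank returns scoring at or above the threshold, highest first."""
        eligible = [r for r in scored_returns if r["score"] >= MIN_MSS_SCORE]
        return sorted(eligible, key=lambda r: r["score"], reverse=True)

    def classify(mss, stocking_plan_count, today=None):
        """Re-check conditions that may have changed since the model run."""
        today = today or date.today()
        selected = []
        for r in mss:
            if r["statute_date"] - today < STATUTE_BUFFER:
                continue  # statute of limitations date is too near
            if r["recent_exam_or_check"] or r["picked_by_other_source"]:
                continue  # avoid duplicate or repeat compliance contacts
            selected.append(r)
            if len(selected) == stocking_plan_count:
                break     # stocking plan for this return type is filled
        return selected

One consequence worth noting: because the classifier simply walks down a ranked list, anything that shrinks the pool of fresh high scorers, such as a long gap between model runs, pushes selection toward lower-scoring returns, a pattern discussed later in this report.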
Aside from the three models, TE/GE also uses other methods and data to identify and develop compliance work. The Data Driven Approaches portfolio includes approaches that TE/GE developed in partnership with IRS's RAAS division. The partnership began in 2016 and continues today, according to IRS officials. The portfolio also includes some of the queries that TE/GE ran prior to fiscal year 2016 for examination selection. Some of these examinations remained open as of fiscal year 2019.

Referrals and Other Casework Portfolio

Although not all of the returns selected for examination in this portfolio rely on data for examination selection, we describe them all below.

- Referrals. Referrals are complaints about exempt organization noncompliance made by third parties, including the public and other IRS offices or divisions.
- Post Determination Compliance. Sampling and queries are used to identify organizations that file Form 1023-EZ.
- Claims. Claims are requests for tax refunds, adjustments of tax paid, or credits not previously reported or allowed.
- Form 990 Queries (pre-model). These queries were run prior to fiscal year 2016. Some of these examinations remained open as of fiscal year 2019.
- Training. TE/GE uses these examinations, selected based on various methods, to teach examiners.
- Other Projects. TE/GE initiated these examinations under older compliance projects, using a variety of selection methods.

Compliance Strategies Portfolio

The Compliance Strategies portfolio consists of compliance issues that originated from a Compliance Issue Submission Portal for TE/GE staff. The strategies are approved by the Governance Board, which results in adding the compliance strategy to the work plan. In fiscal year 2019, TE/GE closed examinations under three compliance strategies, including private foundation loans and for-profit entities that converted to 501(c)(3) organizations. Returns are selected using sampling or other uses of data. Table 1 shows examinations closed for the three portfolios. Once an examination is underway, an examiner may expand it to include an organization's returns for other tax years or other types of returns, such as employment tax returns. IRS refers to these additional examinations as "pick-ups," each of which is counted as a separate examination. Examiners must obtain manager approval to expand an examination.

Examiners are required to check that an organization filed all returns that are required. If the examiner finds that a return was not filed, such as an employment tax return, and is unable to secure the return, he or she may prepare a "dummy" return called a substitute for return (SFR). The organization's activities, records, and documents may then be examined. In 2017, TE/GE hired a contractor to assess aspects of the exempt organization process for examination selection, with a focus on the Form 990 model. In January 2018, the contractor released a report on the development and operation of the models. The contractor released a second report in July 2018 on the Form 990 model performance. The contractor found the model was not always identifying the "next best case" as TE/GE intended because scores did not consistently predict certain measures of noncompliance. Across both reports, the contractor made 17 recommendations, which we discuss later in this report (see appendix V). As of March 2020, TE/GE implemented one recommendation on model update submissions and part of another on hiring assessments. In September 2019, TE/GE initiated another study with the same contractor, with a planned release of the report in September 2020, on developing alternatives to the Form 990 model.

Over Half of Exempt Organizations Selected for Examinations Are Identified Using Data, with No Assurance That the Models Produce Better Outcomes

Reliance on Data for Examination Selection Has Increased in Recent Years

Since the Form 990 model was first run for fiscal year 2016, the percentage of closed examinations that were identified by using data, such as through models or queries, has increased each year, as shown in figure 2. Almost half of these examinations are from the models. This increased reliance on using data in selecting returns for examination offers potential efficiencies. For example, using data to find possible noncompliance could mean fewer steps for staff who classify returns. Ultimately, this could allow TE/GE to shift staff from classifying returns to doing compliance activities, such as examinations to confirm any actual noncompliance. Another potential efficiency would be selecting more examinations that find changes to the return. To measure the outcomes of examinations, TE/GE computes a "change rate," the percentage of closed examinations with a change to the return. In general, a higher change rate indicates that more examinations found noncompliance. Examinations selected using data have a slightly better change rate than other selection sources (84 percent versus 82 percent) for closures in fiscal years 2016 through 2019.

The Form 990 Model's Contribution to Improving Change Rates Is Not Clear

Similar to all examinations that used data, the change rate for examinations selected using data through the Form 990 model (87 percent) was higher than the change rate for other selection sources (82 percent) in fiscal years 2016 through 2019. However, we found evidence that the changes identified in examinations did not clearly result from using the Form 990 model's scoring system. Specifically:

- The model has not improved change rates compared to pre-model Form 990 queries.
- A higher model score is not associated with a higher change rate.
- Most examination changes credited to the model come from pick-up returns and SFRs that examiners identify rather than from primary returns identified by the model score.
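Before turning to each of these points, the underlying arithmetic is simple enough to sketch. The following illustration computes change rates by selection source and the score-to-outcome relationship from a table of closed examinations; the column names and example records are hypothetical, not TE/GE's data.

    import pandas as pd

    # Hypothetical closed-examination records. 'changed' is 1 if the
    # examination resulted in a change to the return, else 0.
    closures = pd.DataFrame({
        "source":      ["model", "model", "model", "model", "other", "other"],
        "model_score": [12, 18, 25, 47, None, None],
        "changed":     [1, 0, 1, 1, 1, 0],
    })

    # Change rate = closed examinations with a change / all closed examinations.
    change_rate_by_source = closures.groupby("source")["changed"].mean()

    # For model-selected returns, test whether higher scores predict changes:
    # group change rates by score band and compute the overall correlation.
    scored = closures.dropna(subset=["model_score"])
    rate_by_band = scored.groupby(
        pd.cut(scored["model_score"], bins=[0, 15, 30, 45, 60]),
        observed=True)["changed"].mean()
    correlation = scored["model_score"].corr(scored["changed"])

On GAO's analysis of the actual closure data, an exercise like this shows change rates that stay essentially flat as scores rise, which is the basis for the findings that follow.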
Form 990 Model Scoring Has Not Resulted in Higher Change Rates, Compared with Pre-Model Queries

The scoring generated by the Form 990 model has not improved change rates compared with the Form 990 queries that TE/GE used prior to the model. The change rates for both the Form 990 model and the pre-model queries, for fiscal years 2016 through 2019, were 87 percent. Similarly, for the last 2 fiscal years, the change rate for all Form 990 models was roughly equivalent to the change rate for other selection sources of exempt organization examinations. As shown in table 2, the models had a slightly higher change rate in fiscal year 2018, and a slightly lower change rate in 2019, compared to the other sources.

Higher Model Score Is Not Associated with Higher Change Rate from Examination

Form 990 model scores for returns do not consistently predict examination change rates, based on our analysis of examination closures from the model's first run in 2016 through fiscal year 2019; the scores better predicted the rate at which returns were selected for examination. See figure 3. The figure shows little relationship between model scores and change rates; change rates remained relatively flat as model scores increased. While change rates were slightly higher for the less than 1 percent of returns scoring 45 or above relative to lower-scoring returns, TE/GE examined only 65 returns during fiscal years 2016 through 2019 that scored this high. The overall correlation between model scores and change rates is -0.02. A TE/GE official said that it is not difficult to find a small issue on a return, which allows for a change regardless of score. To attempt to measure the severity of an examination change, TE/GE developed a weighted disposal score (WDS). However, TE/GE does not have documented criteria or justifications for how the weights were developed. A TE/GE official acknowledged that TE/GE has not used WDS because of questions about how consistently the weights have been developed. If WDS were to be used as a measure, TE/GE would need to ensure the adequacy of the support for the related weights and scores. According to TE/GE's fiscal year 2020 Program Letter, the model relies on quantitative criteria, "which allows TE/GE to allocate resources that focus on issues that have the greatest impact." To the extent that a higher model score does not predict a higher change rate, the model is not selecting returns with the greatest impact. Further, taxes assessed per return also indicate that examinations are not having the greatest impact. For fiscal years 2016 through 2019, the examinations credited to the model averaged $2,460 in proposed tax assessments per return, compared with an average of $19,042 for the rest of the exempt organization examinations. TE/GE acknowledged that its scoring methods are limited because they do not utilize modern data practices. It contracted for a study, to be completed in September 2020, of alternative model architectures and scoring methods that incorporate best practices for using criteria and options for scoring returns.

Most of the Changes Credited to the Form 990 Model Are Driven by Examinations of Returns Not Identified by the Model Score

As shown in table 3, the Form 990 model scoring did not account for most closed examinations and examination changes credited to the model during fiscal years 2016 through 2019.
Rather, examinations of "pick-up" returns and substitutes for returns (SFRs) accounted for most closed examinations and produced a higher change rate than examinations of primary returns scored by the model. Examiners find these other returns during examinations of returns identified by the model. The higher change rates for pick-up and SFR returns compared to the primary returns identified by the model support TE/GE's policy to examine all pick-up returns and SFRs that meet examination criteria. However, this raises questions about how well the model identifies noncompliant returns. Given the lower change rate for the returns the model scored, the queries for noncompliance on the Form 990 may not be effective. While the model includes queries on noncompliance related to "pick-up" issues such as unfiled employment tax returns, the necessary data were not available to allow us to analyze how often these queries identified the primary return for potential noncompliance. As discussed later, an analysis of queries could provide insight into the validity of the model.

TE/GE Has Not Fully Implemented and Documented Internal Controls for Assessing and Using Data for Examination Selection

Internal control should be an integral part of an agency's operational processes and structure to help managers achieve their objectives on an ongoing basis. When evaluating implementation, management determines if the control exists and is operational. A deficiency in implementation exists when no such control is present or the control is not implemented correctly, preventing objectives from being met. Documentation is required to show the effective design, implementation, and operation of an internal control system. The level and nature of documentation can vary based on the size of the agency and the complexity of its processes. Management exercises judgment in determining the extent of documentation that is needed. TE/GE has not fully implemented or documented internal controls for analyzing data for examination selection, meaning it cannot be assured that its selection decisions will produce the desired outcomes. The internal controls range from two controls that TE/GE adequately documented and implemented to seven others where TE/GE did not. The seven include five controls presented as sequential steps in using data for making selection decisions, as well as two controls addressing timely documentation of Internal Revenue Manual (IRM) sections and risk management.

TE/GE Has Implemented Two Controls for Building a Positive System in Selecting Returns for Examination

The first internal control TE/GE implemented involved assessing staff competence. To ensure competence in using data to make decisions, TE/GE officials contracted with data specialists for modeling expertise to incorporate statistical and machine learning into examination selection. Bringing in this modeling expertise was an important step because exempt organization examinations staff, rather than statisticians or data analysts, initially developed the examination selection models, according to TE/GE officials. TE/GE also provided documents on training and basic duties for staff when analyzing data.

What are Internal Controls and Why Do They Matter?

One way federal agencies can improve accountability in achieving their missions is to implement an effective internal control system. Effective internal control comprises the plans, methods, policies, and procedures used to fulfill objectives on an ongoing basis. It serves as the first line of defense in safeguarding assets and increases the likelihood that an agency will achieve its objectives while adapting to changing environments, demands, risks, and priorities. Effective internal control provides reasonable, not absolute, assurance that an organization will meet its objectives.

The second internal control involved communicating inside and outside of TE/GE. Internally, beginning in fiscal year 2018, TE/GE staff could provide feedback through an online compliance issue submission portal. Submissions may become compliance strategies or model queries. As for external communication, TE/GE collaborated on data-related issues in an IRS-wide group and with statistical specialists in the RAAS division. For example, RAAS identified potential data sources for compliance issues and drew samples for certain compliance strategies to test rates of noncompliance. In addition, to show how it communicates essential information with staff and outside parties, TE/GE provided examples on disseminating guidance and examination accomplishments, including examination starts and closures.

TE/GE Did Not Fully Implement and Document Controls over Processes and Data Used to Select Returns for Examination

TE/GE did not fully implement and document internal control over the processes and data used to select returns for examination. These processes cover five key steps for using data to decide which returns to select for examination (see figure 4). Effective internal controls would enable TE/GE to show how feedback and lessons learned in Step 5 can help it better determine how to create and use quality information (Step 3) and what decisions to make (Step 4) when pursuing the established objectives (Step 1). However, TE/GE has not defined measurable objectives or undertaken regular evaluations to assess progress toward objectives. Although TE/GE was able to describe its approach for accessing relevant and reliable data, processing those data into quality information, and using the data to make decisions, it was not able to fully document how its control processes worked, as discussed below.

TE/GE Has Not Defined Measurable Objectives for Selecting Returns for Examination

Since its 2017 reorganization, TE/GE has not established measurable objectives to select exempt organization returns for examination (see figure 5). Specifically, TE/GE has not produced formal objectives that are aligned with its mission and the IRS strategic plan, are expressed in quantitative terms, and are related to examination selection and program outcomes. TE/GE documents, including Program Letters and Business Performance Reviews, refer to outcomes that could constitute objectives, such as improving the models and advancing data analytics to drive decisions about identifying and addressing existing and emerging areas of noncompliance, but they do not identify them as such. TE/GE officials acknowledged the need to establish measurable objectives. They said their efforts are evolving and they need to improve analytical abilities to help assess the capacity for meeting objectives. For example, one official said they are working to establish objectives at the onset of a compliance strategy. Without measurable and defined objectives, TE/GE cannot effectively analyze how well it selects returns for examination and lacks a clear vision of what it is trying to achieve. A lack of measurable objectives also hinders implementing other internal controls, such as evaluating performance or assessing risk, as discussed later.
TE/GE Could Not Demonstrate that It Has Controls in Place to Catch Certain Form 990 Errors but Electronic Filing Will Likely Increase Data Reliability

The IRM has procedures for processing Form 990 data, which include controls over acceptance and transmission of the data (see figure 6). TE/GE provided data showing that error rates for electronically filed returns filed in 2019 were between 1 and 4 percent. However, taxpayer or transcription error rates for paper returns filed in 2019 were between 19 and 32 percent of filed returns, depending on the version of the Form 990. TE/GE was not able to show that it regularly reviews and remediates such errors to ensure the reliability of Forms 990 data. However, under the Taxpayer First Act of 2019, electronic filing of all Forms 990 will be required for tax years starting July 2, 2021. This change should remediate the known errors from paper-filed returns and increase data reliability.

Processing Queries for the Model Did Not Always Produce Quality Information

We found several issues with TE/GE's processing of queries in the Form 990, 990-EZ, and 990-PF models that affect the validity or reliability of the scores that the models generate to rank returns for examination selection (see figure 7). As a result, TE/GE cannot ensure that the model scores properly rank the returns for examination selection. Specifically, TE/GE does not consistently assign point values for the queries used to generate the model scores and inform selection decisions. We also found errors in TE/GE's documentation of the queries, which led to redundant queries and inflated model scores. Finally, TE/GE has no control procedures to ensure consistent testing of proposed queries.

Inconsistent Point Values for Queries Raise Concerns about Model Scores

We estimate that for 24 percent of queries (83 queries) from the models, TE/GE staff did not assign point values for queries consistent with its definitions for the four categories (see table 4). Not implementing the defined point values puts the model scores at risk of inconsistent scoring and examination selection. We found three types of queries involved with the inconsistent assignment of point values.

1. Miscategorized queries were not assigned to the category that matches TE/GE's definition. These occurred because TE/GE has not documented specific rules for query categorization. As a result, we found an estimated 7.4 percent of queries (26 queries) where TE/GE staff overrode the category definitions when assigning points without documenting the reasons. Absent the reasons, TE/GE cannot ensure consistent treatment of similar queries. In our sample, these override decisions included assigning:

- Three queries to the Speculative category, which is worth five points, when the definitions supported the Automatic category, which is worth 10 points. TE/GE officials said they did this to offset potentially confusing language in the return lines or instructions.
- One query to the Automatic category rather than the Speculative category supported by the definitions. TE/GE officials said they used the higher point value category to increase the chance of selection so that certain Form 990-PF attachments, which the queries do not cover, would be more likely to be considered for examination.

2. Queries could fit into more than one category based on TE/GE's definitions. We estimate that 16 percent (55) of the queries could fit in more than one category. Of these, 18 in our sample could have been placed in the Missing Schedule/Form category.
In addition, we found one query in our sample that TE/GE labeled as having a duplicate, but one of the pair was assigned to the Automatic category, worth 10 points, and the other to the Inconsistencies category, worth one point. TE/GE officials acknowledged that some queries could fit in more than one category. When we asked why certain queries for missing schedules and forms were not categorized as such, these officials described a hierarchy of missing forms based on being subject to penalties and interest, such as employment tax returns, and their associated categories. They did not document or consistently implement this hierarchy, as queries identifying the same missing form sometimes were in different categories.

3. Sliding scale queries had point values that differ from those stated in TE/GE's model documentation. We found nine queries with sliding scale point values that involved Form 1099 information returns. The sliding scales reduce point values based on the severity of the compliance issue, such as reducing the query point values if the organization filed a low number of information returns. TE/GE did not provide documentation about the rationale and associated definitions for these queries. Without documentation on the different treatment of these queries, TE/GE is not transparent about the rationale for assigning points through a sliding scale to support its model scoring.

TE/GE officials said they have not updated definitions and criteria for using the categories and sliding scales because of a decision to keep the model operating as is and to update documentation as time permits. After our preliminary analyses, TE/GE provided updated definitions for the four categories and descriptions of the sliding scales that were used for queries. However, these definitions and descriptions do not include any decision rules or criteria that document how to apply them. Further, the sliding scale descriptions do not offer definitions for words like "low" when referring to the volume of information returns filed. Definitions that are incomplete and not always followed when assigning point values raise concerns about consistency and transparency in scoring returns for examination selection. TE/GE's assignments affect scores and whether a return is placed on the MSS for examination consideration. Inconsistent or invalid assignment of point values may distort the potential for examination. For example, of the nine miscategorized queries we analyzed in our sample, we determined that if their categorizations were corrected, hits on three of the queries would make a return eligible for the MSS and hits on two others might make a return eligible, depending on the other queries the return hit. Changes to two queries would have made returns no longer eligible for the MSS.
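The kind of decision rules that are missing here can be illustrated in a few lines. In the sketch below, the point values for the Automatic (10), Speculative (5), and Inconsistencies (1) categories are taken from this report; the Missing Schedule/Form value, the sliding-scale cutoff, and all function and field names are assumptions for illustration, since TE/GE has not documented its own rules.

    # Documented category point values. The Missing Schedule/Form value is
    # ASSUMED; this report does not state it.
    CATEGORY_POINTS = {
        "Automatic": 10,
        "Speculative": 5,
        "Missing Schedule/Form": 3,  # hypothetical placeholder
        "Inconsistencies": 1,
    }

    def sliding_scale_points(base_points, info_returns_filed, low_cutoff=10):
        """Reduce a query's points when severity is low, e.g., when the
        organization filed a 'low' number of Form 1099s. The cutoff is a
        placeholder; TE/GE never defined 'low' in its own descriptions."""
        return base_points // 2 if info_returns_filed < low_cutoff else base_points

    def check_assignment(query):
        """Flag a query whose assigned points override its category's value,
        which is the kind of undocumented override described above."""
        expected = CATEGORY_POINTS[query["category"]]
        return query["points"] == expected, expected

Explicit rules of this sort would also make overrides visible: any query failing check_assignment would need a documented reason, giving reviewers the audit trail that GAO found missing.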
Query Documentation Has Errors That Forestall Valid Analysis of Queries

We estimate that about 27 percent (96 queries) of the queries in the models had errors in the documented descriptions. Query descriptions detail the logic and data used from specific forms and line numbers that the queries scan. The errors we found include:

- references to older versions of the forms, as well as omissions of form lines used in the query; and
- query descriptions that did not match programming code.

To address these differences, TE/GE proposed corrections to the query descriptions. A TE/GE official said revisiting the query documentation is part of the contractor's 2020 study and that TE/GE does not have a timeline for correcting the documentation. In addition to errors, the descriptions also use inconsistent language, which prevents easy identification of queries by issue. For example, to identify all queries related to excess benefit transactions, one must manually search different fields for terms such as "excess benefit," "excessive benefit," and "EBT" (excess benefit transaction). Furthermore, TE/GE's database fields only capture one issue per query. Since many queries involve multiple issues, these fields cannot be used to fully inventory the queries. These errors and inconsistencies in the query descriptions occurred because TE/GE has no procedures for regular reviews of queries as forms or laws change. TE/GE Compliance Governance Board (Governance Board) members review query descriptions prior to implementation but do not review details of the queries in the context of the entire model. Further, TE/GE procedures only require review of programming code before queries are sent to the Governance Board. Review of the code once it is integrated into the model program is optional, according to TE/GE procedures. The errors and inconsistent descriptions prevent TE/GE from having a comprehensive and accurate inventory of queries within and across models. Without regular reviews, TE/GE cannot be assured that its programming code is correct and that any analyses of the performance of queries, or of the models as a whole, are valid. When we asked about the lack of regular reviews of queries, TE/GE officials said they plan to implement reviews but did not provide us with a plan or time frames for doing so. Another effect of not having a comprehensive and accurate inventory is that TE/GE cannot analyze query performance and identify queries that look for the same compliance issue to prevent redundancies and to ensure valid and consistent scoring. As a result, we found queries that address the same or similar issues with the same criteria, inflating scores for returns and making selection for examination more likely. Our analysis of the July 2019 Form 990 model run showed 90 pairs of queries, involving 78 unique queries, that hit together at least 90 percent of the time. By having two queries that rely on the same criteria, returns accumulate extra points for the same behavior. For example, all 910 returns that hit an employment tax query also hit a query that shares some of the same criteria and thresholds. As a result, these returns accumulated 10 points rather than five points, making them eligible for the MSS. Aside from our sample, we found queries seeking certain organizations with political campaign activities and political expenditures that would total 15 points in the Form 990 model. Queries identifying these same activities and expenditures would total 30 points in the Form 990-EZ model. TE/GE's contractor recommended in 2018 that TE/GE eliminate "redundant" queries, which is similar to our finding. TE/GE officials said they do not believe the redundant queries are duplicates and they are awaiting the results of the contractor's study in 2020 before making changes. Until TE/GE resolves the extent to which it has redundant queries, it cannot do a valid analysis of whether its queries identify the most noncompliant returns.
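Detecting such redundancy is a straightforward co-occurrence analysis once per-return query hits are retained. The sketch below flags pairs of queries that hit together at least 90 percent of the time, under one reasonable reading of that measure (the share of returns hitting either query that hit both); the data layout and threshold interpretation are assumptions, since GAO's exact computation is not reproduced in this report.

    from itertools import combinations

    def redundant_pairs(hits_by_return, threshold=0.90):
        """hits_by_return: dict mapping return id -> set of query names hit.
        Return pairs of queries that co-occur on at least `threshold` of
        the returns on which either one hits."""
        pairs = []
        all_queries = set().union(*hits_by_return.values())
        for a, b in combinations(sorted(all_queries), 2):
            either = [h for h in hits_by_return.values() if a in h or b in h]
            both = [h for h in either if a in h and b in h]
            if either and len(both) / len(either) >= threshold:
                pairs.append((a, b, len(both) / len(either)))
        return pairs

Pairs flagged this way could then be reviewed to decide whether one query should be retired or the shared criteria consolidated, so that a single behavior does not earn double points.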
TE/GE Lacks Procedures and Criteria for Testing Proposed Model Queries

TE/GE has no procedures requiring the testing of proposed model queries. Even so, based on our sample of the new queries in the fiscal year 2018 Form 990 model, TE/GE was able to provide evidence of tests for an estimated 94 percent of all new queries. However, TE/GE also does not have procedures for how to conduct testing or what data to use. The testing that has been performed consisted of running the query on certain tax years of returns to count the number of returns flagged, according to a TE/GE official. TE/GE does not run the queries on data from closed examinations to see whether the queries would identify known compliance issues that justify an examination. Interactions with existing queries are not tested. When considering new queries, Governance Board members see the number of returns flagged by each query during testing, but have no criteria to determine whether a query flags an appropriate number of returns. A TE/GE official said TE/GE does not believe it needs to document procedures for testing. In the absence of procedures and standards, TE/GE cannot ensure that testing of new queries is done consistently with appropriate data sources and research standards. By only testing the number of returns that a query flags, TE/GE cannot validate that proposed queries can effectively identify the noncompliance that would be worth examining. Using tested, validated, and documented data is a critical step in ensuring that research is proper, reliable, and accomplished in accordance with expectations, according to the IRM. Without testing queries on reliable data, and making adjustments based on criteria, TE/GE risks implementing queries that do not produce reasonable numbers of hits that are worth pursuing through examinations.
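One way to close this gap would be to backtest each proposed query against returns whose examination outcomes are already known, rather than only counting hits. The sketch below is illustrative; the field names and acceptance criteria are assumptions, since TE/GE has no documented testing procedures to reproduce.

    def backtest_query(query_test, closed_exams, min_hits=50, max_hits=5000,
                       min_change_rate=0.85):
        """Run a proposed query over closed examinations with known outcomes.
        query_test: function over one return's fields, True if it hits.
        closed_exams: list of dicts with return fields plus 'changed' (0/1).
        All thresholds are hypothetical acceptance criteria."""
        hits = [e for e in closed_exams if query_test(e)]
        if not hits:
            return {"hits": 0, "accept": False}
        change_rate = sum(e["changed"] for e in hits) / len(hits)
        return {
            "hits": len(hits),
            "change_rate": change_rate,
            "accept": min_hits <= len(hits) <= max_hits
                      and change_rate >= min_change_rate,
        }

A test along these lines would answer both questions GAO raises: whether a proposed query flags a workable number of returns, and whether the returns it flags actually tend to show noncompliance worth examining.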
TE/GE Did Not Fully Document Its Processes for Transforming Data into Quality Information for Other Examination Sources

For examination sources that used data other than the models, we found that TE/GE did not always document its processing of data into quality information. We identified common "start-to-finish" segments to this processing of data, including:

- submitting a proposal and supporting data to find noncompliance;
- reviewing the potential data sources and queries or thresholds to be used as examination selection criteria; and
- recommending the proposed effort for approval through the appropriate executives.

On one hand, TE/GE provided documentation of the required approvals for these segments in processing data for five compliance strategies. These strategies included examining loans by private foundations and collecting information on organizations that exceed investment income limitations. On the other hand, TE/GE did not provide similar start-to-finish documentation on processing quality information from other examination sources that use data outside of the models; examples include research projects under the Data Driven Approaches portfolio and projects that use queries under the Referrals and Other Casework portfolio (see table 1). Over several discussions, TE/GE did not explain why it did not fully document such projects. By not fully documenting how it processes data into quality information, and by not linking such processes to measurable objectives, TE/GE cannot ensure that it is analyzing quality information in selecting examinations.

TE/GE Did Not Consistently Document Use of Quality Information to Make Decisions

For the compliance strategies, TE/GE showed evidence of using the quality information to decide which returns to select for examination, such as for Governance Board decisions. However, TE/GE did not provide documentation on how it made selection decisions using data for other projects that use queries (see figure 8). In addition, TE/GE did not use quality information to decide how frequently to run the model. TE/GE decided to run the Form 990 models twice per year without analyzing the effects. Moreover, we found that the time between runs is inconsistent. Since the Form 990 model's first run, the time between runs has ranged from 84 days to 251 days. Since returns are ranked on the MSS, eliminations result in the classification staff selecting lower-scoring returns. The average score for examined returns was 27.1 for the list that was used for 84 days, compared to 23.2 for the list used for 251 days. To the extent that TE/GE ensures that its model scores are as reliable and valid as possible, analyzing data could help TE/GE identify the frequency of model runs that maximizes the use of model scores to guide decisions on examination selection. For example, analyzing Form 990 filing patterns could help identify the optimal timing of model runs, allowing for adequate time remaining under the statute of limitations.
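The effect of run frequency on selection quality lends itself to exactly this kind of analysis. The sketch below, using a hypothetical log of which MSS list each examined return came from and when it was selected, computes how the average selected score decays with list age; weighing that decay against the cost of more frequent runs is one way to pick an interval.

    import pandas as pd

    # Hypothetical selection log: one row per return selected from an MSS.
    selections = pd.DataFrame({
        "mss_run_date":  pd.to_datetime(["2019-01-15"] * 3 + ["2019-07-15"] * 3),
        "selected_date": pd.to_datetime(["2019-02-01", "2019-04-01", "2019-06-20",
                                         "2019-08-01", "2019-10-15", "2019-12-20"]),
        "model_score":   [31, 26, 21, 30, 24, 20],
    })

    # Age of the list when each return was selected, bucketed by quarter.
    selections["list_age_days"] = (selections["selected_date"]
                                   - selections["mss_run_date"]).dt.days
    decay = selections.groupby(selections["list_age_days"] // 90)["model_score"].mean()
    # A steep decline in 'decay' suggests runs are spaced too far apart.

Run against real selection records, a curve like this, together with the filing-pattern and statute-of-limitations considerations noted above, would give TE/GE an empirical basis for choosing how often to rerun the models.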
Lack of Regular Evaluations and Inconsistent Data Prevent TE/GE from Fully Evaluating Its Selection Methods

TE/GE does not regularly evaluate its models and other selection processes that use data. In particular, model scores for all returns are not retained or are inconsistent from year to year, which limits the ability to conduct evaluations. Furthermore, TE/GE does not evaluate reasons why some selected returns are not examined, which could help improve selection methods (figure 9).

TE/GE Has No System for Regularly Evaluating Examination Selection Decisions

TE/GE has not regularly evaluated its examination selection decisions that rely on data to improve its selection methods. While TE/GE commissioned the contractor evaluations of its Form 990 model, it has no documented process for continued evaluations of the model or any evaluations of other sources, such as research projects, that rely on data to select returns for examination. For its compliance strategies, not enough examinations have closed under the strategies to warrant evaluations yet, according to a TE/GE official. Data limitations have challenged evaluation efforts, according to a TE/GE official. To address this, TE/GE started capturing more detailed data on examination outcomes; however, no evaluations of outcomes have resulted. The officials noted that they have been spending more time reporting and monitoring compared to analyzing and evaluating, which they said needs to occur more often. Without evaluation, TE/GE cannot ensure that its use of data to select returns is working as intended. In addition to not evaluating selection decisions and their outcomes, TE/GE has also not addressed the Form 990 model deficiencies the contractor previously identified. In its 2018 reports, the contractor made 17 recommendations (see appendix V for the status of each recommendation). A TE/GE official said it had not acted on many recommendations because all examination selection strategies are being evaluated with the transition to the Compliance Planning and Classification (CP&C) office. TE/GE initiated another study in 2019 with the same contractor to address its 2018 recommendations, among other tasks. As of March 2020, TE/GE implemented one recommendation and part of another, deferred action on nine recommendations until after the contractor finishes the new study, deferred action on three due to other reasons, and did not clearly provide a status for two. In addition, TE/GE will likely not implement the other recommendation. According to contract documentation, the study will explore architectures and alternative designs to the model, propose up to three compliance actions other than examinations, and recommend measures to monitor the actions' effectiveness. TE/GE expects a final report by September 2020. To the extent that TE/GE has not implemented the contractor's recommendations from 2018, the related deficiencies identified in the Form 990 model will have persisted for more than 2 years by the time the contractor issues its 2020 report. Unless TE/GE documents its consideration and action on the recommendations, the value of the contractor's work is diminished and possible improvements may be overlooked.

TE/GE Has Not Retained Complete Data to Allow for Full Evaluation of Its Models

Until recently, TE/GE did not retain model scores for each return and query performance data that would be useful for evaluation. The January 2018 contractor report recommended that TE/GE save model data. For its July 2018 report, the contractor had to recreate historical scoring data for its evaluation. TE/GE officials said they increased storage space and saved the fiscal year 2018 data. When we asked for these data, TE/GE officials said that each time they run a model, they overwrite the old data. The officials said they did not have space on their server to save all of the data. Instead, TE/GE had been saving the MSSs for each run. However, the MSSs have only limited value for evaluating the model and queries. Specifically, the MSS for each model run contains score information for only about 20,000 returns (out of about 300,000 scored) that have a certain minimum score and hit queries in certain categories. Further, the MSS does not contain data on which model queries were flagged. In September 2019, TE/GE officials said the Research, Applied Analytics and Statistics division provided temporary server storage space to save model data through September 2020 while the contractor assesses TE/GE's models. Starting with the July 2019 model run, TE/GE is saving score and query performance data for all filed returns. In January 2020, TE/GE officials told us they developed a way to save data on query hits for all returns run through the model. However, TE/GE has not provided documentation to show exactly what data will be saved over the long term for all filed returns run through the model. Without complete historical data on model scores and query hits, TE/GE cannot assess the full performance of its models. Such data would facilitate an analysis of the queries, and whether they identified returns with changes or related pick-up returns.
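Retaining the needed history does not require exotic infrastructure; appending each run's full scoring output to a cumulative store, keyed by run date, is enough to support later evaluation. A minimal sketch follows, with a hypothetical file location and column layout.

    import pandas as pd
    from pathlib import Path

    def archive_model_run(run_date, scored, path="model_run_archive.csv"):
        """Append one run's scores and per-query hits for ALL scored returns,
        not just the ~20,000 on the MSS, so later evaluations can link every
        query hit to eventual examination outcomes.
        scored: DataFrame with one row per return (score, query-hit columns)."""
        scored.assign(run_date=run_date).to_csv(
            path, mode="a", header=not Path(path).exists(), index=False)

With even this much retained, the score-versus-outcome and query co-occurrence analyses sketched earlier become possible across years, rather than only for the single run held in memory.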
Historical Data on Examination Outcomes Lack Consistency, Which Complicates Evaluation

TE/GE does not analyze consistent multi-year data on examination outcomes, which would facilitate evaluation of its use of data in selecting returns for examination. TE/GE officials said they use historical data, such as change rates, to determine the success of an examination source. TE/GE provided historical data on examination starts, closures, and pick-up returns covering 2 years, but did not provide data beyond that, and change rates were not always included. Further, TE/GE has used different methods to organize and report examination outcomes over the years. These differences in reporting outcomes affect TE/GE's data in the following ways:

- Starting with fiscal year 2018, data on exempt organizations examinations include federal, state, and local employment tax examinations. Prior to 2018, TE/GE reported these employment tax examination data separately.
- After TE/GE's reorganization in 2017, it grouped examinations into portfolios and changed the portfolio definitions during 2018.

As of March 2020, TE/GE has not produced a consistent method of summarizing historical data. TE/GE officials acknowledged data limitations and said they are working to implement recommendations from a 2019 study to improve capturing examination data. TE/GE officials said the staff member analyzing data has been doing so for many years, allowing them to reconcile the data. However, this poses a risk that other IRS or oversight entities cannot reconcile the data. According to internal control standards, agencies should establish effective methods for retaining organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as contingency plans to respond to sudden personnel changes. TE/GE's inconsistent data limit its ability to conduct evaluations. These inconsistent data also prevent TE/GE from establishing baselines or targets for examination outcomes, such as change rates, to help measure the success of its selection methods.

TE/GE Did Not Evaluate Reasons for Not Examining Some Returns Selected for Examination

In fiscal year 2019, TE/GE did not examine about 20 percent of the exempt organization returns that had been selected for examination. Although this rate of non-examined returns has improved in recent years, TE/GE has not analyzed data to explore why the rate has improved and how to reduce it further. Our analysis showed that almost 30 percent of these returns were not examined because they were too close to the statute of limitations date. TE/GE officials did not have a reason why the returns were sent to the field for examination if the statute date was so close. TE/GE officials said they do not regularly analyze reasons for non-examined returns. They said they have analyzed only the number of non-examined returns by manager and area. In addition, TE/GE officials said they implemented new guidance in fiscal year 2019 for staff who make decisions to not examine returns, which is intended to improve the information they have on these decisions. As of fiscal year 2019, TE/GE began tracking certain non-examined returns by project code but has not committed to analyzing the data. Non-examined returns are not an efficient use of resources, as the time spent reviewing and rejecting these returns, even if minimal, reduces the time staff have for conducting examinations. Routinely analyzing reasons for non-examined returns, as well as related data, could help TE/GE identify actions to reduce the number of returns that are sent to the field but are then declined for examination by a manager or examiner.
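The routine analysis recommended here reduces to a simple tabulation once a reason code is captured for each non-examined return. A brief sketch, with hypothetical reason codes and field names:

    from collections import Counter

    def summarize_unexamined(returns):
        """Tabulate why selected returns were returned unexamined.
        Each record carries 'examined' (bool) and a 'reason' code;
        the codes are hypothetical."""
        reasons = Counter(r["reason"] for r in returns if not r["examined"])
        total = sum(reasons.values())
        if not total:
            return {}
        return {reason: (count, round(100 * count / total, 1))
                for reason, count in reasons.most_common()}

    # e.g. {'statute_too_close': (87, 29.0), 'workload': (60, 20.0), ...}

Feeding a result like this back into classification, for instance by tightening the statute-of-limitations buffer the classifier applies, is the kind of corrective action such an analysis could support.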
Updating Examination Selection Procedures and Identifying Risks Could Help TE/GE Use Data in Decision Making

TE/GE Has Not Annually Reviewed and Updated Procedures in Certain Internal Revenue Manual Sections or Issued Related Interim Guidance on Examination Selection

TE/GE has not annually updated procedures on examination selection and databases in certain IRM sections since the May 2017 reorganization. The Internal Revenue Manual (IRM) states that procedures in IRM sections must be annually reviewed and updated as needed. TE/GE released updated IRM sections for two of the three groups in CP&C. It released a section on Issue Identification and Research in September 2018, and one on Classification and Case Assignment procedures in September 2019. However, these sections do not cover the steps the model classifier takes when reviewing returns from the MSS. As of December 2019, no IRM section had been released on the Planning and Monitoring group. As such, TE/GE staff do not always have official information on roles and responsibilities for new entities and processes created since May 2017. For certain updated or new IRM sections, TE/GE did not release interim guidance while those sections awaited approval. IRS requires issuance of interim guidance to address deviations from the IRM, even if temporary. Instead of developing interim guidance, TE/GE officials stated that, in the wake of the reorganization, they decided to use desk guides, such as for the IRM section on classification and case assignment processes. However, TE/GE did not update its desk guides on processes until more than 2 years after the reorganization. Furthermore, the desk guides do not cover the specific duties of the model classifier, or the steps for classification of returns identified for compliance strategies. IRM guidance states that management must develop and maintain documentation on data systems; collection and analysis; and responsibilities for data collection, input, and analysis. Timely documentation of new procedures and responsibilities improves the accuracy and reliability of IRM content. According to the IRM, when the IRM and related guidance are not current, TE/GE increases the risk that staff follow incorrect procedures, use guidance that is not transparent to the public, administer tax laws inconsistently, and misinform taxpayers.

TE/GE Has Not Identified Risks from Using Data for Examination Selection

Good federal government practice requires risk management, without which TE/GE could undercut its use of data to enhance decisions on examination selection. Although the use of data in examination selection has the potential to improve efficiencies in classifying and examining returns to identify noncompliance, any new endeavor carries risks. TE/GE did not identify any TE/GE-specific risks that could undercut its success in using data to select exempt organization returns for examination. As of December 2019, the TE/GE risk register identified 12 risks, ranging from aging technology and infrastructure to employee engagement and morale. One risk, data access and analytics, involved using data in general decision making at the IRS level rather than TE/GE decisions about examination selection or its related models. TE/GE officials said they are analyzing and responding to this risk under the IRS-wide risk management process. TE/GE did not document why it did not identify any TE/GE-specific risks in using data for examination selection. Our report discusses a number of deficiencies that could be potential risks to TE/GE using data in selecting returns for examination. For example, TE/GE lacks program objectives that would be necessary to identify and assess risks. We also found weaknesses in how TE/GE processes and analyzes data to inform examination selection and how it evaluates selection decisions.
Further, the IRM states that TE/GE's Compliance Governance Board (Governance Board) should consider risks in its decisions, and we saw that risks were considered in documents proposing examination selection criteria to the Governance Board. We did not find evidence that TE/GE's risk management process recorded these risks for analysis and any response, if needed. After we shared our concerns about the lack of identified risks, TE/GE officials noted that TE/GE participates in mitigation steps as identified by the IRS Risk Office. TE/GE officials also mentioned CP&C representation in an IRS pilot program designed to explore ways to better select employment tax cases. While such actions could be a component of a risk management strategy, that strategy is incomplete, and it is unclear how this initiative would help TE/GE identify, analyze, and mitigate risk. Not identifying and managing the risks identified in this report leaves TE/GE open to errors and to examination selection decisions that are potentially not transparent or not fair. As such, without objectives and a consistent and documented process for identifying and managing risks, TE/GE cannot effectively address risks that may hamper its efforts to use data to enhance its compliance work.

Conclusions

Increasingly constrained resources underscore the importance of TE/GE's efforts to efficiently identify and examine exempt organization returns that have the highest noncompliance potential. TE/GE has developed ways to use data to aid in examination selection. However, opportunities exist to strengthen internal controls to help ensure that data used are reliable, decision rules are clear and documented, and objectives are identified and being achieved. TE/GE should take several steps to improve the reliability and validity of the models. These steps include improving documentation of decision rules and criteria for scoring; regularly reviewing model documentation and programming; testing new queries and their interaction with existing queries; retaining model and query data; and periodically evaluating the performance of selection methods. In the absence of regular evaluation of its examination selection decisions, TE/GE misses opportunities for improving its selection processes. Deficiencies that TE/GE's contractor already identified provide an opening for improving its models. Without consistent historical data, TE/GE will be limited in assessing progress and making improvements. A review of the reasons why certain returns selected for examination are not examined is an example of an evaluation that could help inform process improvements. Ensuring that all procedures are current and accurate would reduce the potential for employees following incorrect procedures and administering tax laws inconsistently. TE/GE's lack of identified risks from using data in examination selection precludes TE/GE from analyzing and responding to those risks. By taking actions to further strengthen these internal controls, TE/GE could enhance its efforts to identify and examine the most noncompliant exempt organizations, enhance IRS's oversight of tax-exempt organizations, and help maintain the integrity of the charitable sector and the larger exempt community.

Recommendations for Executive Action

We are making the following 13 recommendations to IRS:

The Commissioner of Internal Revenue should document measurable objectives for using data in selecting exempt organization returns for examination. (Recommendation 1)
(Recommendation 1)
The Commissioner of Internal Revenue should document and consistently use clear criteria and decision rules on assigning point values to queries, using categories and sliding scales. (Recommendation 2)
The Commissioner of Internal Revenue should require a regular review of query descriptions and programming to ensure their accuracy and minimize queries that flag the same or similar compliance issues. (Recommendation 3)
The Commissioner of Internal Revenue should develop procedures and criteria to test new queries prior to implementation in the models. (Recommendation 4)
The Commissioner of Internal Revenue should more fully document how TE/GE processes and uses data to make examination selection decisions for sources outside of the model, such as research projects and other projects that use queries. (Recommendation 5)
The Commissioner of Internal Revenue should conduct an analysis to identify the optimal interval between model runs. (Recommendation 6)
The Commissioner of Internal Revenue should establish a process for regularly evaluating selection decisions and related outcomes for the models and other processes that use data to select returns for examination. (Recommendation 7)
The Commissioner of Internal Revenue should document consideration of or action on recommendations from its 2018 and 2020 contractor assessments. (Recommendation 8)
The Commissioner of Internal Revenue should document how score and query data for all returns in the models will continue to be saved over the long term. (Recommendation 9)
The Commissioner of Internal Revenue should ensure that historical data on examination outcomes are consistently defined and used when analyzing examination outcomes. (Recommendation 10)
The Commissioner of Internal Revenue should routinely analyze the reasons for not examining selected returns and identify any necessary actions to address those reasons. (Recommendation 11)
The Commissioner of Internal Revenue should annually review and update procedures as needed in relevant IRM sections on examination selection and issue interim guidance until the affected IRM sections are updated. (Recommendation 12)
The Commissioner of Internal Revenue should document why TE/GE has not identified any risks in its risk register for using data to select exempt organization returns for examination. If risks are subsequently identified, TE/GE should document how it plans to analyze and address them. (Recommendation 13)
Agency Comments and Our Evaluation
We provided a draft of this report to IRS for review and comment. IRS provided written comments, which are reproduced in appendix VI and summarized below. Of our 13 recommendations, IRS agreed with 12 and disagreed with one. IRS also provided technical comments, which we incorporated as appropriate. IRS disagreed with our recommendation on ensuring that historical data on examination outcomes are consistently defined (Recommendation 10), pointing out that its raw data are consistently defined in its information systems. Our concern, however, is with how the outcome data are reported and analyzed, which inhibits understanding of outcome trends over time. In response to IRS's comments, we added language to the final version of the recommendation to more clearly focus on the consistency of the outcome data used and analyzed over the years.
In addition, although IRS agreed with our recommendation to more fully document how TE/GE processes and uses data to make examination selection decisions outside of the model (Recommendation 5), IRS said that it would provide documentation on a project (other than compliance strategies) that is approved by the Governance Board. While we look forward to such documentation, we are primarily interested in IRS documenting a system for how it processes and uses data to select returns for examination for projects outside of the model, regardless of Governance Board approval. As discussed in the report, IRS has such a system for projects in its compliance strategies portfolio, which could provide a framework to follow. Similarly, IRS agreed to analyze return due dates of the filing populations commonly associated with the examinations (Recommendation 6). We will be interested to see how that analysis helps IRS determine the optimal interval between model runs, which is the focus of our recommendation. We are sending copies to the appropriate congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
Appendix I: Objectives, Scope, and Methodology
This report assesses (1) the use of data to select tax-exempt organization returns for examination; and (2) the process the Tax Exempt and Government Entities (TE/GE) division has established to select returns for examination. To assess the use of data to select tax-exempt organization returns for examination, we reviewed data from the Internal Revenue Service's (IRS) Returns Inventory and Classification System (RICS) for fiscal years 2016 to 2019. Table 5 defines the variables and measures we analyzed. We analyzed aggregated data at the project code level, and we grouped project codes by examination source (for example, examinations from referrals occurred under several project codes). Based on our testing of the data, review of documentation, and interviews, we determined that the data were reliable for the purposes of assessing TE/GE's selection processes. We analyzed outcomes from the Form 990, Return of Organization Exempt from Income Tax, model. We used RICS data and Model Score Sheets (MSS) for examinations closed from October 1, 2015, through September 30, 2019. Each model run generates an MSS, which is a ranked list of Form 990s that hit certain types of queries and have a minimum score. We matched Form 990 scores from the MSS with selection information and examination outcomes in RICS for examinations closed under all project codes, though the data presented in objective one are specific to examinations started under the Form 990 project code. We used source codes—which indicate whether an examination was a pick-up, substitute for return, or primary return—to analyze what types of examinations produced the highest change rates under the Form 990 model project code. To inform this work, we reviewed recent TE/GE contractor assessments of exempt organization examination selection and the Form 990 model.
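To make the matching analysis concrete, the sketch below shows one way such a match could be implemented. It is a minimal illustration only: the column names (ein, tax_period, score, source_code, changed) and the sample rows are hypothetical stand-ins, since the actual RICS and MSS extracts and field names are not described in this report.

```python
import pandas as pd

# Hypothetical Model Score Sheet extract: one row per scored Form 990.
mss = pd.DataFrame({
    "ein": ["11-001", "11-002", "11-003"],
    "tax_period": ["201612", "201612", "201712"],
    "score": [87, 64, 92],
})

# Hypothetical RICS extract: one row per closed examination.
rics = pd.DataFrame({
    "ein": ["11-001", "11-003", "11-004"],
    "tax_period": ["201612", "201712", "201712"],
    "source_code": ["primary", "pick-up", "primary"],
    "changed": [True, False, True],  # examination resulted in a change to the return
})

# Match scored returns to examination outcomes on identifier and tax period.
matched = mss.merge(rics, on=["ein", "tax_period"], how="inner")

# Change rate by examination source code.
print(matched.groupby("source_code")["changed"].mean())
```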
To assess the process that TE/GE has established to select returns for examination, we reviewed internal control steps in Standards for Internal Control in the Federal Government (Green Book). Given TE/GE's emphasis on using data in examination selection, we identified five internal control steps related to analyzing data to select returns for examination to address our objectives. We selected four other internal controls because they constitute practices common to all five steps in the selection process. These are presented in table 6.
Define objectives in measurable terms so performance in achieving objectives can be assessed. (Green Book (GB) 6.04)
Obtain relevant data from reliable internal and external sources in a timely manner based on identified information requirements. (GB 13.04)
Process the obtained data into quality information that supports the internal control system (i.e., using data in decision making); use quality information to achieve the entity's objectives; and document policies on the responsibilities for data collection, input, and analysis. (GB 13.05, 13.01, and 12.02)
Use the quality information to make informed decisions in achieving key objectives. (GB 13.05)
Evaluate performance (outcomes) for key objectives and take actions to remediate deficiencies. (GB 13.05, 16.03, and 17.06)
Develop, maintain, and update in a timely fashion documentation on the responsibilities for data collection, input, and analysis for using data in decision making. (GB 12.02 and 12.05 and IRM)
Define risk tolerances in specific and measurable terms, consider internal and external factors to identify risks, analyze risks to estimate significance, and design specific actions for response. (GB 6.09, 7.04, 7.05, and 7.09)
Ensure that personnel possess competence to meet responsibilities as well as understand the importance of effective data analysis in decision making. (GB 4.04)
Communicate necessary information to enable personnel to perform key roles for analyzing data in decision making and with external parties. (GB 14.03 and 15.20)
To identify criteria specific to IRS, we reviewed the Internal Revenue Manual (IRM), which provides standards and guidance similar to the criteria we identified. We shared the Green Book and IRM criteria with TE/GE, as well as our expectations of the documentation that would show adherence to these criteria. Our assessment focused on examination sources developed after the 2017 reorganization and sources that rely on data for selection (such as models and projects that use queries). Examination sources that did not rely on data, such as claims, were not assessed. We reviewed the referrals classification process to consider how data might be used to enhance it. We analyzed TE/GE documents such as Program Letters, Business Performance Reviews, desk guides, memorandums, work plans, performance data, contractor reports, and training documents. In addition, we assessed documents—such as meeting minutes and research results—showing the development and approval of data queries and projects used in examination selection. We reviewed the MSSs for the Form 990 model, as well as procedures for the Form 990; Form 990-EZ, Short Form Return of Organization Exempt from Income Tax; and Form 990-PF, Return of Private Foundation, models. We selected a generalizable stratified random sample of 114 of the 354 unique queries in the three models (see table 7).
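To illustrate the precision arithmetic behind a design like this, the sketch below computes the sample size needed for a 95 percent confidence interval with a margin of error of about +/-10 percentage points, using the conservative assumption p = 0.5 and a finite population correction. It is a back-of-the-envelope illustration under those assumptions, not GAO's actual design calculations, and the stratum sizes shown are hypothetical.

```python
import math

def sample_size(population, margin=0.10, z=1.96, p=0.5):
    """Required sample size for a proportion, with a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size, about 96
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 354 unique queries across the three models.
needed = sample_size(354)
print(needed)  # about 76

# Proportionate allocation across four hypothetical strata summing to 354.
strata = {"stratum 1": 120, "stratum 2": 90, "stratum 3": 84, "stratum 4": 60}
total = sum(strata.values())
print({name: round(needed * size / total) for name, size in strata.items()})
```

Taking one stratum with certainty and oversampling two others, as the design described below did, pushes the total above this baseline, which is consistent with the 114 queries actually sampled.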
Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., the margin of error is +/-10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Our sample is designed to control the margin of error of attribute estimates within the overall scope query sample as well as the combined Form 990 query sample (a combination of strata 2 and 4 plus certainty selections). The sample was designed as follows. There is one certainty stratum for Form 990-PF queries where we selected a 100 percent sample (i.e., a census), and this stratum does not have a margin of error. We selected these queries with certainty because of the smaller population size in this stratum. For the remaining strata, we selected the necessary sample size to achieve an overall 95 percent confidence interval for attribute (percentage) estimates with a margin of error of about +/-10 percentage points under proportionate allocation. In addition, the sample size was increased in strata 2 and 4 (combining Form 990 model queries) to achieve the necessary sample size for a 95 percent confidence interval with a margin of error of about +/-10 percentage points within this group. For the sampled queries, we compared their categories and descriptions, as provided in the model documentation, with TE/GE's definitions of the categories to assess whether each query was categorized appropriately. We also compared the query descriptions with the forms to assess whether the referenced lines were relevant to the query. Additionally, we reviewed the model programming code to check for errors and consistency with the query descriptions. For query categorizations that did not match TE/GE's definitions or queries that appeared to have errors in the descriptions or programming, we asked TE/GE to review and explain its decisions. To identify potentially redundant queries, we analyzed output from the July 2019 Form 990 model run, the only one available at the time of our analysis. Within our sample, we reviewed 36 of the 104 newly added queries in the fiscal year 2018 model. Specifically, we reviewed approval documentation and meeting minutes to test whether two levels of management and the Compliance Governance Board approved new queries, consistent with TE/GE procedures. We also reviewed evidence that TE/GE tested each query prior to its approval for inclusion in the models. We held two telephone focus groups with the nine classifiers who review exempt organization referrals. We asked questions about the data and resources they use to classify referrals, how they convey their results, and how they are provided feedback. We interviewed officials from the Compliance Planning and Classification office and IRS's Research, Applied Analytics and Statistics division who worked on several compliance research initiatives. We met regularly with TE/GE to share ongoing assessments. We conducted this performance audit from November 2018 to June 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Form 990, Return of Organization Exempt from Income Tax
The figure below shows the text of the 2019 version of Form 990, Return of Organization Exempt from Income Tax. A list of schedules for the Form 990 is provided in table 8 following the form. The remaining pages of the Form 990 are available at IRS's website, accessed March 23, 2020: https://www.irs.gov/pub/irs-pdf/f990.pdf.
Appendix III: Changes to Exempt Organization Form 990, Tax Years 2009-2019
Most exempt organizations are required to file an annual form to report their activities, structure, revenue and expenses, and other items. The organization's classification under the Internal Revenue Code, along with its gross receipts and total assets, determines which form must be filed. Most organizations file one of the following: Form 990, Return of Organization Exempt from Income Tax; Form 990-EZ, Short Form Return of Organization Exempt from Income Tax; or Form 990-PF, Return of Private Foundation or Section 4947(a)(1) Trust Treated as Private Foundation.
Form 990 Has Undergone Changes Since 2008 Redesign
The Internal Revenue Service (IRS) last redesigned the Form 990 series for tax year 2008. The redesign added 14 schedules to the existing two and reflected changes in the tax-exempt sector and tax law. Some changes from the redesign were phased in and implemented for tax year 2008 and 2009 filings. We summarized changes as found in the "What's New" section of the form instructions for each of the three Form 990 types and for each year. We grouped the changes into two categories as defined by:
New or revised question(s): The addition of new lines, check boxes, narratives, or schedules. This includes changes to accommodate new laws or reporting requirements, such as new reporting thresholds or standards.
Instructions and format: New descriptions or details in the instructions, such as specifying examples or how to provide certain information to IRS. This also includes changes that affect the order of lines or schedules, but not the content.
Since the redesign, IRS made 56 changes to the Form 990 or its instructions for tax years 2009 through 2019 (see table 9 below). These changes include three to the 2018 form implementing new excise taxes on net investment income of certain colleges and universities and on certain tax-exempt organization executive compensation. Aside from new electronic filing requirements for tax years beginning July 2, 2019, the 2019 form did not have any changes. In addition to the 56 changes, IRS made 95 clarifications to existing lines or instructions, or revisions to definitions, from tax years 2009 through 2018. These clarifications provide more specific definitions or other details. Further, several of the schedules had additions. For example, the Patient Protection and Affordable Care Act led to additional reporting on Schedule H, Hospitals, to fulfill requirements that hospitals report on each of their facilities and conduct a Community Health Needs Assessment every 3 years. Most of the Form 990-EZ's 27 changes occurred in tax years 2009 through 2012; 12 of these changes were for 2011, and several of them focused on compensation reporting. IRS also made 27 clarifications for tax years 2009 through 2013. Public Law 115-97 did not affect Form 990-EZ. There were no changes to the 2019 form. See table 10.
For the Form 990-PF, IRS made the fewest changes compared to Forms 990 and 990-EZ, with only 11 changes and four clarifications for tax years 2009 through 2019. The Form 990-PF had three changes prompted in 2018 by Public Law 115-97. Electronic filing requirements apply to Form 990-PF for tax years starting July 2, 2019, but there were no other changes for the 2019 form. See table 11.
Appendix IV: Exempt Organizations Examination Selection Process
Appendix IV describes the general examination selection process for exempt organization returns and the specific classification steps that apply to certain returns.
General Selection Process for Exempt Organizations Examinations
The annual work plan is the foundation for identifying and assigning returns for examination. The Compliance Planning and Classification (CP&C) office follows various steps to identify returns to fulfill the work plan, which culminate in the assignment of returns for potential examination to field work groups. The intended process is shown in figure 11 and discussed below.
Annual work plan. CP&C's Planning and Monitoring group develops the annual work plan. The work plan provides estimates of examination starts and closures. It also has estimates for the number of hours to be spent per return examination and the number of days to complete an examination. Planning and Monitoring develops estimates at the project code level, which corresponds to a specific examination source or project, such as the Form 990 model. The Tax Exempt and Government Entities' (TE/GE) Compliance Governance Board approves the work plan. TE/GE provides a summary of the work plan in its annual Program Letter.
Stocking report. The Planning and Monitoring group uses the work plan to issue "stocking" reports to guide classifiers on the types of returns to identify for potential examination. Planning and Monitoring considers available examiners and progress in meeting work plan numbers. The report lists the number of returns needed by grade, project code, and classification source.
Classification. Classifiers review stocking plans to identify returns for potential examination. Classifiers are to eliminate returns from consideration if the (1) return is approaching its statute of limitations date, (2) organization has been examined in the last 3 years, or (3) organization is under a compliance check.
Establishing the return and initial case building. If classifiers identify examination potential, they establish returns in the Audit Information Management System and Reporting Compliance Case Management System (RCCMS). The returns are sent for initial case building—developing paperwork to initiate the examination—according to a TE/GE official.
Virtual shelf. Established returns and the initial case material are sent to the virtual shelf, which is an electronic inventory of returns that may be assigned for examination. Certain referrals, claims, compliance strategies, and other returns are prioritized, according to a TE/GE official. Returns remain on the shelf until assigned for examination or otherwise closed due to the statute of limitations, according to a TE/GE official.
Examination assignment. Functional Assignment Coordinators pull returns from the virtual shelf to fulfill field group work requests. Returns on the virtual shelf that match a work order undergo additional case building before delivery to field examination groups.
Monitoring.
Planning and Monitoring staff regularly review reports that compare work plan goals with current work, and run algorithms to forecast upcoming work. These reviews are intended to ensure that sufficient work is available for assignment, excess work is not created, and returns approaching the statute of limitations are identified. The monitoring informs new stocking reports.
Specific Classification Steps for Models and Certain Other Examination Sources
Classification steps vary depending on how a return was identified for potential examination. For returns identified with queries or models, classifiers check a limited set of criteria once a return is identified. For returns identified through other sources, such as referrals, the classifier also reviews facts and circumstances about potential noncompliance in returns. We focus here on examination sources that rely on data—such as models or queries—and referrals. Referrals are complaints of exempt organization noncompliance made by third parties, including the public and other parts of the Internal Revenue Service. We describe referral classification because it is one of the top sources of exempt organization examinations.
Analytical Models
The models are run to identify returns with potential noncompliance and list them on a Model Score Sheet (MSS). The MSS is a ranked list of returns by scores from the model. According to a TE/GE official, the classifier:
works down the list, starting with the highest scores, to fill stocking report needs;
checks whether the return was also identified for a compliance strategy; and
eliminates returns based on the statute of limitations and recent examination activity.
A simplified sketch of these steps follows the referrals discussion at the end of this appendix.
Compliance Strategies
For some projects in the Compliance Strategies portfolio, a query is run or returns are sampled to identify a population meeting indicators of potential noncompliance. Then, the classifier uses the stocking report to select returns with certain geographic or case grade criteria and eliminates returns based on the statute of limitations, recent examination status, and resolving non-filing issues, according to a TE/GE official.
Referrals
TE/GE classifiers perform triage to review and eliminate referrals that are not relevant to tax administration or do not have substantiated information. The triage classifier sorts referrals and reviews the following: organization status (for example, already revoked or terminated); examination history of the organization; and evidence of substantial inurement or private benefit, non-exempt activities, or material employment tax or unrelated business income that would result in a significant tax assessment. Referrals that pass triage are either sent to classification or, if they deal with political issues, are sent to a committee of three TE/GE managers, who vote on a selection decision. For all referrals, the classifier researches the referral. Research sources include websites, external databases, and IRS taxpayer account databases. The classifier may look at the organization's website, information about officers, or prior examination history. Referrals with examination potential are either assigned immediately or placed on the virtual shelf. Referrals that must be immediately assigned include those with strong indicators of fraud or illegal or illicit activities (including terrorism), as well as referrals from whistleblowers or certain other IRS divisions. Other referrals are labeled as high, medium, or low priority, based on potential for revocation or significant tax assessments.
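As a recap of the analytical-model steps above, the sketch below expresses the ranked-list filtering in code. It is a minimal illustration only: the function and field names are hypothetical, the one-year statute-of-limitations threshold is assumed, and treating the compliance strategy flag as a skip is an assumption about handling that this report does not specify.

```python
from datetime import date, timedelta

def classify_from_mss(mss_rows, stocking_need, today=None):
    """Work down a Model Score Sheet, highest score first, applying the
    elimination checks described above. Field names are hypothetical."""
    today = today or date.today()
    selected = []
    for row in sorted(mss_rows, key=lambda r: r["score"], reverse=True):
        if len(selected) >= stocking_need:
            break  # stocking report need is filled
        if row["statute_date"] - today < timedelta(days=365):
            continue  # approaching statute of limitations (threshold assumed)
        if row["examined_last_3_years"]:
            continue  # organization examined in the last 3 years
        if row["compliance_strategy_flag"]:
            continue  # assumed to be handled under the compliance strategy project
        selected.append(row)
    return selected
```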
Appendix V: Status of Contractor Recommendations on Exempt Organizations Examination Selection
The Tax Exempt and Government Entities division (TE/GE) hired a contractor to review the effectiveness of its Form 990 examination selection model. The contractor prepared two reports. The first, delivered in January 2018, makes recommendations on the model process, the computing environment, and performance measures. The second, delivered in July 2018, makes recommendations to more effectively and efficiently identify returns for examination, such as through the model. Across the two reports, the contractor made 17 recommendations. Table 12 lists the recommendations and the status of each. In September 2019, TE/GE initiated another study, anticipated to be completed in September 2020. The study focuses on developing alternatives to enhance the models. It will explore architectures and alternative designs for the models, propose compliance actions that are alternatives to examinations, and recommend measures to monitor their effectiveness.
Appendix VI: Comments from Internal Revenue Service
Appendix VII: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Tom Short (Assistant Director), Lindsay Swenson (Analyst-in-Charge), Ann Czapiewski, George Guttman, Amalia Konstas, Krista Loose, Alan Rozzi, Cynthia Saunders, Andrew J. Stephens, and Sonya Vartivarian made key contributions to this report.
Why GAO Did This Study
Exempt organizations often provide charitable services, or in some instances, membership benefits in furtherance of an exempt purpose. They generally do not pay federal income tax. IRS examines exempt organization returns (Form 990 and others) to address noncompliance, which may promote confidence in the tax-exempt sector. In 2016, IRS started using three analytical models that use Form 990 data to identify potential noncompliance and select returns for examination. GAO was asked to review IRS's use of Form 990 data. This report assesses (1) IRS's use of data to select returns for examination, and (2) the process IRS has established for selecting returns. GAO analyzed (1) examination data from fiscal years 2016 through 2019, including results from the largest Form 990 model, and (2) model documentation for a generalizable sample. GAO interviewed IRS officials and assessed IRS policies and procedures using relevant standards for internal control.
What GAO Found
The Internal Revenue Service (IRS) used data to select almost 70 percent of its examinations of Form 990 returns in fiscal year 2019. Almost half of these examinations were selected using models that score returns for potential noncompliance (see figure). Of the returns examined that were selected using the model, 87 percent resulted in a change to the return, indicating that IRS identified noncompliance. GAO found that the model did not improve change rates compared to prior selection methods, and that a higher model score is not associated with a higher change rate. IRS has not fully implemented or documented internal controls in its established processes for analyzing data for examination selection. For example:
IRS has not defined measurable objectives for using data to select returns for examination. Without measurable objectives, IRS cannot assess how well it is doing or fully implement other internal controls.
IRS's models have deficiencies affecting the validity and reliability of return scoring and selection. IRS has incomplete definitions and procedures and did not always follow its definitions when assigning point values for identifying potential noncompliance for examination. As a result, return scoring by the models is not always consistent.
IRS did not consistently document the processing and use of data in decision-making on examination selection. Without such documentation, IRS cannot support its use of data in examination selection in all cases.
IRS does not regularly evaluate examination selection. Examination data were inconsistent across years, and IRS tracks only one prior year of data. IRS also did not save data on all returns that the models scored. Without data and regular evaluations, IRS cannot ensure that its models are selecting returns as intended and that deficiencies are identified and corrected.
What GAO Recommends
GAO makes 13 recommendations, including that IRS establish objectives, revise model documentation, fully document how it processes and uses data in selection decisions, and regularly evaluate examination selection. IRS agreed with all recommendations except one, related to evaluating examination selection methods using consistent historical data over time. GAO continues to believe that this recommendation is valid, as discussed in the report.
Background
Dispute Resolution Options
Congress appropriated $12.8 billion in federal funds under Part B of IDEA for fiscal year 2019. Under IDEA, Education awards funds to state educational agencies (SEA), which provide these funds to local educational agencies (LEA). SEAs also monitor Part B implementation by school districts. As a condition of receiving IDEA funds, states are required to have policies and procedures in effect that are consistent with IDEA requirements, including requirements related to procedural safeguards and due process procedures. IDEA requires states to make dispute resolution options available, which parents may use to resolve disagreements regarding a school district's decisions related to the identification, evaluation, and educational placement of their child with a disability, or the provision of a free appropriate public education (FAPE) to the child. These options include:
Mediation. Mediation is a confidential, voluntary process in which a trained, qualified, and impartial mediator, paid for by the SEA, works with the parents and school district to try to reach an agreement about the IDEA-related issue in dispute. Mediations can be initiated by either the parent or the school district to resolve any dispute related to IDEA, including matters that arise before the filing of a due process complaint. If agreement is reached through the mediation process, the parties must execute a legally binding agreement.
Due process complaint. A due process complaint is a request for a formal due process hearing. A due process hearing is conducted before a qualified and impartial hearing officer and involves presentation of evidence, sworn testimony, and cross-examination. It often involves attorneys and expert witnesses, and thus may be more costly than other dispute resolution options for all parties involved. Because a due process hearing is a formal proceeding, it may be more adversarial in nature than other dispute resolution options. Either party can appeal a hearing officer's decision by bringing a civil action in any state court of competent jurisdiction or in a U.S. district court. Not all due process complaints result in a due process hearing. For example, some due process complaints may be withdrawn by the parents or may not meet the requirements for filing a complaint under IDEA regulations. In addition, in some cases, the parents and school district may resolve the complaint through alternative means, such as mediation. The 2004 IDEA reauthorization added a resolution meeting requirement to the due process complaint process so that the parties can try to resolve the issues in a parent's due process complaint collaboratively before proceeding to the formal and often costly due process hearing. A resolution meeting must take place within 15 days of a parent filing a due process complaint and before any due process hearing involving a hearing officer, unless both parties agree in writing to waive the meeting or agree to use IDEA's mediation process. Settlement agreements reached through resolution meetings must be in writing and are legally binding.
State complaint. An individual or an organization, including one from another state, may file a complaint with the SEA alleging that a public agency has violated a requirement of Part B of IDEA or its implementing regulations.
Once the SEA receives such a complaint, it must engage in specified procedures to resolve the complaint, including conducting an on-site investigation if the SEA determines that one is necessary. Generally, the SEA must issue a written decision within 60 calendar days unless exceptional circumstances warrant an extension or the parties agree to extend the timeline to engage in an alternative dispute resolution procedure. The SEA's written decision must include findings of fact and conclusions and the reasons for the SEA's final decision. The state's complaint procedures must include steps for effective implementation of the SEA's final decision, including any corrective actions to achieve compliance, if needed. IDEA also requires school districts to provide parents with a procedural safeguards notice, which explains all of the procedural safeguards available to them under IDEA.
Education and State Responsibilities under IDEA
Education's Office of Special Education Programs (OSEP) administers IDEA and is responsible for data collection and monitoring, among other responsibilities.
Data collection. Under IDEA, SEAs are required to annually report to Education data on the use of mediation and due process procedures. Specifically, SEAs report data to OSEP, including the total numbers of mediation requests received; mediation agreements reached (related to a due process complaint or not related to a due process complaint); due process complaints filed; resolution meetings that result in a written settlement agreement; and due process hearings conducted. Each state also reports data on the timely resolution of state complaints and timely adjudication of due process complaints. According to Education officials, all dispute resolution data are aggregated at the state level, and Education does not collect dispute resolution data at the school or district level. According to Education officials, Education's collection of state-level dispute resolution data is consistent with the manner in which grant awards are made for Part B of IDEA. Because states are the grantees, it is the states that report data to Education.
Education's monitoring. IDEA requires Education to monitor SEAs to ensure they meet program requirements. According to Education officials, Education uses multiple methods to monitor states' implementation of IDEA, including reviewing data submitted by the states in their state performance plans and annual performance reports, conducting on-site monitoring visits to some states each year, and following up on concerns raised via customer calls and letters. Based on its monitoring and review of state dispute resolution data, among other information, Education is required under IDEA to annually determine whether each state meets the IDEA requirements or needs assistance or intervention.
Education's technical assistance. In addition to providing technical assistance to states, Education provides technical assistance to parents and the general public through its Parent Training and Information Centers (PTI) and the Center for Appropriate Dispute Resolution in Special Education (CADRE). PTIs are designed to help parents of children with disabilities participate effectively in their children's education. Education's technical assistance covers a range of topics, including IDEA dispute resolution options.
States' responsibilities. While Education monitors states, IDEA requires states to monitor and conduct enforcement activities in their school districts.
States are also responsible for investigating state complaints and producing reports with the results of their investigation, as well as providing mediators as needed to mediate disputes between school districts and parents. States may also provide other support and direct services, such as training and technical assistance, among other activities.
Dispute Resolution Options Were Used About 35,000 Times Nationally and Use Varied Across School Districts with Different Characteristics
Due Process Complaints Were the Most Commonly Used Dispute Resolution Option, and Disputes Were Most Frequently Related to Evaluations, Placement, Services and Supports, and Discipline
For the 6.8 million students ages 3 through 21 who were served under IDEA Part B in school year 2016-17, there were a total of 35,142 mediation requests, due process complaints, and state complaints filed nationwide. Over roughly the last decade, this total decreased by about 2 percent, according to data from CADRE. In addition, the mix of dispute resolution options used has changed. Since school year 2004-05, the number of due process complaints declined, while the number of mediation requests increased. However, due process complaints still made up more than half the total number of dispute resolution options used in school year 2016-17 (see fig. 1).
Due process complaints. While the overall number of due process complaints has declined since school year 2004-05 (from 21,118 to 18,490), the percentage of fully adjudicated due process hearings (i.e., due process complaints that went all the way through the hearing process and a hearing officer rendered a decision) has declined more sharply. In school year 2004-05, about 35 percent of all due process complaints were fully adjudicated; in school year 2016-17, 11 percent were fully adjudicated. Due process complaints may not be fully adjudicated for several reasons. For example, complaints may be withdrawn by the filer, dismissed by the hearing officer, or resolved through other means, such as a resolution meeting or an agreement to try to resolve the dispute through mediation. CADRE's data show that resolution meetings were held less than half the time due process complaints were filed in 6 of the 12 school years between 2005-06, the first year resolution meetings were used, and 2016-17. When resolution meetings did occur, they resulted in resolution agreements less than 30 percent of the time in 10 of these 12 years.
Mediation. According to CADRE, mediation is viewed as less adversarial than due process hearings, in part because parties work together to try to reach an agreement. CADRE also reports that mediation is generally believed to be less costly than due process hearings because it typically requires less time and may require less involvement from attorneys and other experts. The number of mediation requests increased from school year 2004-05 to 2016-17 as Education and the states encouraged dispute resolution options that stakeholders told us were less costly and less confrontational. In school year 2016-17, there were 11,413 mediation requests, the largest number in any school year from 2004-05 to 2016-17. In addition, mediation requests resulted in mediation meetings at least 60 percent of the time in each of these school years. Those meetings resulted in agreements at least two-thirds of the time in every year but one (see fig. 2).
Furthermore, more than half of the mediation meetings held stemmed from due process complaints that had been filed, which suggests that parties involved in the complaints may have been using mediation meetings to try to avoid a due process hearing.
State complaints. State complaints were the least commonly used dispute resolution option. There were 5,239 state complaints filed in school year 2016-17, down from 6,201 in school year 2004-05 (see fig. 3). On average, from school year 2004-05 to 2016-17, approximately two-thirds of complaints filed resulted in the state issuing a report, and about two-thirds of those reports included findings of noncompliance with some aspect of IDEA on the part of the school district. According to state officials we spoke with, a state that receives a complaint will issue a report unless the filer withdraws the complaint, the state determines that the complaint is not about an issue covered under IDEA, or the complaint is resolved through other means.
The rate at which all three dispute resolution options were used varied widely across states. Some states and territories had much higher rates of dispute resolution activity than others. In school year 2016-17, due process complaints were generally used at a higher rate nationwide than mediation requests and state complaints, according to CADRE data (27.2, 16.8, and 7.7 per 10,000 IDEA students served, respectively). However, the rate of due process complaints filed in states ranged from a high of 252.1 per 10,000 IDEA students served in the District of Columbia to a low of fewer than 1 per 10,000 in Nebraska. Similarly, some states had much higher rates of mediation requests and state complaints filed than others. Within states, the mix of dispute resolution options used also varied. In some states, due process complaints were used much more frequently than mediation requests and state complaints, while other states saw mediation requests or state complaints used most frequently.
According to state officials, Parent Training and Information Center (PTI) staff, Protection and Advocacy (P&A) agency staff, and other stakeholders we interviewed, parents most commonly engage in IDEA dispute resolution because of concerns they have about the evaluations, placement, services and supports, and discipline related to the educational services their child receives. For example, a dispute related to placement may arise if a parent wants their child to spend more time in a regular education classroom as opposed to a self-contained classroom with only special education students. A parent might also object if a school district wants to place their child in an alternative school. On the other hand, some parents may seek an out-of-district placement for their child if they feel that more services will be available. A dispute over services may center on a parent asking for services for their child that the school district refuses to provide, or a parent believing that the school district is not providing services that are included in their child's individualized education program (IEP). Research we reviewed generally supported what stakeholders told us were the main causes of disputes, although discipline issues were not reported as frequently. Other issues that led to disputes less frequently, according to those we spoke with, included lack of progress on the part of the student, parental participation in decision making, transition services, and other accommodations for students.
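As a quick illustration, the per-10,000 rates cited above follow directly from the school year 2016-17 counts reported earlier:

```python
# Recompute the CADRE per-10,000 rates from the counts in this report.
students_served = 6_800_000  # students served under IDEA Part B, school year 2016-17
counts = {
    "due process complaints": 18_490,
    "mediation requests": 11_413,
    "state complaints": 5_239,
}
for option, count in counts.items():
    print(f"{option}: {count / students_served * 10_000:.1f} per 10,000 students")
# Prints approximately 27.2, 16.8, and 7.7, matching the figures above.
```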
Dispute Resolution Activity Varied Based on the Income Level and Racial/Ethnic Characteristics of Districts in Selected States
When we analyzed five states' dispute resolution data, we found that dispute resolution activity varied based on districts' income levels. In general, a greater proportion of very high-income districts had dispute resolution activity, and these districts also had higher rates of dispute resolution activity than very low-income districts (see fig. 4). This pattern was mostly consistent for all three types of dispute resolution options. Specifically:
Mediation requests and due process complaints: In all five states, a greater proportion of very high-income districts tended to have mediation or due process activity than very low-income districts. Similarly, very high-income districts generally had a higher rate of such activity than very low-income districts. (See app. III for data on the individual states.)
State complaints: A greater proportion of very high-income districts had state complaint activity in four of the five states. In addition, very high-income districts also had a higher rate of state complaints compared to very low-income districts in three of the five states. (See app. III for data on the individual states.)
When we looked at districts' racial and/or ethnic characteristics in our five states, we found that a smaller proportion of very high-minority districts had dispute resolution activity than very low-minority districts, but the very high-minority districts generally had higher rates of activity (see fig. 5, and app. III for data by state). We also analyzed the results of initiated disputes by districts' income level and racial and/or ethnic characteristics—meaning the percentage of disputes that resulted in a meeting or an agreement for mediation requests, adjudication for due process complaints, and a report with findings for state complaints. As shown in tables 1-3, there was no consistent pattern in the results of dispute activity for all three types of disputes across districts with different income levels and racial/ethnic characteristics.
Education and State Efforts Are Designed to Help Parents Who May Face Challenges
Parents May Face Challenges Using IDEA Dispute Resolution Options
Stakeholders we interviewed identified several types of challenges parents may face in using IDEA dispute resolution options, such as the cost of attorneys for due process hearings.
Cost and Availability of Attorneys and Expert Witnesses
While parents may hire an attorney to help with dispute resolution, stakeholders consistently told us the cost of attorneys and expert witnesses was a significant barrier, especially for low-income parents, to using the due process complaint option in particular. Parents are not required to use an attorney at a due process hearing, but stakeholders told us that prevailing is difficult without legal representation and expert witnesses to testify on the parents' behalf. An Education official told us that school districts may provide a list of free and low-cost attorneys to parents. According to stakeholders we interviewed, in some cases, Protection and Advocacy agencies (P&A)—which are funded by the Department of Health and Human Services (HHS)—provide legal services to parents at no cost, or refer clients to other attorneys. In general, however, very few attorneys will work on a pro bono basis to handle IDEA dispute cases, according to stakeholders.
Further, under IDEA, a court may award parents reasonable attorney's fees and costs if they prevail in a due process hearing; however, parents cannot recoup expert witness costs regardless of the outcome. Also, if parents do not prevail at a due process hearing, they may be responsible for the school district's legal costs in addition to their own, which can be a disincentive to going through a hearing. Education regulations allow parents to be accompanied and advised in due process hearings by individuals with special knowledge about children with disabilities, and according to IDEA regulations, whether those individuals can legally represent them is determined by state law. According to Education officials, parents bringing non-attorneys to support them may help reduce costs. However, the school district is still likely to have legal representation. The amount of direct legal services P&As provide varies across, and even within, states. P&A staff we interviewed in one state told us that their attorneys in one city spend most of their time assessing parents' cases, reviewing documentation, giving advice, answering questions, and conducting training for parents, but little time participating in actual hearings. In contrast, the P&A attorneys we spoke with in another city in the same state said that 50 to 70 percent of their work is direct representation at hearings. Staff at other P&As we spoke with work primarily on cases that fall within their priority areas or cases they believe will have wide-reaching or systemic effects. The availability of attorneys can also be a challenge. According to stakeholders we interviewed, some areas, particularly rural ones, may have fewer available attorneys. However, Education officials told us that school districts in rural or sparsely populated areas may be more likely to have an incentive to resolve a dispute before it goes to a due process hearing because smaller school districts are unlikely to have in-house attorneys, and hiring an attorney is expensive.
Other Factors Affecting Parents' Willingness and Ability to Initiate Dispute Resolution
According to stakeholders, many parents feel they are at a disadvantage in a conflict with the school district due to an imbalance of power, and so they may be reluctant to engage in dispute resolution and take on the associated costs when they feel they are unlikely to prevail. Stakeholders also said that some parents who live in less populated and more rural areas may be reluctant to initiate dispute resolution out of concern for their privacy and because, for example, in these communities they and their children are more likely to see the teachers, principals, and district officials at the grocery store or at church, which may be awkward. Furthermore, these families may have no other educational options in the area to turn to if the dispute becomes too contentious. In some cultures, according to stakeholders, it is less common to challenge an authority figure, such as a school district official or teacher. In addition, according to stakeholders, parents may fear that the school district will retaliate against their children or them if the parents initiate a dispute, for example by threatening to stop providing services.
Stakeholders also told us that they are aware of cases in which the school district has called the state's child protective services agency in what they believe was retaliation for parents bringing a dispute against the district, and that parents who are undocumented may fear that raising a dispute might result in unwanted attention from immigration officials. Further, according to stakeholders, some parents face other challenges, such as language barriers, difficulty obtaining time off from work, or lack of transportation or internet access, that could affect their use of IDEA dispute resolution and their ability to take advantage of resources, such as IDEA dispute resolution training, workshops, and online information.
Education Funds Technical Assistance Providers That Explain Dispute Resolution Processes to Parents
Education and SEAs provide technical assistance to support parents' understanding of their rights under IDEA and to facilitate their use of dispute resolution options. According to stakeholders we interviewed, the area of special education in general and the federal law, IDEA, are complicated, and parents often do not understand the IDEA dispute resolution process. Education supports several efforts to help parents understand and use dispute resolution options afforded to them under IDEA.
Procedural safeguards notice. To receive IDEA funds, states must ensure school districts notify parents of their rights under IDEA, including the right to initiate dispute resolution about the educational services provided to their child. School districts must provide parents a notice, referred to as a procedural safeguards notice, that explains their rights under IDEA. According to Education officials, to help states meet their IDEA requirements, the agency developed a model notice, which states can, but are not required to, have school districts use to notify parents of their rights under IDEA. States may also develop their own procedural safeguards notice as long as it includes all the information required under IDEA.
Technical assistance. Education established and funds different types of technical assistance centers that provide information, training, workshops, and advocate services, and collect and disseminate data on dispute resolution, among other activities. Specifically, Education officials reported that Education provided about $21 million to the network of PTIs, about $2.9 million to the network of Community Parent Resource Centers, and $750,000 to CADRE in fiscal year 2019. In addition, Education's technical assistance centers collaborate with P&As in some cases. Further, P&A staff we interviewed in some of our selected states told us they conduct trainings for advocates who attend meetings with parents; other attorneys working on special education issues; community organizations and agencies; and parents. Education officials told us that, in the past, the agency has facilitated meetings between PTIs and P&As to improve collaboration between these organizations. According to Education officials, these meetings resulted in informal agreements between PTIs and P&As. In addition, Education's Center for Parent Information and Resources, the national technical assistance center to the PTIs, provides resources on its website to help parents learn about their rights and the procedural safeguards notice they receive from schools.
For example, the center’s website contains an explanation of the procedural safeguards notice and online training on procedural safeguards, among other issues. The website also provides contact information for the PTI(s) in each state. Further, CADRE, part of Education’s technical assistance and dissemination network, has developed concise, easy-to-read materials that it distributes to parent centers and others to help them understand the procedural safeguards and how to resolve disputes with school districts. Stakeholders we interviewed told us that parents often do not understand IDEA dispute resolution procedures, but that PTI staff are available to explain them, discuss the procedural safeguards notice, and offer other assistance at no cost to the parents. According to stakeholders, the IDEA procedural safeguards notice is usually a lengthy document that uses complex, legal language and that parents say the notice is hard to understand. Education officials told us their model notice is complex in part because it must reflect all the applicable provisions of the IDEA statute and regulations. To help parents understand the notice and their dispute resolution options, the PTIs in our selected states offer a variety of assistance, such as staffing telephone helplines, meeting with parents in person, offering workshops and training for parents, and developing or making available easy-to-read documents and other resources. PTI staff can also attend mediation meetings with parents and help parents write state complaints, including parents for whom English is not their first language. In addition, PTI staff told us they try to help specific populations, including parents who are not native English speakers, understand and navigate the dispute process. In some cases, PTI staff will attend mediation meetings with or provide interpreters for non-English speaking parents. PTI staff are also available to help parents who have lower levels of formal education or who have disabilities, which stakeholders identified as other factors that could affect parents’ use of dispute resolution options. States Also Provide Technical Assistance and Training to Help Parents Use Dispute Resolution Options Our five selected states provide technical assistance and training to help parents understand and use dispute resolution options, including how to file a state complaint. State officials in some of our selected states said they make available plain language documents that can supplement the legally required procedural safeguards notice. For example, all of the states created a parents’ rights handbook and several have one- or two- page documents describing the IDEA dispute resolution processes that they make available on the state’s public website (see fig. 6 for an example of such a document). In addition, the states we contacted post information about IDEA on their websites in multiple languages. For example, one state’s parents’ rights handbook is available in English and 11 other languages. Regarding the cost of due process hearings discussed earlier, one state we contacted provides information about free and low-cost services along with the state’s parents’ rights booklet, and several states include contact information for the PTIs and sometimes P&As in their booklet. State officials we interviewed also said their states offer telephone helplines that parents can call with questions about their dispute resolution options and the processes involved. 
Some state officials told us they have staff available by phone to explain the dispute options to parents, including parents who do not speak English or who have lower levels of formal education. One state has a phone line that connects parents to an early resolution specialist who will try to help parents resolve the dispute before a formal complaint becomes necessary. Officials in one state told us that the state has installed voice interpretation technology for its helpline so that parents who need assistance with hearing or speaking can communicate with staff. Some states also employ staff who can serve as interpreters to better assist non-English-speaking parents. Officials in some states told us that staff answering the helpline are available to answer questions about dispute resolution documents for parents who have difficulty reading. In addition, some of the states we contacted said they made requesting mediation and/or filing state complaints easier by posting the required initiation forms on their websites. According to staff from one state, after the state posted its state complaint form online, the number of complaints doubled in 5 years. Further, some of our selected states provide training and technical assistance to school districts, parent advocate groups, and parents on accessing IDEA dispute resolution options. One of our selected states uses 16 regional support teams to provide training and technical assistance to school districts. Another state conducts parent training jointly with the Education-funded PTI in the state. We have previously reported on other efforts some states have taken to help parents understand their dispute rights and reduce the need for parents to initiate formal disputes. For example, some states have offered conflict resolution skills training to school district staff and parents, and support facilitated IEP meetings, among other initiatives.
Agency Comments and Our Evaluation
We provided a draft of this product to the Department of Education for review and comment. We received written comments from Education, which are reproduced in appendix I. Education also provided technical comments that we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Comments from the Department of Education
Appendix II: Objectives, Scope, and Methodology
This report examines the use of dispute resolution options available under the Individuals with Disabilities Education Act (IDEA). In particular, this report examines (1) how often IDEA dispute resolution options are used, and whether use in selected states varies across school district-level socioeconomic or demographic characteristics; and (2) what challenges parents face in using IDEA dispute resolution options and how Education and selected states help facilitate parents' use of these options.
To address our first objective, we obtained publicly available dispute resolution data at the national and state levels and collected and analyzed data on the number and types of dispute resolution options used from selected states at the school district level. To address how often dispute resolution options are used, we reviewed and analyzed publicly available data from the Center for Appropriate Dispute Resolution in Special Education (CADRE) from school years 2004-05 to 2016-17, the most recent data available when we conducted our analysis. We assessed the reliability of these data by interviewing knowledgeable CADRE staff and comparing CADRE data to other publicly available data. In addition, we interviewed staff at Parent Training and Information Centers (PTI) funded by the Department of Education (Education) and Protection and Advocacy (P&A) agencies funded by the Department of Health and Human Services, as well as state educational agency (SEA) officials in our five selected states to determine the reasons parents use dispute resolution. We also interviewed various national organizations that advocate for parents, local educational agencies (LEA), and SEAs. To determine whether the use of dispute resolution options varied by socioeconomic or racial and/or ethnic characteristics, we analyzed dispute resolution data we collected at the LEA level from five states for school year 2017-18, the most recent data available at the time of our analysis. We selected these states—Massachusetts, Michigan, New Jersey, Ohio, and Pennsylvania—based on a combination of criteria, including the amount of dispute activity within the state (that is, the number of mediations, due process complaints, and state complaints); the large number of LEAs in the state with highly homogeneous student populations, to allow us to compare across LEAs with different student populations; the large number of IDEA-eligible students in the state; and the states' ability to provide reliable LEA-level data on disputes. We used Education's Common Core of Data (CCD) to categorize each LEA in our selected states based on (1) income level, as measured by the percentage of students eligible for free or reduced-price school lunch; (2) racial and/or ethnic makeup, as measured by the percentage of Black and/or Hispanic students; and (3) population density, as categorized by CCD. We used Education's school year 2016-17 CCD data, which were the most recent data available at the time of our analysis. In some cases, states had not reported 2016-17 free or reduced-price school lunch data to CCD, so we used CCD data from a previous year. We assessed the reliability of the CCD data by (1) reviewing existing information about the data and the system that produced them and (2) reviewing data reliability assessments of the data from other recent GAO reports. We assessed the reliability of dispute resolution data provided by the states by (1) performing electronic testing of required data elements, (2) conducting interviews with knowledgeable agency officials and reviewing written responses to data reliability questions, and (3) reviewing existing information about the data and systems that produced them, where available. We determined that the CCD and data collected from the states were sufficiently reliable for the purposes of this report.
We matched the LEA-level dispute data provided by our selected states to the LEA-level socioeconomic, race/ethnicity, and population density data from CCD to determine whether the frequency of use of dispute resolution options or the types of options used varied across LEAs with different characteristics. Because our analyses are at the LEA level, and not the individual student or family level, it is impossible to know with certainty whether the families using the dispute resolution options in our school districts match the categorization of the districts themselves. To address this concern to the greatest extent possible, we report on LEAs that are highly homogeneous. These districts are those in which: 90 percent or more of the students were eligible for free or reduced-price school lunch (very low-income districts), compared to districts in which 10 percent or fewer of the students were eligible (very high-income districts); and 90 percent or more of the students were Black and/or Hispanic (very high-minority districts), compared to districts in which 10 percent or fewer of the students were Black and/or Hispanic (very low-minority districts). We conducted two separate analyses on the combined data. We analyzed and compared: 1. the percentage of all the "very low" districts in our data that had dispute resolution activity to the percentage of all the "very high" districts in our data with dispute resolution activity, as measured by whether the district had one or more mediation requests, due process complaints, or state complaints. We also conducted this analysis to compare the percentages of urban, suburban, and rural districts with dispute resolution activity. 2. the rate of dispute resolution activity in our "very low" districts and our "very high" districts, as measured by the number of mediation requests, due process complaints, and state complaints per 10,000 students served under IDEA. We also conducted this analysis for urban, suburban, and rural districts. (A simplified sketch of both computations follows at the end of this discussion.) This first analysis compared the percentages of school districts with different income and racial and/or ethnic characteristics that had at least one mediation request, due process complaint, or state complaint. In essence, it shows the differences in whether there is any dispute resolution activity in districts with different income and racial and/or ethnic characteristics, in our selected states. Because our analysis counts districts in which a single dispute was initiated in the same manner as those with more activity, it is not potentially skewed by individual districts that may have unusually high or low levels of dispute resolution activity. To supplement this analysis, our second analysis compares the rate of dispute activity in these types of districts, which shows the magnitude of the various types of dispute resolution activity. Although we use this 90-10 threshold in the body of the report, we also conducted these analyses for districts where 75 percent or more of students were eligible for free or reduced-price lunch and for districts where 25 percent or fewer were eligible. Similarly, we conducted our race/ethnicity analyses at this 75-25 level. These additional analyses can be found in appendix III. The results from our five states are not generalizable to all states. To address both research objectives, we reviewed relevant federal laws and regulations.
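The following is a minimal sketch, in Python with pandas, of the two computations described above. The file and column names (lea_disputes.csv, frpl_pct, idea_students, and the three dispute columns) are hypothetical placeholders; the sketch illustrates the logic of the analyses, not the actual code or data files we used.

```python
import pandas as pd

# Hypothetical input: one row per LEA (school district) in a selected state.
df = pd.read_csv("lea_disputes.csv")

dispute_cols = ["mediation_requests", "due_process_complaints", "state_complaints"]

# Highly homogeneous districts under the 90-10 income threshold.
very_low_income = df[df["frpl_pct"] >= 90]   # 90 percent or more FRPL-eligible
very_high_income = df[df["frpl_pct"] <= 10]  # 10 percent or fewer FRPL-eligible

def pct_with_activity(districts: pd.DataFrame) -> float:
    """Analysis 1: share of districts with at least one dispute of any type."""
    any_activity = districts[dispute_cols].sum(axis=1) > 0
    return 100 * any_activity.mean()

def rate_per_10k(districts: pd.DataFrame) -> float:
    """Analysis 2: disputes per 10,000 students served under IDEA."""
    return 10_000 * districts[dispute_cols].sum().sum() / districts["idea_students"].sum()

for label, group in [("very low-income", very_low_income),
                     ("very high-income", very_high_income)]:
    print(f"{label}: {pct_with_activity(group):.0f} percent of districts had any "
          f"activity; {rate_per_10k(group):.1f} disputes per 10,000 IDEA students")
```

The race/ethnicity comparison, the urban/suburban/rural comparison, and the 75-25 sensitivity analyses in appendix III follow the same logic with different grouping columns and thresholds.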
We also reviewed Education documents, including its model Notice of Procedural Safeguards, PTI and CADRE documents, and relevant literature related to challenges parents face using dispute resolution. In addition, we interviewed Education officials about challenges families face in using dispute resolution options and Education's efforts to assist families. We also interviewed PTI, P&A, and advocacy organization staff, and SEA officials from the five states from which we collected data. We conducted this performance audit from June 2018 to November 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix III: Additional Data Tables
This appendix contains tables that show data based on analyses we conducted using dispute resolution data collected from five states—Massachusetts, Michigan, New Jersey, Ohio, and Pennsylvania—for school year 2017-18, and the Department of Education's Common Core of Data for school year 2016-17. In some cases, states did not report free or reduced-price school lunch data for school year 2016-17. In those cases, we used the most recent year for which the state reported those data. The total number of local educational agencies and the total number of students served in our income analysis and our race/ethnicity analysis are slightly different.
Appendix IV: GAO Contact and Staff Acknowledgments
Contact
Jacqueline M. Nowicki, Director, (617) 788-0580 or nowickij@gao.gov.
Staff Acknowledgments
In addition to the contact named above, Bill MacBlane (Assistant Director), David Barish (Analyst-in-Charge), and Linda Siegel made key contributions to this report. In addition, key support was provided by James Bennett, Deborah Bland, Holly Dye, Sheila R. McCoy, Jean McSween, John Mingus, Amy Moran Lowe, Moon Parks, James Rebbe, Kelly Snow, Joy Solmonson, and Greg Whitney.
Why GAO Did This Study
Almost 7 million children aged 3 to 21 received special education services under Part B of the Individuals with Disabilities Education Act (IDEA) in school year 2016-17. IDEA contains options parents and school districts may use to address disputes that arise related to the education of a student with a disability. These options include mediation and due process complaints, which can be used by parents and school districts; and state complaints, which can be used by any organization or individual, including the child's parent, alleging an IDEA violation. GAO was asked to review parents' use of IDEA dispute resolution options. This report examines (1) how often IDEA dispute resolution options are used, and whether use in selected states varies across school district-level socioeconomic or demographic characteristics; and (2) what challenges parents face in using IDEA dispute resolution options and how Education and selected states help facilitate parents' use of these options. GAO reviewed publicly available data on dispute resolution at the state level and collected data at the school district level from five states—Massachusetts, Michigan, New Jersey, Ohio, and Pennsylvania—selected based on the number of disputes initiated and school district characteristics, among other factors. GAO also reviewed relevant federal laws, regulations, and Education and state documents; and interviewed Education officials, state officials, staff from organizations providing technical assistance in these five states, and other national advocacy organizations.
What GAO Found
In school year 2016-17, 35,142 special education disputes were filed nationwide, and in the five selected states GAO reviewed, the use of dispute resolution options varied across school districts with different socioeconomic and demographic characteristics. The Individuals with Disabilities Education Act (IDEA) provides parents several ways to file and resolve disputes about plans and services that school districts provide to students with disabilities. In most of the five states GAO reviewed, a greater proportion of very high-income school districts had dispute resolution activity than very low-income districts, and very high-income districts also had higher rates of dispute activity. (In this analysis, "very high-income" districts are those in which 10 percent or fewer of students are eligible for free or reduced-price school lunch (FRPL), and "very low-income" districts are those in which 90 percent or more of students are eligible for FRPL.) GAO also found that in most of these states, a smaller proportion of predominantly Black and/or Hispanic districts had dispute resolution activity compared to districts with fewer minority students; however, predominantly Black and/or Hispanic districts generally had higher rates of such activity. Technical assistance providers and others told GAO that parents used dispute resolution most often for issues related to school decisions about evaluations, placement, services and supports, and discipline of their children. Parents may face a variety of challenges in using IDEA dispute resolution, and the Department of Education and states provide several kinds of support that, in part, may address some of these challenges. Stakeholders cited challenges such as paying for attorneys and expert witnesses at a due process hearing, parents' reluctance to initiate disputes because they feel disadvantaged by the school district's knowledge and financial resources, and parents' lack of time off from work to attend due process hearings.
Education and state agencies provide technical assistance to support parents' understanding of their rights under IDEA and to facilitate their use of dispute resolution options, for example, by providing informational documents and telephone helplines to parents.
Registering for and Receiving Assistance from FEMA and Its Partners Posed Challenges for Individuals with Disabilities Following the 2017 Disasters
Aspects of FEMA's Application Process for Assistance Created Challenges for Individuals with Disabilities
To receive FEMA assistance under FEMA's Individuals and Households Program, through which disaster survivors can receive help with housing and other needs, individuals must register by answering a standard series of intake questions. In our May 2019 report, we found that some individuals with disabilities may have faced long wait times and unclear registration questions, and that FEMA's internal communication across its programs about survivors' disability-related needs was ineffective.
Long wait times: Individuals who tried to apply for assistance using the helpline confronted long wait times, which may have posed greater challenges for those with disabilities. In the days after Hurricane Maria affected Puerto Rico and the U.S. Virgin Islands—when survivors from Harvey and Irma were concurrently contacting the helpline—up to 69 percent of calls went unanswered, and the daily average wait time for answered calls peaked at almost an hour and a half, according to our analysis of FEMA data. While long wait times could be burdensome for all individuals, state officials and disability advocates we interviewed said long wait times were especially burdensome for people with certain disabilities, such as those with attention disorders or whose assistive technology prevents multitasking when waiting on hold.
Unclear registration questions: FEMA's registration process did not give individuals a clear opportunity to state that they have a disability or request an accommodation because the registration did not directly ask registrants to provide this information. According to FEMA officials at the time, information about disability-related needs can help FEMA staff match individuals with disabilities with appropriate resources in a timely and efficient manner and target additional assistance, such as help with the application process. However, individuals with disabilities may not have requested accommodations or reported their disability and related needs during FEMA's registration-intake due to the unclear questions. As a result, the registration process may have under-identified people with disabilities. For example, in Puerto Rico, an estimated 21.6 percent of people have disabilities, according to 2017 Census data. However, less than 3 percent of all registrants in the territory answered "yes" to the disability-related question in response to Hurricanes Irma and Maria.
Ineffective communication across FEMA programs: Individuals may have faced challenges receiving necessary assistance because FEMA did not effectively track and communicate information about individuals' disability-related needs across its assistance programs after such needs were identified. FEMA officials we interviewed for the May 2019 report explained that accommodation requests and disability-related information identified after registration-intake are recorded in a general "notes" section of a registrant's case file, which can be easily overlooked as a case file is passed along to subsequent FEMA officials.
In our May 2019 report, we recommended that FEMA implement new registration-intake questions to improve FEMA's ability to identify and address survivors' disability-related needs.
FEMA concurred with this recommendation, and officials reported that in May 2019 the agency updated the questions to directly ask individuals whether they have a disability. According to FEMA's analysis of applications for assistance following recent disasters, which used the updated questions, the percentage of registrants who reported having a disability increased. FEMA officials stated that this increase gives them confidence that the change has improved FEMA's ability to identify and address disability-related needs of individuals affected by disasters. We also recommended that FEMA improve its communication of registrants' disability-related information across FEMA programs, such as by developing an alert within survivor files that indicates an accommodation request. FEMA did not concur with this recommendation, explaining that the agency lacks specific funding to augment the legacy data systems that capture and communicate registration information. In its comments on our May 2019 report, FEMA stated that it began a long-term initiative in April 2017 to improve data management and exchange, and improve overall data quality and standardization. After FEMA completes this initiative, which officials said will be in 2024, FEMA expects that efforts to share and flag specific disability-related data will be much easier. We believe that in the interim, FEMA could explore other cost-effective ways to improve communication, such as through agency guidance that encourages program officials to review registrants' case file notes. As FEMA moves ahead with its initiatives to improve data, we encourage it to consider and ultimately implement technology changes, such as developing an alert within files that indicates an accommodation request, to help improve communication across FEMA programs.
Officials Reported that Individuals with Disabilities Faced Challenges Obtaining Critical Goods and Services
State, territorial, and local governments are primarily responsible for response and recovery activities in their jurisdictions, including those involving health and safety. In our May 2019 report, we found that the substantial damage caused by the 2017 hurricanes prevented or slowed some individuals with disabilities from obtaining food and water. According to territorial and nonprofit officials in Puerto Rico and the U.S. Virgin Islands, as well as survivors we interviewed in the U.S. Virgin Islands, this was due to centralized distribution models, in which the majority of food and water was distributed to centralized locations around the islands. Officials from one governmental agency in Puerto Rico said this posed a major barrier to people with mobility challenges or without caregivers receiving food and water because they had to rely on home delivery, which took time and, in some cases, did not happen. We also found that Hurricane Maria survivors faced challenges obtaining needed medication and oxygen in Puerto Rico and the U.S. Virgin Islands, according to territorial and nonprofit officials. State, territorial, and local agencies are also primarily responsible for administering shelters, when necessary, for those affected by a disaster. We found in our May 2019 report that individuals with disabilities affected by the 2017 hurricanes may have faced challenges accessing basic services from local shelters, including restrooms and food, according to state, territorial, local, and nonprofit officials in Florida, Puerto Rico, Texas, and the U.S. Virgin Islands.
For example, nonprofit officials in Florida and Puerto Rico described instances of shelter residents with impairments that prevented them from accessing shelter restrooms. We also found that transportation was especially challenging for those who relied on public transportation or were unable to walk long distances, such as people with disabilities, according to state, territorial, local, and nonprofit officials we interviewed. For example, Florida state officials reported that few public transportation services, including paratransit, were functional following Hurricane Irma. This may have prevented some people with disabilities from maintaining their health and wellness—such as by shopping for groceries or going to medical appointments—after the storm, according to state officials. Officials we interviewed from Texas, Florida, and Puerto Rico for our May 2019 report said they had difficulty obtaining FEMA data that could help them deliver assistance to individuals, including those with disabilities. The officials explained that data—including names and addresses—showing who has registered for and received assistance from FEMA can help local governments and nonprofits identify who in their community needs assistance. To better facilitate authorized nonfederal partners obtaining these needed data, we recommended that FEMA develop and publicize guidance for partners who assist individuals with disabilities on how to request and work with FEMA staff to obtain the data, as appropriate. FEMA concurred with this recommendation, and officials told us in July 2019 that the agency plans to publish data-sharing guidelines on its website, among other actions.
FEMA Had Taken Limited Steps to Effectively Implement Its New Disability Integration Approach
FEMA Began Implementing Changes without Communicating Objectives to Regional Staff
Before initiating its new approach to disability integration, FEMA's Office of Disability Integration and Coordination (ODIC) distributed an explanatory memorandum and other documentation to FEMA staff. For example, an April 2018 memorandum to FEMA Regional Administrators outlined a proposal to add new disability integration staff in each FEMA region to foster day-to-day relationships with state, territorial, and local emergency managers and disability partners. Also, ODIC distributed a document that described FEMA's new approach to deployments. Under the new approach, fewer disability integration staff are to be deployed to disasters, and all deployable staff and staff in programmatic offices are to receive training on disability issues during response and recovery deployments. However, in our May 2019 report, we found that these documents did not articulate objectives that could help the agency define success for the new approach. We concluded that without a set of common objectives for FEMA's new disability integration approach, FEMA risks inconsistent application across its regions. In our report, we recommended that FEMA establish and disseminate a set of objectives for the new approach. FEMA concurred with this recommendation, and in July 2019 officials provided us with the draft of ODIC's strategic plan for 2019-2022, which includes strategic goals and objectives that the new disability integration approach can help achieve. ODIC officials told us they will be working throughout 2019 with FEMA's Office of External Affairs to disseminate the plan agency-wide and to nonfederal partners.
We will continue to monitor FEMA's progress toward sharing the objectives of its new approach to disability integration with critical stakeholders.
FEMA Had Not Documented Plans for Training All Deployed Staff on Disability Competencies, but Has Taken Steps to Offer Training to Nonfederal Partners
To implement FEMA's new deployment model, which will shift the responsibility of directly assisting individuals with disabilities from disability integration staff to all FEMA staff, FEMA planned to train all deployable staff and staff in programmatic offices on disability issues. We reported in May 2019 that FEMA officials emphasized the need to integrate disability competencies throughout FEMA's programmatic offices and deployable staff. However, we found that the agency did not have written plans—including milestones, performance measures, or a plan for monitoring performance—for developing new comprehensive training for all staff. Starting in the 2018 hurricane season, FEMA took initial steps toward training some deployed staff on disability issues. For example, FEMA required all staff to complete a 30-minute training on basic disability integration principles and offered targeted "just-in-time" training to deployed staff. We concluded that developing a training plan would better position FEMA to provide training to all staff to help achieve FEMA's intended goals. In our May 2019 report, we recommended that FEMA develop a plan for delivering training to FEMA staff that promotes competency in disability awareness. In its letter commenting on our May 2019 report, FEMA stated that ODIC is developing a plan to introduce the disability competency in FEMA's position task books for all deployable staff. The letter explained further that ODIC's plan will describe how FEMA will communicate the disability integration competency throughout the agency, establish milestones for measuring how effectively the competency is integrated across the agency, and outline how ODIC will monitor and measure integration of the competency across the deployable workforce. In July 2019, FEMA officials told us ODIC plans to hire new staff to focus on integrating the disability competency FEMA-wide. According to the officials, after the position task books are updated, ODIC will work with FEMA's training components to ensure that disability-related training is consistent with the content of the position task books. FEMA officials also noted that the Field Operations Division, and not ODIC, is responsible for measuring how effectively the disability competency is integrated across FEMA. We will continue to monitor FEMA's progress toward developing a plan for delivering training to promote competency in disability awareness among its staff. As noted in our May 2019 report, the plan for delivering such training should include milestones, performance measures, and how performance will be monitored. In our May 2019 report, we found that deploying a smaller number of disability integration staff and shifting them away from providing direct assistance to individuals with disabilities may result in nonfederal partners (such as state, territorial, and local emergency managers) providing more direct assistance to individuals with disabilities than they did previously.
In February 2017, we reported that the comprehensive introductory training course on disability integration that FEMA offered to its nonfederal partners included substantial information on how to incorporate the needs of people with disabilities in emergency planning. However, according to officials, FEMA stopped offering this 2-day course in September 2017. ODIC officials told us during our 2019 review that they had determined the course, as designed, did not provide emergency management partners with actionable training on meeting the needs of individuals with disabilities, and that they planned to replace it. However, we found in May 2019 that although officials had plans to replace the course with new training, they had not provided a timeline, which would help ensure that partners are provided with timely information on inclusive emergency management practices. We recommended that FEMA develop a timeline for completing the replacement course and, in June 2019, FEMA officials said they had begun procuring external consulting services to redevelop it. According to the officials, ODIC had evaluated alternatives to the suspended course and determined that an in-person, exercise-based course with remote participation capabilities would be an appropriate replacement. FEMA officials said the course will take about 1 year to develop and will be ready to field by August 2020. In conclusion, FEMA has taken a number of steps toward addressing our recommendations related to how it supports individuals with disabilities in obtaining disaster assistance. ODIC's draft strategic plan for 2019-2022, which articulates objectives for the new approach to disability integration, is likely to help facilitate consistent implementation agency-wide. In addition, we are hopeful that FEMA's revised registration-intake questions, as well as data-sharing guidance for nonfederal partners, will help FEMA and its partners better identify and assist registrants with disabilities. However, we continue to believe that implementing changes to disability integration before staff have been fully trained may leave FEMA staff ill-prepared to identify and address the challenges that individuals with disabilities face while recovering from disasters. We will continue to monitor FEMA's actions as it makes additional progress toward addressing our recommendations. Chairman Payne, Ranking Member King, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this testimony, please contact Elizabeth Curda, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Sara Schibanoff Kelly (Assistant Director), Sara Pelton (Analyst-in-Charge), and David Reed. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Three sequential hurricanes—Harvey, Irma, and Maria—affected more than 28 million people in 2017, according to FEMA. Hurricane survivors aged 65 and older and those with disabilities faced particular challenges evacuating to safe shelter, accessing medicine, and obtaining recovery assistance. In June 2018, FEMA began implementing a new approach to assist individuals with disabilities. This statement describes (1) reported challenges faced by these individuals in accessing disaster assistance from FEMA and its nonfederal partners following the 2017 hurricanes; and (2) the extent to which FEMA has implemented changes in how it supports these individuals. This statement is based on a May 2019 GAO report, with selected updates. For the report, GAO analyzed FEMA documents and data from FEMA call centers and also visited 2017 hurricane locations to interview state, territorial, and local officials. GAO also interviewed FEMA officials from headquarters and officials deployed to each disaster location. To update the status of FEMA's progress toward addressing GAO's recommendations, GAO interviewed FEMA officials and analyzed agency documents.
What GAO Found
GAO's May 2019 report found that some individuals who are older or have disabilities may have faced challenges registering for and receiving assistance from the Federal Emergency Management Agency (FEMA) and its nonfederal partners (such as state, territorial, and local emergency managers). FEMA's registration did not include an initial question that directly asked individuals whether they have a disability or would like to request an accommodation. GAO recommended that FEMA use new registration-intake questions to improve the agency's ability to identify and address individuals' disability-related needs. FEMA concurred and, in May 2019, updated the questions to directly ask individuals whether they have a disability. GAO found that the substantial damage caused by the 2017 hurricanes prevented or slowed some individuals with disabilities from obtaining food, water, and other critical goods and services from states, territories, and localities. Officials from one state reported that few public transportation services, including paratransit, were functional following the 2017 hurricane affecting the state. The officials said this may have prevented people with disabilities from maintaining their health and wellness—such as by shopping for groceries or going to medical appointments—after the storm. GAO's May 2019 report also found that FEMA had taken limited steps to implement the agency's new approach to assist individuals with disabilities. GAO recommended the agency establish and disseminate objectives for implementing its new approach. FEMA concurred and developed a draft strategic plan that includes strategic goals and objectives for the new approach, which the agency plans to finalize and disseminate in 2019. GAO recommended that FEMA, as part of its new approach, develop a plan for delivering training to all FEMA staff deployed during disasters that promotes competency in disability awareness. In concurring with this recommendation, FEMA described its plan to incorporate a disability awareness competency into the job requirements for all deployable staff, but it has not yet developed a plan for training. GAO's May 2019 report also recommended that FEMA develop a timeline for completing the development of training on incorporating the needs of individuals with disabilities into emergency planning, which it planned to offer to its nonfederal partners.
FEMA concurred with GAO's recommendation and, in June 2019, officials began procuring external consulting services to develop a replacement course. According to officials, the course will take about 1 year to develop and will be ready to field by August 2020.
What GAO Recommends
In the May 2019 report, GAO made seven recommendations to FEMA; FEMA concurred with six. FEMA has established new registration questions and a timeline to offer training to its partners. GAO continues to believe its recommendations, including that FEMA develop a plan to train its staff on disability awareness, are valid.
Background
Two GSA offices have roles in managing data related to federal real property. The Public Buildings Service (PBS) acts as a landlord for the federal government by acquiring new space for government agencies and tracking data on the property it acquires. PBS manages and publishes three databases that provide information to public stakeholders and researchers on federally owned and leased properties, and on properties eligible for disposal. Another office, the Office of Government-wide Policy (OGP), collects, manages, and reports on all federal real-property data through the FRPP database. OGP has managed the FRPP since its inception in fiscal year 2005 by collecting data from federal agencies on their real property assets. OGP is also responsible for compiling and managing the public database required by FASTA. FRPP is the most comprehensive database of federal real property holdings, containing details for about 398,000 assets (buildings, structures, and land). It is not public, but it also does not contain any classified national security information. FRPP data show the range of agency assets, including single buildings in a given location or multiple buildings located on installations, like a national park or research center. The FRPP identifies whether buildings are on installations, but does not identify whether buildings are public-facing or secure (and thus inaccessible by the public). We have repeatedly identified reliability issues with the FRPP, and GSA has taken actions to improve the reliability of FRPP data. Specifically, in 2016, GSA established its validation and verification (V&V) process. After agencies submit their data annually to FRPP, GSA identifies questionable entries (called anomalies) from 20 separate categories. Through these categories, GSA flags assets that are very small in size, that changed from the previous year, or that have unusual financial statistics, among other things. GSA then provides an annual list of anomalies to the agencies that entered the data. Agencies have 10 months to research each anomaly and correct errors or validate that the data are correct. GSA has provided instructions to agencies on how to respond to the V&V process. GSA also requires agencies to certify the accuracy of the data and has established database rules that require agencies to submit complete information on assets. GSA officials said that the agency must ultimately rely on agencies to submit correct data. FASTA required GSA to publish a single, comprehensive, descriptive database of all federal real property by December 16, 2017, while allowing it to exclude assets for reasons of national security, such as those on secure installations. FASTA also required the database to be made public to the extent its release is consistent with national security and procurement laws. GSA officials said that GSA used the FRPP as the basis for developing the database it released to the public at the end of 2017. GSA presents the data in two ways: as a downloadable spreadsheet or in a searchable mapping application.
GSA's Efforts Have Not Effectively Addressed FRPP's Reliability Issues, Which Affect the Public Database
Most Street Addresses in Public Data Are Incomplete or Otherwise Unusable
FASTA requires that the public database be machine-readable and permit searching or sorting of data to the extent practicable. Further, GSA guidance also calls for agencies to provide accurate and complete data.
Specifically, GSA requires agencies to include either a complete street address or geo-coordinates for all 398,000 assets in the FRPP; for example, GSA's FRPP data dictionary establishes the format agencies are to use when inputting asset addresses—number, street, city, zip code. This requirement carries over to the 305,000 assets included in the public database. We found that almost 214,000 of the assets in the public database included some street address information, but most of the addresses were incomplete or incorrectly formatted. Specifically, only approximately 70,000 (33 percent) fully met the standards. Another 91,000 assets did not include a street address at all. As a result, a computer would be able to locate only about 23 percent of the 305,000 civilian federal assets in the public database using street addresses. (See fig. 1.) GSA officials who manage the FRPP said that they were aware that many street addresses were not readable and have asked agency officials to review the accuracy of address information and correct it in future submissions. They acknowledged, however, that their efforts were not fully successful. As discussed later, GSA is currently taking steps to ensure that agencies provide more complete geo-coordinates when they submit data to the FRPP. For the remaining 67 percent of the assets (144,000) with some street address information that did not fully meet the standards, we found two types of problems—incomplete addresses and addresses that were not formatted correctly. First, more than 28,000 assets had street addresses that were incomplete. For example, instead of having individual address listings, we found that all 215 buildings at the Goddard Space Flight Center had a single listing of "Greenbelt Road." This road actually stretches more than 6 miles, and many other buildings are located along it. The front gate's complete address is "8800 Greenbelt Road." In these instances, GSA officials said that GSA's public-mapping program selects the mid-point of the street, which in this case is over a mile from the public entrance to the installation. (See fig. 2.) As a result, someone using the database would not be able to determine exactly where Goddard is. Second, we found about 115,000 assets had street address information that was incorrectly formatted based on FRPP instructions. While we did not conduct a complete analysis of all these assets, we found examples of some of the address issues, such as:
Extra descriptive information about the property in the address field. For example, "N220 AG Science Bldg North U of Kentucky" and "Beltsville AG Research Center, 10300 Baltimore Avenue." The data in the address field for these two assets—which belong to the Department of Agriculture—could not be directly read by a computer or displayed on a map.
Unrecognizable text. For example, "2881 F;B Road" and "1-15, Exit 172, 1 Mile East." The data for these assets, which belong to the Department of Agriculture, could not be directly read by a computer or displayed on a map.
GSA officials said that users may be able to interpret the individual asset addresses in the database but that GSA's automated computer system could not map unreadable addresses. Similarly, a private-sector user who tried to use the public data to map federal facilities for clients said that he was unable to map many of the assets because addresses were not readable by his computer. As a result, he said that he excluded incomplete or unreadable addresses from the database he created. He noted that incomplete data would reduce clients' interest.
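To illustrate what a machine-readability check of this kind involves, the following is a minimal sketch in Python. The regular expression is an illustrative assumption based on the "number, street, city, zip code" format described above; it is not GSA's actual FRPP validation logic.

```python
import re

# Simplified pattern for "number street, city, zip code"; an illustrative
# assumption for this sketch, not GSA's actual FRPP validation rule.
ADDRESS_PATTERN = re.compile(
    r"^\d+\s+[A-Za-z0-9 .'-]+,\s*[A-Za-z .'-]+,?\s*\d{5}(?:-\d{4})?$"
)

def is_machine_readable(address: str) -> bool:
    """Return True if the address looks complete enough for automated mapping."""
    return bool(ADDRESS_PATTERN.match(address.strip()))

# Examples drawn from this report:
for addr in ("8800 Greenbelt Road, Greenbelt, 20771",     # complete: passes
             "Greenbelt Road",                            # incomplete: fails
             "N220 AG Science Bldg North U of Kentucky",  # descriptive text: fails
             "1-15, Exit 172, 1 Mile East"):              # unrecognizable: fails
    print(f"{addr!r}: {is_machine_readable(addr)}")
```

Under a rule of this kind, entries such as "Greenbelt Road" or "1-15, Exit 172, 1 Mile East" would be rejected at submission rather than passed through to the public database.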
We also found problems with assets for which agencies provided geo-coordinates (latitude and longitude). Specifically, GSA guidance states that geo-coordinates must include a minimum of four decimal places. Of the 305,000 assets included in the public database, almost 220,000 included geo-coordinates, but more than half—about 141,000—did not meet FRPP standards because they were not precise enough to map the location of the assets. GSA officials noted that agencies are required to enter some type of information in the field for address or geo-coordinates, but an "open data" format did not prevent agencies from reporting information that was not strictly a street and address number. Consequently, some agencies may have entered incorrect values for the geo-coordinates just to complete the field. Our analysis supports this view: of the roughly 131,000 assets that had both a street address and sufficiently detailed geo-coordinates, the two pointed to the same location for only about 550. In addition to the open data issue described above, officials also explained that GSA did not have a "business validation rule" in place that prevented agencies from inputting coordinates with fewer than four decimal places. GSA has taken a number of actions to correct the issues with geo-coordinates that officials say should help address this problem for the next release of the public data in 2020. For example, GSA added V&V anomaly categories for fiscal year 2018 data that identified GPS coordinates pointing to unlikely locations, such as a location in the water, which identified about 80,000 potential anomalies. Agencies are currently checking these. Additionally, GSA added a feature to the fiscal year 2019 FRPP submission form that will force agencies to provide geo-coordinates that are detailed enough for their data to be accepted. GSA officials said that they would consider taking additional steps once they have analyzed the results of the GPS coordinate anomaly categories. GSA has asked agencies to review addresses for accuracy, and officials indicated that they have discussed plans to improve these data. However, GSA has not taken specific steps to work with agencies to ensure they input correct street addresses in the public database in light of the "open data" format. The lack of correct street addresses can prevent users who may be interested in acquiring or leasing assets, or in installing telecommunications devices on an asset, from knowing exactly where those assets are located. As a result, until the street address information is complete and correctly formatted, the public may unknowingly pursue assets that are not available or suited to their needs.
GSA's V&V Process Does Not Efficiently Identify Erroneous Data
We found that while GSA has identified close to 30,000 potential errors in the FRPP database over the first 2 years of the V&V process, agencies confirmed only 5 percent as errors (1,291 of 28,572). Agencies validated the remaining 27,281 anomalies as correct or left them unresolved. The low number of errors being identified indicates that GSA's V&V process is not efficiently identifying errors in the data, either in terms of the anomaly categories themselves or the thresholds at which GSA flags data as an anomaly.
This situation could ultimately mean that agencies are spending time researching correct information that was flagged as potentially erroneous, or that they are not actually researching anomalies fully and are allowing mistakes to remain uncorrected. Agencies identified no anomalies as errors for five of GSA's 16 anomaly categories in 2017, raising questions about the anomaly categories GSA has identified. OMB guidance suggests that agencies perform extra tasks only when they are justified by their cost. GSA officials who manage the V&V process said that the high number of anomaly categories for which agencies found no errors could reflect that the anomaly categories are flagging correct data as anomalies or that agencies are validating data as correct without actively checking it. We found examples of both. For example, we examined a selected sample of 14 V&V data anomalies at DOE sites in New Mexico. GSA flagged the buildings for being very small—office buildings less than 400 square feet and warehouses less than 64 square feet—and we found that the information in the public database was correct. Figure 3 illustrates how information flagged as questionable can actually be correct according to GSA's reporting rules for agencies, which specify data categories, such as the types of buildings GSA considers to be warehouses. Specifically, GSA flagged assets at DOE's Los Alamos and Sandia National Laboratories because their square footage fell below certain amounts. But, in reality, these assets met GSA's criteria for offices and warehouses despite being small. We also found instances where an agency verified information as correct that was incorrect. Figure 4 illustrates examples of data validated as correct that were actually erroneous. Specifically, an agency erroneously reported water towers and antenna arrays as office buildings. Staff responsible for managing the V&V process for their agency's assets said that they did not always consult the personnel with the best knowledge of the assets in resolving anomalies. Instead, they relied on their own judgment when determining whether to forward the anomalies to asset managers to ultimately check the data and correct any errors. This resulted in some errors going uncorrected. Thresholds—the points at which GSA flags data as anomalies—result in a large number of flagged data elements, which can challenge the resources of affected agencies. Officials at two of our selected agencies said that the number of anomalies that the V&V process produces annually overwhelms their ability to validate the data. The large number of unresolved V&V anomalies appears to support this conclusion. GSA's guidance allows agencies 10 months to validate the anomalous data, but the number of anomalies that remain unresolved after 10 months has risen sharply. Figure 5 shows that while agencies addressed all anomalies in the first year, they have since struggled to keep up. As of October 2019, 106,231 anomalies, or approximately 71 percent, remained unresolved after 10 months. Officials who are responsible for resolving anomalies at two selected agencies said that more realistic anomaly categories or thresholds could reduce the number of anomalies and better target actual errors, an approach that could help agencies better prioritize their resources when researching anomalies. GSA staff who manage the FRPP said that they brainstormed internally and used industry standards and policy initiatives to develop anomaly categories. They also explained that they adjust thresholds within each category.
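To make the role of categories and thresholds concrete, the following is a minimal sketch in Python of threshold-based anomaly flagging, combining two checks drawn from this report: the small-building thresholds and the four-decimal-place geo-coordinate rule. The record layout and function names are illustrative assumptions, not GSA's actual V&V implementation, which spans many more categories.

```python
from decimal import Decimal

def decimal_places(coordinate: str) -> int:
    """Count the digits reported after the decimal point."""
    exponent = Decimal(coordinate).as_tuple().exponent
    return -exponent if exponent < 0 else 0

def flag_anomalies(asset: dict) -> list:
    """Return V&V-style flags for one asset record (thresholds from this report)."""
    flags = []
    if asset["use"] == "office" and asset["sq_ft"] < 400:
        flags.append("office building under 400 square feet")
    if asset["use"] == "warehouse" and asset["sq_ft"] < 64:
        flags.append("warehouse under 64 square feet")
    # FRPP precision rule: geo-coordinates need at least four decimal places.
    if min(decimal_places(asset["lat"]), decimal_places(asset["lon"])) < 4:
        flags.append("geo-coordinates too imprecise to map the asset")
    return flags

# A flag marks data as questionable, not wrong: the small DOE buildings
# discussed above were flagged this way and turned out to be correct.
example = {"use": "warehouse", "sq_ft": 60, "lat": "35.88", "lon": "-106.30"}
print(flag_anomalies(example))
```

Tuning these thresholds determines how much correct data agencies must spend time validating; set too broadly, the checks flag far more valid entries than errors, which is consistent with the 5 percent confirmation rate described above.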
However, GSA officials said they had not reviewed the anomaly categories or their thresholds to see if they consistently capture incorrect data. This approach puts the stated goals of the V&V process—which are to improve data accuracy, promote data consistency among the agencies, and enable OMB to measure data quality improvement—at risk. In the absence of better information about the validity of categories and thresholds, the current process for V&V is taking up limited agency resources without efficiently correcting errors in the data.
GSA and Agencies Withheld Information That Reduces the Completeness of the Public Database
GSA and reporting agencies withheld certain useful information from the public database in two ways, reducing the data's completeness and ultimately its utility. First, GSA withheld data from the public database without consulting agencies about the data's sensitivity. Second, selected agencies withheld information that was already publicly available or withheld similar types of information inconsistently within their agencies.
GSA Withheld Data from the Public Database
GSA chose to withhold 15 categories of data from the public database for all agencies. FASTA authorized the withholding of information from the public database for national security or procurement-related issues. GSA officials who manage the FRPP said that GSA does not have the security or intelligence expertise to issue guidance on national security issues. As a result, they sought input from the Interagency Security Committee (ISC) on what information to withhold. ISC reviewed the security risks of FRPP data and provided written recommendations in a memo to GSA in November 2017. Specifically, ISC recommended that certain categories of data on assets be withheld from the public database because of the security risk that they could pose individually or in combination. ISC also recommended that agencies use internal guidance on restricting the public release of real property information and ISC's mission criticality criteria to determine any individual real property assets to withhold entirely from the public database. GSA implemented ISC's first recommendation by withholding 15 FRPP data categories for all assets from the public database without consulting the relevant agencies on this decision, considering the specific sensitivity of these categories for all assets, or assessing the effect withholding them would have on the database. ISC officials acknowledged that the memo that they prepared for GSA could have been clearer as to ISC's intent that departments and agencies should consider the recommendations in making a final determination. According to ISC officials, they believed that implementation would involve GSA communicating these recommendations and leaving decisions on what to withhold to officials within individual departments and agencies who control real property assets. Specifically, the following five categories of data were among the 15 withheld by GSA:
property's/installation's name;
replacement value of an asset;
annual operating and maintenance costs for owned assets;
annual operating and maintenance costs for leased assets; and
breakdown of annual operating and maintenance costs (e.g., utilities costs, janitorial costs, sewage costs, etc.).
Because GSA did not consult with agencies on this decision, the agencies did not have an opportunity to consider whether the 15 data categories GSA withheld included information that is sensitive or already publicly available.
As a result, the public database is incomplete in ways that adversely affect users and limit agencies' public accountability for reporting accurate information. For example, identifying assets in the public database is difficult without the property's name—one of the data categories GSA withheld—especially given the insufficient location data in the database discussed earlier. Returning to the incomplete address example discussed earlier (NASA Goddard Space Flight Center), the public data also do not include the property's name, "Goddard Space Flight Center," leaving users with limited information to identify the buildings. As a result, someone using the public database cannot identify assets on NASA's Goddard Space Flight Center campus without using outside sources for additional information. (See table 1.) As discussed in the next section, we found that some of the information from these 15 excluded data categories, such as property names, is often already in the public sphere. For example, "Goddard Space Flight Center" and its address are clearly disclosed on NASA's public website, but GSA withheld the name for 215 NASA buildings at this address, including Goddard's public visitors' center. Using the public database alone, a member of the public would need to go through numerous steps to determine if assets are part of Goddard Space Flight Center and still have no way of being sure. (See fig. 6.) Moreover, third-party, private-sector stakeholders we spoke with, such as brokers, lessors, consultants, and a nonprofit organization that work in federal real property markets, noted that some of the data categories GSA withheld would be among the most useful to their work. For example, 10 of 14 stakeholders we spoke to said that financial data, such as operating costs and annual rent, were among the most useful FRPP data categories to their analyses of real property markets and business opportunities. Additionally, four stakeholders cited the property's name as among the most important data categories for their work in analyzing federal real property.
Agencies Withheld Publicly Available Information and Withheld Similar Assets Inconsistently, Making Analysis Difficult
While GSA withheld the 15 categories of data across all agencies, it allowed each agency to determine whether any specific assets should be withheld entirely from the public database, in accordance with ISC's second recommendation. ISC officials told us that this was appropriate because individual departments and agencies that control real property assets should determine what information to withhold. GSA provided agencies with guidance that explained its decision to withhold the 15 data categories and instructed agencies to consult ISC's mission criticality criteria and any additional internal agency criteria in determining what information to withhold from public release. ISC's mission criticality criteria provide a page-long list of uses of real property assets that warrant consideration for national security exclusion, but do not provide other instructions for agencies to consult while making decisions on what information to withhold.
Further, OMB's circular Management of Reporting and Data Integrity Risk instructs agencies to integrate a risk-based approach toward meeting reporting objectives, an approach that requires "management practices that identify, assess, respond, and report on risks." However, we found that our selected agencies did not consistently identify internal guidance to supplement GSA's instructions within their agencies. In September 2018, ISC recommended that GSA not withhold from the public database newly added data categories that provide information already in the public sphere. Additionally, the OPEN Government Data Act requires OMB to foster greater sharing, dissemination, and access to public information and issue guidance that, among other things, takes into account the requirement that data must be disclosed if it would otherwise be made available under a Freedom of Information Act request. For purposes of this report, we refer to this requirement as "assuming openness." However, GSA's instructions to agencies lacked specifics to help agencies apply a consistent, risk-based approach in determining which, if any, assets or asset-specific information should be withheld from public release. As a result, we found that some of the selected agencies withheld asset-related information from the public database that is available on their own public websites or from other official sources. Withholding information that is already publicly available unnecessarily reduces the completeness and utility of the public database that FASTA indicated should be comprehensive. For example:
DHS's Immigration and Customs Enforcement (ICE) withheld buildings at five of its publicly accessible service-processing centers that are shown on a detention facility locator mapping system on its own website. ICE officials told us that they did not consider what information is already publicly available when deciding what information to withhold from the public database.
FCC withheld all of its real property assets. FCC's own website and regulations, however, list the locations and functions of FCC offices.
The U.S. Coast Guard withheld information on its public recruiting offices and lighthouses that it advertises on its public website. All buildings and structures that were not specifically used for the purpose of aids to navigation were withheld from the public data set. As a result, public users can look up information on the Coast Guard's aids to navigation but cannot look up some of its publicly accessible locations, such as recruiting offices and lighthouses.
In contrast, DOE decided to withhold none of its 20,378 assets from the public database. According to a DOE official responsible for submitting data to FRPP, DOE does not have a specific process for assessing what properties to make public. However, it is aware that much of the information in the public database is also publicly available through other sources. Table 2 shows how selected agencies took different approaches to withholding information from the public database. Under risk-based criteria assuming openness (as mentioned earlier), agencies may consider whether information made public in one instance should be withheld in another instance. However, neither ISC's mission criticality criteria nor GSA's instructions addressed the issue of consistency within specific agencies. Specifically, we found that selected agencies withheld the same assets differently over time, and similar assets inconsistently.
Table 3 shows how reporting agencies made different decisions on whether to withhold the same types of assets. At times, some agencies withheld certain asset types that ISC's mission criticality criteria did not identify as warranting withholding, resulting in the withholding of almost 7,000 assets, such as parking structures and disposed assets. This practice led to inconsistency in whether these agencies' assets were included in the public database, limited transparency about the assets, and prevented users from fully analyzing federal real property assets in these categories. In other cases, selected agencies did not always follow written procedures and withheld similar assets inconsistently. For example:

DOI headquarters provided its bureaus with GSA's instructions on withholding assets, but individual bureaus applied the instructions differently. For example:

The Fish and Wildlife Service reports that it has 369 publicly accessible national wildlife refuges, but it withheld selected real property assets at 11 of them. The withheld assets, however, are the same types as the assets the Service disclosed at other refuges. For example, it reported all but 2 of the 447 restrooms and all but 10 of the 2,066 recreational structures on its national wildlife refuges. The Fish and Wildlife Service told us it will re-evaluate its withholding for the fiscal year 2019 FRPP database.

The National Park Service (NPS) reported that it has 374 publicly accessible national parks, monuments, memorials, historic sites, and recreation areas. NPS withheld some real property assets at 15 of those sites even though the withheld assets are the same types as those disclosed at other sites. For example, it reported all but 2 of the 1,045 service buildings at its sites.

NASA withheld assets at a centralized level, but headquarters officials told us that they have not established instructions or policies for these decisions. NASA officials told us that they withhold real property assets shared with agencies working in defense and/or national security, which led NASA to withhold 1,517 assets in fiscal year 2017. In fiscal year 2018, however, we found that NASA withheld all assets at certain field centers, causing the number of withheld assets to more than double, from 1,517 in fiscal year 2017 to 3,696 in fiscal year 2018.

Finally, our comparison of the fiscal year 2018 FRPP and public databases found that seven agencies did not identify whether data on 3,845 assets should be withheld, despite GSA guidance to do so for every asset. GSA included these assets in the public database without consulting agencies on the assets' sensitivity or the risks of releasing information on them. GSA officials said that these data should not have been accepted and that they had implemented controls to ensure that agencies identify whether data should be withheld. One way an analyst might screen for this kind of gap is sketched below.
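The following sketch, in Python, shows one way such a screen could work; it is a minimal illustration under stated assumptions, not GSA's actual process. The file name (frpp_fy2018.csv) and the column names (withhold_flag and reporting_agency) are hypothetical placeholders rather than fields from GSA's data dictionary.

import pandas as pd

# Read a hypothetical FRPP extract as text so that blank fields are
# preserved as missing values rather than silently converted.
frpp = pd.read_csv("frpp_fy2018.csv", dtype=str)

# Keep assets for which the reporting agency never indicated whether
# the record should be withheld from the public database.
unflagged = frpp[frpp["withhold_flag"].isna()]

# Count the unflagged assets by agency to show which agencies
# skipped the field.
summary = (
    unflagged.groupby("reporting_agency")
    .size()
    .sort_values(ascending=False)
)

print(f"{len(unflagged)} assets lack a withhold decision")
print(summary)

A check along these lines, run before each annual release, would allow GSA to return incomplete submissions to agencies instead of defaulting those assets into the public database.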
Data Presentation and Limited Stakeholder Awareness Hinder Usefulness of the Public Database

Data Presentation Issues Limit the Usefulness of the Public Database

It is difficult for a user of the public database to determine when assets are located on a secure installation that the public cannot access. For example, returning to the NASA Goddard Space Flight Center illustration from earlier in the report, assets located at the Space Flight Center are listed individually, with no indication that they are all located on a secure installation. The public database lists all 215 assets at the same location, Greenbelt Road in Greenbelt, MD, but provides no further indication that the assets are part of a larger, secure facility. (See fig. 7.)

Currently, GSA requires civilian agencies to report individual assets, including those on secure installations. Detailed, asset-specific information could be useful for government decision makers, and GSA applied this approach to the public database. However, asset-level information can cause challenges for users when the assets are located on secure installations because GSA withheld the installation names from the public database. Listing assets individually could also prompt fruitless public interest in inaccessible secure facilities. One expected use of the public database is for the private sector to identify possible locations for installing commercial telecommunications infrastructure, such as cell towers and antennas. Because this infrastructure cannot be installed on secure installations, the public database would be more useful to such companies if they could readily determine whether a potential location was on a secure installation. For example, officials at a secure installation we visited told us that reporting individual buildings does not make sense because there are few, if any, legitimate reasons for public interest in the individual assets on a secure installation.

FASTA required GSA to develop a comprehensive database and provide the public with access to it, but the act also recognized the importance of protecting national security. In that respect, a key organizational issue faced by GSA and agencies is how to present data for assets on campuses that are not accessible to the public. While non-disclosure is permitted, withholding this information may reduce the usefulness of the public database as a whole.

The Department of Defense (DOD) takes a different approach for its secure military bases in the public database. According to GSA officials, DOD submits a separate summary-level report for public release. This summary-level reporting shields sensitive information and alerts users that those assets are not accessible or of use to private-sector interests. Civilian agencies' assets located on closed federal installations are similar to those on DOD bases in that the public may have less interest in or reason for knowing about assets that are not available to it. Officials from NASA and two DHS bureaus said that the installation-level approach to reporting would be more appropriate for their circumstances than the asset-level reporting currently applied to civilian agencies and would likely allow them to release more information to the public. Officials from DHS added that they already release some information to the public on their website. We found that other selected agencies also release information about secure installations on their public websites, including NASA and its Goddard Space Flight Center.

Stakeholders' Lack of Awareness of the Public Database and Confusion with Other Databases Limit Usefulness

In our interviews with 14 private sector stakeholders, we found varying levels of awareness and understanding of GSA's publicly available real property datasets. Of the 14 private sector stakeholders we interviewed, eight told us that they were aware of the public database. Of these, five told us they tried to use it.
Several selected stakeholders—regardless of whether or not they had used the database—cited concerns about the usefulness of the data, specifically its reliability, completeness, formatting, and organization. For example, officials from one brokerage firm told us that, while the information could theoretically be useful for agency consolidation efforts, the database was too cumbersome to analyze for that purpose. Similarly, officials with a federal real estate consulting firm told us that they do not refer customers to the public database because they believe that the data are not complete, correct, or intuitive. Moreover, one member of a federal real property trade association noted serious limitations in the database's completeness and organization. In addition, one user said that he had hoped the public release would allow better access to real property data but that the poor quality, completeness, and organization of the data mean access is no better than it was before the release.

Further, six of the private sector stakeholders we interviewed were not aware of the public database, including a stakeholder who confused it with GSA's Lease Inventory database. The lack of a single location on GSA's website with information about all of GSA's real property databases may contribute to the awareness, confusion, and usefulness issues these stakeholders expressed. Specifically, public access to the FRPP public database, GSA's Lease Inventory database, and two other publicly available real property databases is found in different places on GSA's website:

Public FRPP: http://publicfrppdata.realpropertyprofile.gov (managed by GSA's Office of Government-wide Policy)

GSA lease inventory: https://www.gsa.gov/real-estate/real-estate-services/leasing-policy-procedures/lease-inventory (managed by GSA's Office of Leasing)

GSA inventory of owned and leased properties: https://www.gsa.gov/tools/buildings-real-estate-etools/inventory-of-owned-and-leased-properties (managed by GSA's Public Buildings Service)

GSA disposal inventory: https://disposal.gsa.gov/s/ (managed by GSA's Office of Property Disposal)

The OPEN Government Data Act requires the Administrator of GSA to maintain a single public interface online as a point of entry dedicated to sharing agency data assets with the public. While the databases serve different purposes, some asset-level data, such as location or size, are similar. According to a GSA official, these databases are operated by different offices within GSA, a situation that poses challenges to listing them on a consolidated webpage. Nevertheless, GSA officials agreed that there could be clearer links and said that they plan to add them based on our findings. Without a consolidated webpage or clear links showing how the databases relate to each other and how to access each one, users may not be aware of which databases exist to search for assets that could be available to the public.

Data Presentation Issues May Affect the Level of Use

The public database's presentation issues, combined with stakeholder confusion and lack of awareness, could contribute to the low number of users accessing the database compared with another GSA-managed real property database. GSA data indicate that users accessed civilian agency data from the public database 147 times per month on average from December 2017 through July 2019 and, in some months, fewer than 10 times.
However, according to a GSA official, the number of times users access the public database through the GSA website does not necessarily reflect the extent to which people use the data. The official explained that, because GSA issues the data only once a year, users need to access and save the data only once for use in a given year, and that GSA usually sees a peak in access when it publishes its annual update to the database. As indicated in figure 8, there was a peak in users accessing the database when GSA first issued the 2016 data in December 2017; again in March and April 2018, when GSA published the 2017 data (28 and 162 times, respectively); and in June 2019, when GSA published the 2018 data (170 times). In comparison, users access another real property database, GSA's Inventory of Owned and Leased Property database—which is updated weekly—more often than they access the public database. Users access the Inventory of Owned and Leased Property database to search for properties controlled by GSA. Specifically, since the public database was released in December 2017, the public has accessed GSA's Inventory of Owned and Leased Property almost 10 times more per month, on average, than the public database (see fig. 8).

Conclusions

Federal agencies spend billions of dollars annually to operate and maintain hundreds of thousands of real property assets. GSA's public database, extracted from FRPP data, is intended to be a comprehensive, descriptive database of federal real property. Through the database, the public should be able to learn about federal assets, whether users are conducting research or are interested in potential uses such as leasing or purchasing. Issues with the data, however, undermine these uses. GSA has taken a number of actions to improve the accuracy of the data, such as implementing the V&V process for identifying and correcting possible errors. But until GSA has better processes to ensure the accuracy of street address information and to identify anomalies, the public will continue to lack the type of database most useful to it. Moreover, the absence of a risk-based, consistent approach for withholding assets from the public database or reporting assets to it further erodes the database's utility. Finally, utilization of the database is low; GSA's choices about how the database information is presented and how users find out about and access the public database and other real property databases may contribute to this lack of use. Unless GSA improves the accuracy, completeness, and usefulness of the public database, its intended benefits—to the public and the federal government—will remain unrealized.

Recommendations

We are making the following six recommendations to GSA:

The Administrator of GSA should coordinate with agencies to ensure that street address information in the public database is complete and correctly formatted. (Recommendation 1)

The Administrator of GSA should coordinate with agencies to review V&V anomaly categories to better target incorrect data. (Recommendation 2)

The Administrator of GSA should work in consultation with agencies to determine which, if any, data should be withheld from public release. (Recommendation 3)

The Administrator of GSA should instruct each agency to apply a consistent, risk-based approach in determining which, if any, assets or asset-specific information should be withheld from public release. (Recommendation 4)
The Administrator of GSA should allow agencies to provide summary data for secure installations. (Recommendation 5)

The Administrator of GSA should link all of GSA's publicly available real property data sources. (Recommendation 6)

Agency Comments and Our Evaluation

We provided a draft of this report to GSA, DHS, DOE, DOI, FCC, and NASA for comment. GSA provided written comments, which are reprinted in appendix II and summarized below. DOI provided technical comments via email, which we incorporated as appropriate. In its email comments, DOI also suggested revisions to two recommendations, which we clarified as appropriate. DHS and NASA also provided technical comments via email, which we incorporated as appropriate. DOE and FCC told us they had no comments.

GSA agreed with five of our six recommendations but disagreed with our third recommendation. GSA wrote that allowing agencies to unilaterally determine which categories of data to withhold from the public would not be useful and would complicate comparisons among agencies. We did not intend for our recommendation to allow agencies to decide without consulting GSA, and we have clarified the recommendation accordingly. We continue to believe this recommendation, as clarified, is valid. As we reported, GSA currently withholds 15 variables—categories of data—for all federal assets, including the name of every federal building and structure. While this approach is consistent for all assets, it reduces the overall usefulness of the data by withholding information that federal agencies already make public. In addition, the ISC told us that the landholding agencies, not GSA, are in the best position to know what data about their assets are sensitive. We amended the recommendation by removing the reference to categories of data and adding that GSA work in consultation with agencies to determine what data to withhold. This change would create a consistent way for agencies to release useful data while withholding sensitive data for individual assets, a step they already take by withholding assets from the public database. GSA plans to work with the ISC and federal agencies to review related guidance and modify it as needed. We support these plans.

In addition, DOI suggested in email comments that we revise our second recommendation to include coordinating with agencies to review V&V anomaly categories to better target incorrect data. Our original recommendation did not preclude coordination, and since we agree that such coordination would help improve the V&V process, we clarified the recommendation accordingly.

We are sending copies of this report to the appropriate congressional committees, the Administrator of the General Services Administration, the Acting Secretary of Homeland Security, the Secretary of Energy, the Secretary of the Interior, the Chairman of the Federal Communications Commission, the Administrator of the National Aeronautics and Space Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Categories of Data Withheld from the Public Database

Costs related to the everyday functions of an asset
Code to identify an installation (i.e., buildings, structures, land, or any combination of these)

Code to identify a part of an installation (i.e., buildings, structures, land, or any combination of these)

Building name or the name of an entire installation (such as an agency campus)

Total number of full- and part-time federal employees

Total number of full- and part-time contract employees

Identifies whether an asset is part of a field office (any location that is not the headquarters location for the agency)

Appendix II: Comments from the U.S. General Services Administration

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Keith Cunningham (Assistant Director), Lynn Filla-Clark (Analyst-in-Charge), Melissa Bodeau, George Depaoli, James Duke, Rami Khalfani, Terence Lam, John Mingus, Joshua Ormond, Crystal Wesco, and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study

The lack of reliable data on federal assets is one of the main reasons federal real property management remains on GAO's High-Risk List. In 2016, legislation required GSA to publish a single, comprehensive, and descriptive database of federal real property that would be available to the public. The database could be used for research and other potential applications. GAO was asked to study the public database. This report assesses (1) GSA's efforts to improve the reliability of FRPP's data and the public database, (2) the public database's completeness, and (3) the presentation of the data in the public database.

GAO reviewed federal laws, documents, and data, including GSA's fiscal years 2017 and 2018 FRPP and public databases. GAO interviewed officials at GSA and at six federal agencies, selected because, among other things, they had locations with enough questionable data in the public database to analyze, and GAO studied assets in Washington, D.C., Illinois, and New Mexico. GAO also interviewed selected stakeholders involved in federal real property management, such as real estate brokers.

What GAO Found

The General Services Administration (GSA) has worked in recent years to improve the reliability of the Federal Real Property Profile (FRPP), which tracks federal real property assets. However, numerous errors in the database were carried into the public version. GSA extracted data from the FRPP's 398,000 civilian federal assets to create a public database to be used, for example, by researchers and real estate developers. However, GSA's data verification process did not address key errors. GAO found that 67 percent of the street addresses in the public database were incomplete or incorrectly formatted. For example, the database lists "Greenbelt Road" as the address for over 200 buildings at NASA's Goddard Space Flight Center, but the road stretches over 6.3 miles, thereby reducing a user's ability to locate specific buildings.

The public database is not complete because GSA and selected agencies decided not to provide certain useful information. Specifically, GSA withheld information about assets without consulting the agencies managing them and allowed agencies to withhold information that is already publicly available. For example, GSA withheld the name "Goddard Space Flight Center" from the public database, but NASA's website lists this name and the Center's location. Unnecessarily withholding information limits the database's utility and undermines analysis.

The public database's usefulness is further limited by how GSA presents the information. Because the database does not identify whether an asset is part of a secure installation, the public cannot tell whether assets, such as the unnamed buildings at Goddard, are accessible to the public. Unless GSA improves the public database's accuracy, completeness, and usefulness, its benefits may not be realized.

What GAO Recommends

GAO is making six recommendations to GSA, including improving the accuracy of the database, consulting with agencies on information withheld about their assets, and improving the public database's presentation. GSA agreed with five of the recommendations. To address GSA's comments, GAO clarified the recommendation on withholding information on agencies' assets.